
k8s-sidecar's Introduction


What?

This is a docker container intended to run inside a Kubernetes cluster to collect config maps with a specified label and store the included files in a local folder. It can also send an HTTP request to a specified URL after a configmap change. Its main purpose is to run as a sidecar container that supplies an application with information from the cluster.

Why?

This is our simple way to provide files from configmaps or secrets to a service and keep them updated during runtime.

How?

Run the container created by this repo together with your application in a single pod with a shared volume. Specify which label should be monitored and where the files should be stored. By adding additional env variables, the container can send an HTTP request to a specified URL.

Where?

Images are available at multiple registries; all are identical multi-arch images built for amd64, arm64, arm/v7, ppc64le and s390x.

Features

  • Extract files from config maps and secrets
  • Filter based on label
  • Update/Delete on change of configmap or secret
  • Enforce unique filenames
  • CI tests for k8s v1.21-v1.29
  • Support binaryData for both Secret and ConfigMap kinds
    • Binary data content is base64 decoded before generating the file on disk
    • Values can also be base64 encoded URLs that download binary data, e.g. executables
      • The key in the ConfigMap/Secret must end with ".url"

Usage

An example of a simple deployment can be found in example.yaml. Depending on the cluster setup, you may have to grant yourself admin rights first:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)

One can override the default directory that files are copied into using a configmap annotation. The annotation name is defined by the environment variable FOLDER_ANNOTATION (if not set, it defaults to k8s-sidecar-target-directory). The sidecar will attempt to create directories defined by configmaps if they are not present. Example configmap annotation:

metadata:
  annotations:
    k8s-sidecar-target-directory: "/path/to/target/directory"
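For illustration, resolving the annotation could look like this (a hypothetical helper, not the sidecar's actual code; per the FOLDER_ANNOTATION docs, relative annotation values are resolved against FOLDER while absolute values are used as-is):

```python
import os

def resolve_target_folder(default_folder, annotation_value):
    # No annotation on the configmap: fall back to FOLDER.
    if not annotation_value:
        return default_folder
    # Absolute annotation values are used as-is.
    if os.path.isabs(annotation_value):
        return annotation_value
    # Relative values are resolved relative to FOLDER.
    return os.path.normpath(os.path.join(default_folder, annotation_value))
```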

If the filename ends with the .url suffix, its content is treated as a URL from which the target file's contents are downloaded.
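A minimal sketch of this key-handling rule (illustrative only, not the sidecar's actual code; it assumes, for illustration, that the ".url" suffix is stripped from the filename written to disk):

```python
def split_url_key(key: str):
    """Return (filename, is_url) for a ConfigMap/Secret data key.

    Keys ending in ".url" have their *content* fetched from the URL they
    contain; the suffix itself is assumed not to appear in the target file.
    """
    suffix = ".url"
    if key.endswith(suffix):
        return key[: -len(suffix)], True
    return key, False
```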

Configuration CLI Flags

| name | description | required | default | type |
|------|-------------|----------|---------|------|
| --req-username-file | Path to a file containing the username to use for basic authentication for requests to REQ_URL and for *.url triggered requests. Overrides REQ_USERNAME | false | - | string |
| --req-password-file | Path to a file containing the password to use for basic authentication for requests to REQ_URL and for *.url triggered requests. Overrides REQ_PASSWORD | false | - | string |

Configuration Environment Variables

| name | description | required | default | type |
|------|-------------|----------|---------|------|
| LABEL | Label that should be used for filtering | true | - | string |
| LABEL_VALUE | The value of the label to filter your resources on. Leave unset to match any value | false | - | string |
| FOLDER | Folder where the files should be placed | true | - | string |
| FOLDER_ANNOTATION | The annotation the sidecar will look for in configmaps to override the destination folder for files. The value can be an absolute or a relative path; relative paths are relative to FOLDER | false | k8s-sidecar-target-directory | string |
| NAMESPACE | Comma-separated list of namespaces. If specified, the sidecar will search for config-maps inside these namespaces. It's also possible to specify ALL to search in all namespaces | false | namespace in which the sidecar is running | string |
| RESOURCE | Resource type monitored by the sidecar. Options: configmap, secret, both | false | configmap | string |
| METHOD | If set to LIST, the sidecar will just list config-maps/secrets and exit. With SLEEP it will list all config-maps/secrets, then sleep for SLEEP_TIME seconds. Anything else will continuously watch for changes (see https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes) | false | - | string |
| SLEEP_TIME | How many seconds to wait before updating config-maps/secrets when using the SLEEP method | false | 60 | integer |
| REQ_URL | URL to send a request to after a configmap/secret has been reloaded | false | - | URI |
| REQ_METHOD | Request method, GET or POST, for requests to REQ_URL | false | GET | string |
| REQ_PAYLOAD | If you use REQ_METHOD=POST you can also provide a JSON payload | false | - | json |
| REQ_RETRY_TOTAL | Total number of retries to allow for any HTTP request (*.url triggered requests, requests to REQ_URL and k8s API requests) | false | 5 | integer |
| REQ_RETRY_CONNECT | How many connection-related errors to retry on for any HTTP request (*.url triggered requests, requests to REQ_URL and k8s API requests) | false | 10 | integer |
| REQ_RETRY_READ | How many times to retry on read errors for any HTTP request (*.url triggered requests, requests to REQ_URL and k8s API requests) | false | 5 | integer |
| REQ_RETRY_BACKOFF_FACTOR | A backoff factor to apply between attempts after the second try for any HTTP request (*.url triggered requests, requests to REQ_URL and k8s API requests) | false | 1.1 | float |
| REQ_TIMEOUT | How many seconds to wait for the server to send data before giving up, for *.url triggered requests or requests to REQ_URL (does not apply to k8s API requests) | false | 10 | float |
| REQ_USERNAME | Username to use for basic authentication for requests to REQ_URL and for *.url triggered requests | false | - | string |
| REQ_PASSWORD | Password to use for basic authentication for requests to REQ_URL and for *.url triggered requests | false | - | string |
| REQ_BASIC_AUTH_ENCODING | Encoding to use for the basic-auth username and password (e.g. utf-8) | false | latin1 | string |
| SCRIPT | Absolute path to a script to execute after a configmap got reloaded. It runs before calls to REQ_URL. If the file is not executable it will be passed to sh; otherwise it's executed as is. Shebangs known to work are #!/bin/sh and #!/usr/bin/env python | false | - | string |
| ERROR_THROTTLE_SLEEP | How many seconds to wait before watching resources again when an error occurs | false | 5 | integer |
| SKIP_TLS_VERIFY | Set to true to skip TLS verification for kube API calls | false | - | boolean |
| REQ_SKIP_TLS_VERIFY | Set to true to skip TLS verification for all HTTP requests (except those to the kube API server, which are controlled by SKIP_TLS_VERIFY) | false | - | boolean |
| UNIQUE_FILENAMES | Set to true to produce unique filenames where duplicate data keys exist between ConfigMaps and/or Secrets within the same or multiple namespaces | false | false | boolean |
| DEFAULT_FILE_MODE | The default file system permission for every file. Use three digits (e.g. '500', '440', ...) | false | - | string |
| KUBECONFIG | If this is given and points to a file, or ~/.kube/config is mounted, the k8s config will be loaded from this file; otherwise the in-cluster k8s configuration is tried | false | - | string |
| ENABLE_5XX | Set to true to enable pulling of 5XX response content from the config map. Used when the filename ends with the .url suffix (see the *.url feature above) | false | - | boolean |
| WATCH_SERVER_TIMEOUT | Polite request to the server, asking it to cleanly close watch connections after this number of seconds (#85) | false | 60 | integer |
| WATCH_CLIENT_TIMEOUT | If you have a network outage dropping all packets with no RST/FIN, this is how many seconds your client waits on watches before realizing & dropping the connection. You can keep this number low (#85) | false | 66 | integer |
| IGNORE_ALREADY_PROCESSED | Ignore already processed resource versions, avoiding repeated checks on the same unchanged resource. Requires Kubernetes API >= v1.19 | false | false | boolean |
| LOG_LEVEL | Set the logging level (DEBUG, INFO, WARN, ERROR, CRITICAL) | false | INFO | string |
| LOG_FORMAT | Set the log format (JSON or LOGFMT) | false | JSON | string |
| LOG_TZ | Set the log timezone (LOCAL or UTC) | false | LOCAL | string |
| LOG_CONFIG | Log configuration file path. If not configured, the default log config is used for backward compatibility, and LOG_LEVEL, LOG_FORMAT and LOG_TZ apply. Refer to Python logging for the configuration format; for a sample configuration file see examples/example_logconfig.yaml | false | - | string |
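The REQ_RETRY_* settings mirror urllib3-style retry behaviour. As a rough illustration (an assumption about the exact formula, not a guarantee of the sidecar's internals), urllib3's documented backoff sleeps factor * 2^(n-1) seconds before retry n, with no sleep before the first retry:

```python
def backoff_delays(factor, retries):
    """Sketch of urllib3-style exponential backoff.

    Shows what a REQ_RETRY_BACKOFF_FACTOR such as the default 1.1 means in
    practice. Assumption: the first retry happens immediately, then each
    subsequent retry n waits factor * (2 ** (n - 1)) seconds.
    """
    return [0.0 if n == 1 else factor * (2 ** (n - 1))
            for n in range(1, retries + 1)]
```

With the default REQ_RETRY_TOTAL=5 and factor 1.1, the waits grow as 2.2 s, 4.4 s, 8.8 s and 17.6 s after the immediate first retry.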

k8s-sidecar's People

Contributors

aslafy-z, avihaisam, axdotl, b13n1u, carlosjgp, chalapat, christiangeie, crenshaw-dev, dependabot[bot], eirc, eloo, fbrousse, fr-ser, holmesb, jekkel, jkroepke, joshm91, lorenzo-biava, mario-steinhoff-gcx, migueleliasweb, mmiller1, monotek, naseemkullah, nhuray, pulledtim, raghavtan, svolland-csgroup, tomrk-esteam8, venkatbvc, wasim-nihal


k8s-sidecar's Issues

Support for comma separated namespaces

It would be nice if NAMESPACE would take a comma separated list of namespaces:

export NAMESPACE=foo,bar

This would scan namespaces foo AND bar for ConfigMaps.

Connection Refused at List - K8S Client using localhost:80 instead of API Server IP:PORT

Just deployed kube-prometheus-stack 13.13.1, which includes grafana chart 6.4.*, which deployed kiwigrid/k8s-sidecar:1.10.6.
The init container boots up but fails with a connection refused error:

[2021-03-02 21:50:50] Starting collector
[2021-03-02 21:50:50] No folder annotation was provided, defaulting to k8s-sidecar-target-directory
[2021-03-02 21:50:50] Selected resource type: ('secret', 'configmap')
[2021-03-02 21:50:50] Config for cluster api loaded...
[2021-03-02 21:50:50] Unique filenames will not be enforced.
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 159, in _new_conn
    conn = connection.create_connection(
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 84, in create_connection
    raise err
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 74, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 670, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 392, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/local/lib/python3.8/http/client.py", line 1255, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/local/lib/python3.8/http/client.py", line 1301, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.8/http/client.py", line 1250, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.8/http/client.py", line 1010, in _send_output
    self.send(msg)
  File "/usr/local/lib/python3.8/http/client.py", line 950, in send
    self.connect()
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 187, in connect
    conn = self._new_conn()
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 171, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f136d514670>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/sidecar.py", line 73, in <module>
    main()
  File "/app/sidecar.py", line 65, in main
    listResources(label, labelValue, targetFolder, url, method, payload,
  File "/app/resources.py", line 72, in listResources
    ret = getattr(v1, _list_namespaced[resource])(namespace=namespace, label_selector=labelSelector)
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 15938, in list_namespaced_secret
    return self.list_namespaced_secret_with_http_info(namespace, **kwargs)  # noqa: E501
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 16049, in list_namespaced_secret_with_http_info
    return self.api_client.call_api(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 373, in request
    return self.rest_client.GET(url,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 239, in GET
    return self.request("GET", url,
  File "/usr/local/lib/python3.8/site-packages/kubernetes/client/rest.py", line 212, in request
    r = self.pool_manager.request(method, url,
  File "/usr/local/lib/python3.8/site-packages/urllib3/request.py", line 75, in request
    return self.request_encode_url(
  File "/usr/local/lib/python3.8/site-packages/urllib3/request.py", line 97, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File "/usr/local/lib/python3.8/site-packages/urllib3/poolmanager.py", line 336, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 754, in urlopen
    return self.urlopen(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 754, in urlopen
    return self.urlopen(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 754, in urlopen
    return self.urlopen(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 726, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/retry.py", line 446, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /api/v1/namespaces/telemetry-system/secrets?labelSelector=grafana_datasource (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f136d514670>: Failed to establish a new connection: [Errno 111] Connection refused'))

Looking at the init container spec, it only starts up with this configuration:

    Environment:
      METHOD:           LIST
      LABEL:            grafana_datasource
      FOLDER:           /etc/grafana/provisioning/datasources
      RESOURCE:         both
      SKIP_TLS_VERIFY:  true
    Mounts:
      /etc/grafana/provisioning/datasources from sc-datasources-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-prometheus-stack-grafana-token-j2kzz (ro)

Unsure how to make it use the correct address; it seems like it should be using the in-cluster connection, since "Config for cluster api loaded" was logged.

Avoid collision if same filename is used in different ConfigMaps

Based on my experience, if the same filename is used in multiple ConfigMaps, the sidecar does not behave correctly, as both resources end up creating/removing the same file.

In our case (prometheus-operator) this is a very frequent scenario, because people tend to create new dashboards by copying an existing one and then replacing the actual grafana dashboard JSON payload.

While this can be seen as a client-side issue, the sidecar would be more robust if it allowed enforcing isolation by making sure that individual resources cannot collide, for example by prepending [namespace]_[name]_ to the filename created in the target folder. Of course this is only applicable to situations where the created filename can be arbitrary and the sidecar is thus free to modify it.
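A sketch of the isolation proposed here. The exact naming scheme is hypothetical; the released UNIQUE_FILENAMES option addresses the same problem but may use a different format:

```python
def unique_filename(namespace, resource_name, key):
    """Prefix a data key with its namespace and resource name so that
    identical keys from different ConfigMaps/Secrets cannot collide.
    (Illustrative format only.)"""
    return f"{namespace}_{resource_name}_{key}"
```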

Got this sqlite vulnerability when scanned with Twistlock

Vulnerabilities
---------------
Image                          ID                  CVE               Package    Version      Severity    Status                CVSS
-----                          --                  ---               -------    -------      --------    ------                ----
kiwigrid/k8s-sidecar:latest    234ccff9537fa5e5    CVE-2020-11655    sqlite     3.30.1-r1    high        fixed in 3.30.1-r2    7.5

Support Binary Files (gzip files, etc)

Hi

This project looks just like the thing I need to load my config into a shared volume for my application.

Unfortunately some of my files can be large (> 1-2 MB), so they cannot be created. I know this seems a little too much for some text config, but at the moment I cannot reduce the content, so I thought I would gzip the files and update my application to decompress those that need it.

Would it be possible to allow binary files to be loaded and saved using this project, specifically support compressed files (gzip, etc)?
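For reference, the binaryData support listed in the Features section covers this use case: Kubernetes stores binaryData values base64-encoded, and the sidecar base64-decodes them before writing the file, so a gzipped payload survives the round trip intact. A self-contained sketch of that round trip:

```python
import base64
import gzip

# A compressed payload, as an application might store it in binaryData.
original = b'{"title": "big dashboard"}'
payload = gzip.compress(original)

# Kubernetes stores binaryData values base64-encoded ...
encoded = base64.b64encode(payload).decode("ascii")

# ... and decoding restores the exact original bytes for the file on disk.
decoded = base64.b64decode(encoded)
assert gzip.decompress(decoded) == original
```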

Build Pipeline

Currently the k8s-sidecar container is built by Docker Hub automated builds. This was initially a fast and easy approach, but tests must be included too. Therefore a CircleCI pipeline shall be added.

Offer for help / Feature request: Read/Watch secrets

Hello,

I wanted to implement some functionality in an upstream project (the grafana helm chart) and was looking for the ability to watch secrets as well as configmaps.

Currently this project does not seem to support that, but I would appreciate this functionality very much. I can also offer to implement it, in case this project is still active and this functionality fits into the scheme of things.

So... Want me to take a crack at enabling secret watching? I think this should not be a breaking change.

Multi-arch image

Would it be possible to provide multi-arch images (especially amd64 and arm64 in my case) on DockerHub? This image is used in prometheus-operator, and I am trying to deploy it on a mixed-arch cluster.

Sidecar crashes after a couple of days working

Hello everyone, I hope you can help me. My sidecar container has gone down after a few days running and I do not understand why.
Error: Back-off restarting failed container
Image: shadwell/k8s-sidecar:0.0.3
Full description of the sidecar:

/usr/local/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.2) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
2019-06-27 09:38:03 INFO     Starting config map collector
2019-06-27 09:38:03 INFO     label is: jenkins-jenkins-config
2019-06-27 09:38:03 INFO     targetFolder is: /var/jenkins_home/casc_configs
2019-06-27 09:38:03 INFO     Config for cluster api loaded...
2019-06-27 09:38:03 INFO     namespace is: jenkins
2019-06-27 09:38:03 INFO     ssh_port is: 1044
2019-06-27 09:38:03 INFO     admin_user is: saJenkinst02
2019-06-27 09:38:03 INFO     Jenkins is contactable, continuing.
2019-06-27 09:38:04 INFO     Configmap with label found
2019-06-27 09:38:04 INFO     Working on configmap jenkins/jenkins-jenkins-config-welcome-message
2019-06-27 09:38:04 INFO     File in configmap welcome-message.yaml ADDED
/usr/local/lib/python3.6/site-packages/paramiko/kex_ecdh_nist.py:39: CryptographyDeprecationWarning: encode_point has been deprecated on EllipticCurvePublicNumbers and will be removed in a future version. Please use EllipticCurvePublicKey.public_bytes to obtain both compressed and uncompressed point encoding.
  m.add_string(self.Q_C.public_numbers().encode_point())
/usr/local/lib/python3.6/site-packages/paramiko/kex_ecdh_nist.py:96: CryptographyDeprecationWarning: Support for unsafe construction of public numbers from encoded data will be removed in a future version. Please use EllipticCurvePublicKey.from_encoded_point
  self.curve, Q_S_bytes
/usr/local/lib/python3.6/site-packages/paramiko/kex_ecdh_nist.py:111: CryptographyDeprecationWarning: encode_point has been deprecated on EllipticCurvePublicNumbers and will be removed in a future version. Please use EllipticCurvePublicKey.public_bytes to obtain both compressed and uncompressed point encoding.
  hm.add_string(self.Q_C.public_numbers().encode_point())

Traceback (most recent call last):
  File "/app/sidecar.py", line 191, in <module>
    main()
  File "/app/sidecar.py", line 186, in main
    ssh_port)
  File "/app/sidecar.py", line 124, in watchForChanges
    jenkinsReloadConfig(admin_private_key, admin_user, ssh_port, logger)
  File "/app/sidecar.py", line 22, in jenkinsReloadConfig
    ssh_client.connect('127.0.0.1', port=ssh_port, username=admin_user, pkey=private_key)
  File "/usr/local/lib/python3.6/site-packages/paramiko/client.py", line 437, in connect
    passphrase,
  File "/usr/local/lib/python3.6/site-packages/paramiko/client.py", line 749, in _auth
    raise saved_exception
  File "/usr/local/lib/python3.6/site-packages/paramiko/client.py", line 649, in _auth
    self._transport.auth_publickey(username, pkey)
  File "/usr/local/lib/python3.6/site-packages/paramiko/transport.py", line 1507, in auth_publickey
    return self.auth_handler.wait_for_response(my_event)
  File "/usr/local/lib/python3.6/site-packages/paramiko/auth_handler.py", line 250, in wait_for_response
    raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.

Missing releases on Github

I can see two releases on Docker hub (0.1.20 being the latest one) which don't have a GitHub release. Can the releases still be created or can a new release be made which has both?

sidecar version 0.5.0 crashes with API Server Error. Which version is stable for this issue?

log is DEPRECATED and will be removed in a future version. Use logs instead.
/usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.25.1) or chardet (3.0.4) doesn't match a supported version!
  RequestsDependencyWarning)
Starting config map collector
Config for cluster api loaded...
Traceback (most recent call last):
  File "sidecar.py", line 96, in <module>
    main()
  File "sidecar.py", line 92, in main
    watchForChanges(label, targetFolder, url, method, payload)
  File "sidecar.py", line 51, in watchForChanges
    for event in stream:
  File "/usr/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 128, in stream
    resp = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11854, in list_namespaced_config_map
    (data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11957, in list_namespaced_config_map_with_http_info
    collection_formats=collection_formats)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 321, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 342, in request
    headers=headers)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in GET
    query_params=query_params)
  File "/usr/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Audit-Id': '335c1795-27f0-49b5-8020-2a16ba5a8fc9', 'Content-Type': 'application/json', 'Date': 'Mon, 26 Aug 2019 15:40:03 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \"None\": strconv.ParseUint: parsing \"None\": invalid syntax","code":500}\n'

sidecar crashes with "message":"resourceVersion: Invalid value: \\"None\\"

Hi, is there a workaround for the problem mentioned here (apart from reverting to v0.0.3 of sidecar):
helm/charts#9136 (comment)

sidecar continually crashes with

File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Mon, 10 Dec 2018 10:22:35 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'

Re-created ConfigMaps are mistakenly deleted

Creating a ConfigMap, deleting it, then re-creating it leads to random deletion of files.

It seems kubernetes has a mechanism that replays a compacted transaction log (see kubernetes-client/python#693). The lack of control in k8s-sidecar leads to deletion of a file that should not be deleted.

k8s-sidecar watches for any DELETED notification and blindly deletes the attached file without checking that the DELETED object is current.
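A guard along these lines would prevent the blind delete (illustrative sketch, not the project's actual fix; it also simplifies by comparing resourceVersion numerically, which Kubernetes officially treats as an opaque string):

```python
class StaleEventFilter:
    """Track the newest resourceVersion seen per object, and flag watch
    events that refer to an older, already-superseded version, such as a
    DELETED event replayed from compacted history after a re-create."""

    def __init__(self):
        self.latest = {}  # (namespace, name) -> last seen resourceVersion

    def is_current(self, namespace, name, resource_version):
        key = (namespace, name)
        rv = int(resource_version)  # simplification: rv is opaque in k8s
        if rv < self.latest.get(key, 0):
            return False  # stale event; do not touch files for it
        self.latest[key] = rv
        return True
```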

This is causing the following issue : helm/charts#13362

====================================
Original issue as opened:

Using this sidecar container in prom-operator (came with it) and trying to understand why my dashboards disappear after a few minutes/hours.

sidecar environment :

    - name: LABEL
      value: grafana_dashboard
    - name: FOLDER
      value: /tmp/dashboards

At the point my dashboards disappeared :

$FOLDER Directory is empty :

$ k exec -it  prom-grafana-7896944b64-qw52f -c grafana-sc-dashboard bash
I have no name!@prom-grafana-7896944b64-qw52f:/app$ ls -al /tmp/dashboards
total 0
drwxrwsrwx. 2 root  472  6 Apr 26 20:21 .
drwxrwxrwt. 1 root root 24 Apr 26 19:47 ..

But ConfigMaps are present :

$ k get cm -l grafana_dashboard
NAME                                     DATA   AGE
prom-etcd                                1      21h
prom-k8s-cluster-rsrc-use                1      21h
prom-k8s-coredns                         1      21h
prom-k8s-node-rsrc-use                   1      21h
prom-k8s-resources-cluster               1      21h
prom-k8s-resources-namespace             1      21h
prom-k8s-resources-pod                   1      21h
prom-k8s-resources-workload              1      21h
prom-k8s-resources-workloads-namespace   1      21h
prom-nodes                               1      21h
prom-persistentvolumesusage              1      21h
prom-pods                                1      21h
prom-statefulset                         1      21h

Here is the full sidecar log :

Starting config map collector
No folder annotation was provided, defaulting to k8s-sidecar-target-directory
Config for cluster api loaded...
Working on configmap monitoring/prom-k8s-resources-pod
Configmap with label found
File in configmap k8s-resources-pod.json ADDED
Working on configmap monitoring/prom-k8s-resources-workloads-namespace
Configmap with label found
File in configmap k8s-resources-workloads-namespace.json ADDED
Working on configmap monitoring/prom-k8s-resources-workload
Configmap with label found
File in configmap k8s-resources-workload.json ADDED
Working on configmap monitoring/prometheus-prom-prometheus-rulefiles-0
Working on configmap monitoring/prom-grafana-datasource
Working on configmap monitoring/prom-grafana-test
Working on configmap monitoring/prom-grafana
Working on configmap monitoring/prom-k8s-cluster-rsrc-use
Configmap with label found
File in configmap k8s-cluster-rsrc-use.json ADDED
Working on configmap monitoring/prom-persistentvolumesusage
Configmap with label found
File in configmap persistentvolumesusage.json ADDED
Working on configmap monitoring/prom-k8s-resources-namespace
Configmap with label found
File in configmap k8s-resources-namespace.json ADDED
Working on configmap monitoring/prom-statefulset
Configmap with label found
File in configmap statefulset.json ADDED
Working on configmap monitoring/prom-k8s-resources-cluster
Configmap with label found
File in configmap k8s-resources-cluster.json ADDED
Working on configmap monitoring/prom-grafana-config-dashboards
Working on configmap monitoring/prom-nodes
Configmap with label found
File in configmap nodes.json ADDED
Working on configmap monitoring/prom-pods
Configmap with label found
File in configmap pods.json ADDED
Working on configmap monitoring/prom-k8s-coredns
Configmap with label found
File in configmap k8s-coredns.json ADDED
Working on configmap monitoring/prom-etcd
Configmap with label found
File in configmap etcd.json ADDED
Working on configmap monitoring/prom-k8s-node-rsrc-use
Configmap with label found
File in configmap k8s-node-rsrc-use.json ADDED
Working on configmap monitoring/prom-k8s-resources-cluster
Configmap with label found
File in configmap k8s-resources-cluster.json ADDED
Working on configmap monitoring/prom-k8s-resources-namespace
Configmap with label found
File in configmap k8s-resources-namespace.json ADDED
Working on configmap monitoring/prom-k8s-resources-pod
Configmap with label found
File in configmap k8s-resources-pod.json ADDED
Working on configmap monitoring/prom-k8s-resources-workload
Configmap with label found
File in configmap k8s-resources-workload.json ADDED
Working on configmap monitoring/prom-k8s-resources-workloads-namespace
Configmap with label found
File in configmap k8s-resources-workloads-namespace.json ADDED
Working on configmap monitoring/prom-nodes
Configmap with label found
File in configmap nodes.json ADDED
Working on configmap monitoring/prom-persistentvolumesusage
Configmap with label found
File in configmap persistentvolumesusage.json ADDED
Working on configmap monitoring/prom-pods
Configmap with label found
File in configmap pods.json ADDED
Working on configmap monitoring/prom-statefulset
Configmap with label found
File in configmap statefulset.json ADDED
Working on configmap monitoring/prometheus-prom-prometheus-rulefiles-0
Working on configmap monitoring/prometheus-prom-prometheus-operator-prometheus-rulefiles-0
Working on configmap monitoring/prometheus-prom-prometheus-operator-prometheus-rulefiles-0
Working on configmap monitoring/prometheus-prom-prometheus-rulefiles-0
Working on configmap monitoring/prometheus-prom-prometheus-rulefiles-0
Working on configmap monitoring/prom-prometheus-operator-grafana-datasource
Working on configmap monitoring/prom-prometheus-operator-etcd
Configmap with label found
File in configmap etcd.json DELETED
Working on configmap monitoring/prom-prometheus-operator-k8s-cluster-rsrc-use
Configmap with label found
File in configmap k8s-cluster-rsrc-use.json DELETED
Working on configmap monitoring/prometheus-prom-prometheus-operator-prometheus-rulefiles-0
Working on configmap monitoring/prom-prometheus-operator-k8s-coredns
Configmap with label found
File in configmap k8s-coredns.json DELETED
Working on configmap monitoring/prometheus-prom-prometheus-operator-prometheus-rulefiles-0
Working on configmap monitoring/prom-prometheus-operator-k8s-node-rsrc-use
Configmap with label found
File in configmap k8s-node-rsrc-use.json DELETED
Working on configmap monitoring/prom-prometheus-operator-k8s-resources-cluster
Configmap with label found
File in configmap k8s-resources-cluster.json DELETED
Working on configmap monitoring/prom-prometheus-operator-k8s-resources-namespace
Configmap with label found
File in configmap k8s-resources-namespace.json DELETED
Working on configmap monitoring/prom-prometheus-operator-k8s-resources-pod
Configmap with label found
File in configmap k8s-resources-pod.json DELETED
Working on configmap monitoring/prom-prometheus-operator-k8s-resources-workload
Configmap with label found
File in configmap k8s-resources-workload.json DELETED
Working on configmap monitoring/prom-prometheus-operator-k8s-resources-workloads-namespace
Configmap with label found
File in configmap k8s-resources-workloads-namespace.json DELETED
Working on configmap monitoring/prom-prometheus-operator-nodes
Configmap with label found
File in configmap nodes.json DELETED
Working on configmap monitoring/prom-prometheus-operator-persistentvolumesusage
Configmap with label found
File in configmap persistentvolumesusage.json DELETED
Working on configmap monitoring/prom-prometheus-operator-pods
Configmap with label found
File in configmap pods.json DELETED
Working on configmap monitoring/prom-prometheus-operator-statefulset
Configmap with label found
File in configmap statefulset.json DELETED
Working on configmap monitoring/prometheus-prom-prometheus-rulefiles-0
Working on configmap monitoring/prometheus-prom-prometheus-operator-prometheus-rulefiles-0
Working on configmap monitoring/prometheus-prom-prometheus-rulefiles-0

Note that I can't even find some of the ConfigMaps mentioned; I'm not sure whether that's related.

The reason I'm opening this issue is that, in the end, I do have ConfigMaps with that label, yet their files are not present in the directory.

ApiException when calling kubernetes: (500)

After upgrading the Prometheus-operator helm chart to 5.10.4, I got a lot of KubeAPIErrorsHigh alerts with resource="configmaps", verb="WATCH".


After some research, I finally found that it was caused by the k8s-sidecar.

It appears randomly:

Working on configmap monitoring/prometheus-operator-persistentvolumesusage
Configmap with label found
File in configmap persistentvolumesusage.json ADDED
ApiException when calling kubernetes: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Wed, 29 May 2019 06:33:40 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'


Working on configmap monitoring/prometheus-operator-k8s-cluster-rsrc-use
Configmap with label found
File in configmap k8s-cluster-rsrc-use.json ADDED

Meanwhile, the API server shows these error logs:

E0529 06:44:48.685737       1 status.go:64] apiserver received an error that is not an metav1.Status: storage.InvalidError{Errs:field.ErrorList{(*field.Error)(0xc42fad7f00)}}

Kubernetes Version: 1.11.5
Sidecar Version: 0.16
Helm Version: 2.11.0

Prometheus-operator chart version: 5.10.04

Is this a bug?

Is there anything I can do to help fix it?
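For context, the 500 above comes from the watch being restarted with a stringified None as resourceVersion. A minimal guard (function name hypothetical, not the sidecar's actual code) would only forward a value the apiserver can parse as an unsigned integer:

```python
def usable_resource_version(rv):
    """Return rv only if the apiserver can parse it; otherwise None.

    Passing the literal string "None" triggers exactly the
    strconv.ParseUint error shown in the response body above.
    """
    try:
        int(rv)
        return rv
    except (TypeError, ValueError):
        return None  # omit the parameter so the watch starts from a fresh LIST
```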

SSL: CERTIFICATE_VERIFY_FAILED when connects to kubernetes apiserver

The sidecar should provide an option to supply the Kubernetes apiserver CA certificate (as a config map, a secret, or via other means), and/or an option to turn off verification of the apiserver's SSL certificate.

This is the same as helm/charts#11287

The Kubernetes apiserver CA certificate is usually not in the trusted certificate bundle, so the sidecar (image tag 0.0.6) usually fails with the error documented in the referenced issue.
The option to supply the apiserver CA certificate, or to turn off SSL verification, needs to be added to the sidecar before the downstream helm charts can offer related options.
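As a sketch of what such an option could look like, the client could build its TLS settings from two environment variables; KUBE_CA_CERT_PATH and KUBE_SKIP_TLS_VERIFY are purely illustrative names, not existing sidecar options:

```python
import os
import ssl

def build_ssl_context():
    # Start from the platform trust store, then optionally add the
    # apiserver CA or disable verification entirely (illustrative env vars).
    ctx = ssl.create_default_context()
    ca_path = os.environ.get("KUBE_CA_CERT_PATH")
    if ca_path:
        ctx.load_verify_locations(cafile=ca_path)
    if os.environ.get("KUBE_SKIP_TLS_VERIFY", "").lower() == "true":
        ctx.check_hostname = False   # must be cleared before CERT_NONE
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```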

Recurring Trigger calls after ApiException Gone: too old resource version

I'm using version 0.1.259 as part of the jenkins helm chart.

The sidecar keeps calling the POST URL at intervals of ~30 minutes. The logs show the following:

[2020-11-03 20:42:32] POST request sent to http://localhost:8080/reload-configuration-as-code/?casc-reload-token=k8s-jenkins-5b4f5d5856-5fftf. Response: 200 OK
[2020-11-03 21:13:39] ApiException when calling kubernetes: (410)
Reason: Gone: too old resource version: 156250745 (156278832)


[2020-11-03 21:13:44] Working on configmap ns/k8s-jenkins-casc-jobs
[2020-11-03 21:13:44] File in configmap jobs.yaml ADDED
[2020-11-03 21:13:44] Contents of jobs.yaml haven't changed. Not overwriting existing file

...

[2020-11-03 21:13:46] File in configmap tools.yaml ADDED
[2020-11-03 21:13:46] Contents of tools.yaml haven't changed. Not overwriting existing file
[2020-11-03 21:13:48] POST request sent to http://localhost:8080/reload-configuration-as-code/?casc-reload-token=k8s-jenkins-5b4f5d5856-5fftf. Response: 200 OK
[2020-11-03 22:00:48] ApiException when calling kubernetes: (410)
Reason: Gone: too old resource version: 156250745 (156284123)

This in turn unnecessarily reloads my configuration and runs my seed jobs.

It would be best if those k8s watch issues did not occur at all.
Alternatively (or additionally), it would be nice if the POST request were only sent when at least one file changed during the run. The code already detects that nothing has changed ("Contents of jobs.yaml haven't changed. Not overwriting existing file"), so it might be possible to aggregate this information and skip the call.

Received unknown exception: 'utf-8' codec can't decode byte 0xfe in position 0: invalid start byte

The problem appears when I store truly binary data in a Secret (for example certs or xlsx files). I know that I can use a ConfigMap with the binaryData field, but I want the original binary in my container, not the base64-encoded form (is that normal for the binaryData field?).

I made one change to the base64 decoding, and now configmap -> binaryData works as I expected. But I don't know whether changing this behaviour is a good idea:

def _get_file_data_and_name(full_filename, content, resource, content_type="ascii"):
    if resource == "secret":
        file_data = base64.b64decode(content).decode()
    elif content_type == "binary":
        # file_data = base64.decodebytes(content.encode('ascii'))
        file_data = base64.b64decode(content) # <----- now the file in destination directory contain original binary data instead of base64 encoded  
    else:
        file_data = content
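The difference is easy to see in isolation: b64decode returns raw bytes, and calling .decode() on them is exactly what raises the 'utf-8' codec error from the issue title for non-text payloads such as a leading 0xfe byte:

```python
import base64

# A payload that is valid binary but invalid UTF-8 (starts with 0xfe).
stored = base64.b64encode(b"\xfe\xff binary blob").decode("ascii")

raw = base64.b64decode(stored)      # bytes -> write with open(path, "wb")
assert raw.startswith(b"\xfe")

try:
    raw.decode("utf-8")             # this is the failing step
except UnicodeDecodeError as e:
    print(e)                        # 'utf-8' codec can't decode byte 0xfe ...
```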

Feature request: multiple folders

Currently the sidecar only copies to a single directory. It would be useful if it read a directory/folder label from each configmap to determine which directory on the destination pod's filesystem to create the file in. The workaround of one sidecar per folder is a bit hacky.

Use-case: Grafana supports its own folder system for storing dashboards in its UI, dictated by the directory on the filesystem where the dashboard file is placed. Using this sidecar currently, all dashboards must be placed in the same folder.
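The FOLDER_ANNOTATION mechanism described in the README is one way to express this. A sketch of the per-configmap lookup (the dict-shaped metadata is a simplification of the real client objects, and the default folder is a stand-in for the FOLDER env var):

```python
DEFAULT_FOLDER = "/tmp/dashboards"  # stand-in for the FOLDER env var

def target_folder(metadata, annotation="k8s-sidecar-target-directory"):
    """Resolve the destination directory for one configmap's files."""
    annotations = metadata.get("annotations") or {}
    return annotations.get(annotation, DEFAULT_FOLDER)
```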

Couldn't start Jenkins because of Docker rate limit

Hello,
as many of us know, Docker is enforcing rate limits on the number of pulls we can do (https://docs.docker.com/docker-hub/download-rate-limit/#kubernetes). This left us unable to start Jenkins in Kubernetes, because Jenkins depends on this image. Are there any plans to host this image on a registry other than Docker Hub? Or are there plans to apply for open-source status, as mentioned in https://www.docker.com/blog/expanded-support-for-open-source-software-projects/?

Received unknown exception: 'V1Secret' object has no attribute 'binary_data'

The problem appears when I use Secrets with the default METHOD (WATCH). I'm not a Python programmer, but I think that this fragment of code may be the source of the problem:

resources.py -> _watch_resource_iterator

        data_map = {}
        if event["object"].data is not None:
            data_map.update(event["object"].data)

        if event["object"].binary_data is not None:  #this line
            data_map.update(event["object"].binary_data)

Support for sending the contents of a data key as POST payload to a URL

Unless I'm overlooking something (I do see that it's possible to load data from a URL and store it in a file), there is no way to send the content of a data key in a configmap as the payload of a POST request to a configured URL.

I want to use this so I can automatically (declaratively) configure index templates in Elasticsearch for Kibana, by maintaining a ConfigMap containing the index template.

I might supply a PR for this myself.
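A sketch of the requested feature using only the standard library (the function name and defaults are hypothetical; the caller would pass the returned request to urllib.request.urlopen):

```python
import urllib.request

def build_post(url, data_map, key, content_type="application/json"):
    """Wrap one configmap data value as the body of a POST to url."""
    return urllib.request.Request(
        url,
        data=data_map[key].encode("utf-8"),
        headers={"Content-Type": content_type},
        method="POST",
    )
```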

folderAnnotation insufficient privileges

Hi guys,

I'm using the Grafana helm chart, which comes with your image as a sidecar, and I'm having some difficulties with the folderAnnotation:

Error: insufficient privileges to create SRE. Skipping prometheus-overview.json.

here's the annotation on the configmap:

  annotations:
    grafana-dashboard-folder: SRE

and the env var:

env:
- name: METHOD
- name: FOLDER_ANNOTATION
  value: grafana-dashboard-folder
- name: LABEL
  value: grafana_dashboard
- name: FOLDER
  value: /tmp/dashboards
- name: RESOURCE
  value: both
image: kiwigrid/k8s-sidecar:1.1.0

I also tried to exec into the container and create the folder manually, and it works, so I'm not sure why the app is complaining about permissions:

/app $ id
uid=472 gid=472 groups=472
/app $ mkdir /tmp/dashboards/SRE ; echo $?
0
/app $

folder annotation invalid character set

I ran into an issue trying to use the folder annotation. It raises an error saying that the label value contains invalid characters.

apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-configmap
  labels:
    k8s-sidecar-target-directory: /var/lib/grafana/dashboards/test
data:
  hello.world:  Hello World!
The ConfigMap "sample-configmap" is invalid: metadata.labels: Invalid value: "/var/lib/grafana/dashboards/test": a valid label must be an empty string or consist of alphanumeric characters,
'-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyValue',  or 'my_value',  or '12345', regex used for validation is '(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?')

After digging a bit, I found that this is true if we refer to the k8s doc:
https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
And it's what the k8s validation code expects:
https://github.com/kubernetes/apimachinery/blob/4147c925140e4e3aa51d6b46b41650821712d457/pkg/util/validation/validation.go#L88

Is it possible to find a new way of setting the folder that makes k8s validation happy?
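Since annotation values are not restricted to label syntax, the FOLDER_ANNOTATION mechanism from the README avoids this validation error. The same manifest with the path moved under annotations (a sketch, assuming the default annotation name):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-configmap
  annotations:
    k8s-sidecar-target-directory: /var/lib/grafana/dashboards/test
data:
  hello.world: Hello World!
```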

Sidecar crashes on startup

We have the sidecar running as part of the stable/grafana chart. In some cases the sidecar will start crashlooping on startup with:

Starting config map collector
Config for cluster api loaded...
2018-10-31 19:17:37,952 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3e7044d68>: Failed to establish a new connection: [Errno 113] No route to host',)': /api/v1/configmaps?watch=True
2018-10-31 19:17:41,020 WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3e7044e80>: Failed to establish a new connection: [Errno 113] No route to host',)': /api/v1/configmaps?watch=True
2018-10-31 19:17:44,092 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3e7044f60>: Failed to establish a new connection: [Errno 113] No route to host',)': /api/v1/configmaps?watch=True
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/urllib3/connection.py", line 141, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/usr/local/lib/python3.6/site-packages/urllib3/util/connection.py", line 83, in create_connection
    raise err
  File "/usr/local/lib/python3.6/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
OSError: [Errno 113] No route to host
​
During handling of the above exception, another exception occurred:
​
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 601, in urlopen
    chunked=chunked)
  File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 346, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 850, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.6/site-packages/urllib3/connection.py", line 284, in connect
    conn = self._new_conn()
  File "/usr/local/lib/python3.6/site-packages/urllib3/connection.py", line 150, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3e705d0f0>: Failed to establish a new connection: [Errno 113] No route to host
​
During handling of the above exception, another exception occurred:
​
Traceback (most recent call last):
  File "/app/sidecar.py", line 58, in <module>
    main()
  File "/app/sidecar.py", line 54, in main
    watchForChanges(label, targetFolder)
  File "/app/sidecar.py", line 23, in watchForChanges
    for event in w.stream(v1.list_config_map_for_all_namespaces):
  File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 122, in stream
    resp = func(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11024, in list_config_map_for_all_namespaces
    (data) = self.list_config_map_for_all_namespaces_with_http_info(**kwargs)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 11121, in list_config_map_for_all_namespaces_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 321, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 342, in request
    headers=headers)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in GET
    query_params=query_params)
  File "/usr/local/lib/python3.6/site-packages/kubernetes/client/rest.py", line 205, in request
    headers=headers)
  File "/usr/local/lib/python3.6/site-packages/urllib3/request.py", line 66, in request
    **urlopen_kw)
  File "/usr/local/lib/python3.6/site-packages/urllib3/request.py", line 87, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File "/usr/local/lib/python3.6/site-packages/urllib3/poolmanager.py", line 321, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 668, in urlopen
    **response_kw)
  File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 668, in urlopen
    **response_kw)
  File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 668, in urlopen
    **response_kw)
  File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 639, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/local/lib/python3.6/site-packages/urllib3/util/retry.py", line 388, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='10.50.128.1', port=443): Max retries exceeded with url: /api/v1/configmaps?watch=True (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3e705d0f0>: Failed to establish a new connection: [Errno 113] No route to host',))

There's no way to provide environment variables to modify connection timeouts or do any more changes than what is hard-coded into the script.

BUG - "local variable 'res' referenced before assignment" when using REQ_URL

See these logs:

[2020-09-25 22:27:02] Starting collector
[2020-09-25 22:27:02] No folder annotation was provided, defaulting to k8s-sidecar-target-directory
[2020-09-25 22:27:02] Selected resource type: ('configmap',)
[2020-09-25 22:27:02] Config for cluster api loaded...
[2020-09-25 22:27:02] Unique filenames will not be enforced.
[2020-09-25 22:27:07] Working on configmap monitoring/beast-dashboard-live-nodes
[2020-09-25 22:27:07] File in configmap beast-live-nodes.yaml ADDED
[2020-09-25 22:27:07] Received unknown exception: local variable 'res' referenced before assignment

Config:

    image: kiwigrid/k8s-sidecar:0.1.209
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: beast-dashboard-volume
      mountPath: /grafana-dashboard-definitions/1/
    env:
    - name: NAMESPACE
      value: "ALL"
    - name: LABEL
      value: "beast_dashboard"
    - name: FOLDER
      value: "/grafana-dashboard-definitions/1/"
    - name: REQ_URL
      value: "http://localhost:3000/api/admin/provisioning/dashboards/reload"
    - name: REQ_USERNAME
      value: "admin"
    - name: REQ_METHOD
      value: "admin"
    - name: REQ_PASSWORD
      value: "admin"

Confused by the 'Why', k8s mounted ConfigMaps update live

Hi, this sidecar looks like a useful tool regardless, but I was confused by the 'Why' in the README:

"Why?
Currently (April 2018) there is no simple way to hand files in configmaps to a service and keep them updated during runtime."

Files in ConfigMaps (and Secrets) are updated live at runtime wherever they are mounted. That has been their intrinsic property, I think, forever, but certainly since k8s 1.4+ in 2016. The only time they don't update live at runtime is if you mount them using the new subPath option or map them into environment variables.

https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/

Mounted ConfigMaps are updated automatically
When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of ConfigMaps cache in kubelet.

Note: A container using a ConfigMap as a subPath volume will not receive ConfigMap updates.

I use this property all the time to live-update configuration in running services. I also use sidecars, but just to watch for the live updates and ping the service to re-read them.

This sidecar is probably more flexible than that built-in live-update support, and handles the 'ping' bit too.

Publish docker image for version v1.5.0

Could you please publish a docker image for version v1.5.0?

This is related to the PR #102, which eliminates several security vulnerabilities in the latest (1.3.0) published docker image.

Signal handling

Hello!
It would be great if this sidecar would handle signals in some way to gracefully kill all processes.

If it seems like a useful feature and you want contributions, maybe I can fix it.

Append name of namespace and configmap to the filename

AFAIK, at the moment k8s-sidecar writes configs with the same name (cm.data.key) to the same file, regardless of whether they come from different configmaps or different namespaces.

Would it be possible to append both the namespace and the configmap name to avoid this behaviour?

The alternative seems uglier: it would force us to append the namespace and configmap name to the cm.data keys themselves.
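A sketch of the proposed naming scheme (the separator and field order are illustrative, not an existing sidecar convention):

```python
def unique_filename(namespace, configmap_name, data_key):
    """Disambiguate identical data keys across configmaps and namespaces."""
    return f"{namespace}_{configmap_name}_{data_key}"
```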

Sidecar crashes with IncompleteRead

Sidecar version: Issue has been observed on 0.0.3 and 0.0.5
Kubernetes version: 1.10.6 (AKS)

I am running Grafana (using helm stable release) with sidecar for datasources/dashboards (relevant docs)

After some time the sidecar starts to crash (in a loop) with the following errors in the logs:

Starting config map collector
Config for cluster api loaded...
Working on configmap kube-system/xxx-grafana.v2
Working on configmap kube-system/xxx-grafana.v4
Working on configmap kube-system/xxx-grafana.v5
Working on configmap kube-system/xxx-grafana.v6
Working on configmap kube-system/addon-http-application-routing-tcp-services
Working on configmap kube-system/addon-http-application-routing-udp-services
Working on configmap kube-system/dashboards.v2
Working on configmap kube-system/xxx-grafana.v1
Working on configmap default/xxx-grafana-config-dashboards
Working on configmap kube-system/addon-http-application-routing-nginx-configuration
Working on configmap kube-system/omsagent-rs-config
Working on configmap kube-system/azureproxy-nginx
Working on configmap default/xxx-prometheus-server
Working on configmap kube-system/api.v1
Working on configmap kube-system/dashboards.v7
Working on configmap kube-system/dashboards.v1
Working on configmap kube-system/dashboards.v5
Working on configmap kube-system/image-storage.v1
Working on configmap kube-system/kubedns-kubecfg
Working on configmap kube-system/xxx-grafana.v3
Working on configmap default/api-grafana-dashboard
Configmap with label found
File in configmap api-dashboard.json ADDED
Working on configmap default/xxx-grafana
Working on configmap kube-system/azureproxy-config
Working on configmap kube-system/dashboards.v6
Working on configmap kube-system/xxx-prometheus.v1
Working on configmap kube-system/render.v1
Working on configmap default/prometheus-grafana-datasource
Working on configmap default/render-grafana-dashboard
Configmap with label found
File in configmap render-dashboard.json ADDED
Working on configmap default/xxx-prometheus-alertmanager
Working on configmap kube-system/dashboards.v3
Working on configmap kube-system/dashboards.v4
Working on configmap kube-system/heapster-config
Working on configmap default/image-storage-grafana-dashboard
Configmap with label found
File in configmap image-storage-dashboard.json ADDED
Working on configmap default/xxx-lb-traefik
Working on configmap kube-system/xxx-lb.v1
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 572, in _update_chunk_length
    self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 331, in _error_catcher
    yield
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 637, in read_chunked
    self._update_chunk_length()
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 576, in _update_chunk_length
    raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/sidecar.py", line 96, in <module>
    main()
  File "/app/sidecar.py", line 92, in main
    watchForChanges(label, targetFolder, url, method, payload)
  File "/app/sidecar.py", line 51, in watchForChanges
    for event in stream:
  File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 124, in stream
    for line in iter_resp_lines(resp):
  File "/usr/local/lib/python3.6/site-packages/kubernetes/watch/watch.py", line 45, in iter_resp_lines
    for seg in resp.read_chunked(decode_content=False):
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 665, in read_chunked
    self._original_response.close()
  File "/usr/local/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/local/lib/python3.6/site-packages/urllib3/response.py", line 349, in _error_catcher
    raise ProtocolError('Connection broken: %r' % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))

The exception occurs at different points in time i.e. not always after processing the same ConfigMap.

Please let me know if you require any other information.

AttributeError: 'V1Secret' object has no attribute 'binary_data'

We are using this sidecar through the Grafana chart. Today, Grafana didn't start anymore. The sidecar fails when it starts to process the first secret (not configmap):

 grafana-sc-notifiers [2021-02-19 11:53:22] Contents of cm-notifier.yaml haven't changed. Not overwriting existing file
 grafana-sc-notifiers Traceback (most recent call last):
 grafana-sc-notifiers   File "/app/sidecar.py", line 73, in <module>
 grafana-sc-notifiers     main()
 grafana-sc-notifiers   File "/app/sidecar.py", line 65, in main
 grafana-sc-notifiers     listResources(label, labelValue, targetFolder, url, method, payload,
 grafana-sc-notifiers   File "/app/resources.py", line 110, in listResources
 grafana-sc-notifiers     if sec.binary_data is not None:
 grafana-sc-notifiers AttributeError: 'V1Secret' object has no attribute 'binary_data'

Secrets do not have a binary_data attribute: https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/V1Secret.md. The code probably shouldn't check this for secrets.
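A defensive sketch of the failing access: probing with getattr keeps the same data_map merge working against objects that lack a binary_data attribute (the helper name is hypothetical, not the sidecar's actual code):

```python
def merge_data(obj):
    """Collect data and (if present) binary_data from a ConfigMap/Secret object."""
    data_map = {}
    data_map.update(getattr(obj, "data", None) or {})
    data_map.update(getattr(obj, "binary_data", None) or {})
    return data_map
```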

Retries way too quickly

When you launch this sidecar next to a Grafana container, you get a couple of pages of logs saying that the connection to the k8s API is failing, because istio-proxy has not been initialised yet.

It would be great if this repo implemented exponential backoff with decorrelated jitter to avoid this problem, and waited, say, 20 seconds before crashing. That way, the Grafana pod doesn't have to go through 2 restarts before stabilising.
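A sketch of decorrelated-jitter backoff (parameters illustrative): each retry sleeps a random duration between the base and three times the previous delay, capped at a maximum. The caller would time.sleep() between attempts and only crash after the final one:

```python
import random

def backoff_delays(base=1.0, cap=20.0, attempts=6):
    """Yield the sleep durations for successive connection retries."""
    delay = base
    for _ in range(attempts):
        yield delay
        # Decorrelated jitter: next delay depends on the previous one.
        delay = min(cap, random.uniform(base, delay * 3))
```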

Watching stops working after 10 minutes

A watch on a set of configmaps stops being alerted of new changes after 10 minutes without changes.

Repro steps

  1. Install the following configmap into the cluster:
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-test-dashboard
  labels:
    grafana_dashboard: "1"
data:
  cm-test.json: {}
  2. Install the sidecar into the cluster:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - env:
        - name: METHOD
        - name: LABEL
          value: grafana_dashboard
        - name: FOLDER
          value: /tmp/dashboards
        - name: RESOURCE
          value: both
      image: kiwigrid/k8s-sidecar:0.1.178
      imagePullPolicy: IfNotPresent
      name: grafana-sc-dashboard
  3. Wait 10 minutes
  4. Make a change to the configmap and update it in the cluster

Expected Behaviour

The modification is detected and the file is updated.

Actual Behaviour

Nothing


Reproduced on AKS with Kubernetes version 1.16.10.
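The usual mitigation is to bound every watch with a server-side timeout and re-establish it in a loop, so a silently dropped connection is healed within that window. In the sketch below, `stream_events` is a stand-in for something like kubernetes.watch.Watch().stream(..., timeout_seconds=...), and max_restarts exists only to make the sketch testable:

```python
WATCH_TIMEOUT = 60  # seconds; illustrative value

def watch_loop(stream_events, handle, max_restarts=None):
    """Consume watch events, re-opening the stream whenever it expires."""
    restarts = 0
    while max_restarts is None or restarts < max_restarts:
        for event in stream_events(timeout_seconds=WATCH_TIMEOUT):
            handle(event)
        restarts += 1  # stream ended (timeout/EOF): open a new watch
```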

Sidecar v0.0.18 crashes with API Server Error

The sidecar v0.0.18 periodically crashes with the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/app/resources.py", line 120, in _watch_resource_loop
    _watch_resource_iterator(*args)
  File "/app/resources.py", line 82, in _watch_resource_iterator
    for event in stream:
  File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 128, in stream
    resp = func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 11854, in list_namespaced_config_map
    (data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 11957, in list_namespaced_config_map_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 321, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 342, in request
    headers=headers)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 231, in GET
    query_params=query_params)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 222, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Mon, 15 Jul 2019 18:17:16 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'

Process for configmap died. Stopping and exiting
Traceback (most recent call last):
  File "/app/sidecar.py", line 54, in <module>
    main()
  File "/app/sidecar.py", line 50, in main
    payload, namespace, folderAnnotation, resources)
  File "/app/resources.py", line 155, in watchForChanges
    raise Exception("Loop died")
Exception: Loop died

Working with other side-cars

It would be great to provide workaround-hooks in this container.

Problem:

  • Given that Grafana only reads datasources on start
  • Given that GKE doesn't support anything but ReadWriteOnce persistency (else I'd spawn a Job with a shared PVC)
  • Given that I use Istio in the Grafana namespace
  • Given that Istio's istio-proxy sidecar container is needed to access to the k8s API
  • Given that I was using k8s-sidecar to list datasources from the k8s API as an init container
  • And given that k8s KEP #753 is not yet done
  • I have a crashing init sidecar
  • Or else, if I move it to "containers" and make it "WATCH" instead of "LIST", I could have it run continuously,
  • However, see point 1, so we have a race condition

A workaround:

  • Suppose this container exposed an HTTP interface that only returned 200 OK once "one watch iteration" was done
  • I could then do
command: ["/bin/bash", "-c"]
args: ["until curl --head localhost:15000 ; do echo Waiting for Sidecar; sleep 3 ; done ; echo Sidecar available; ./init-stuff.sh && ./startup.sh"]

Search only in current or specific namespace

Hi,
would it be possible to make the sidecar configurable to search only in a specific namespace, or in its current one?
In our setup we use namespaces to separate teams, so we don't have access to search across all namespaces, which is not necessary in our use case anyway. Simply setting the namespace where the sidecar looks for config maps would already fit our needs.

What's your opinion on such a change?
Thanks

Configmaps containing files deployed to many namespaces cause premature delete

We are installing a configmap (containing a label that the sidecar picks up) to two different namespaces (namespaceA and namespaceB). When the configmap is installed to namespaceA, the file in the configmap is added correctly to the specified directory. When the configmap is installed to namespaceB, the sidecar detects that the dashboard contained in the configmap has already been discovered and does not add it a second time. All is good up to this point. But when namespaceB is wiped of all of its resources, the sidecar detects that the configmap is deleted and removes the file, so it is no longer available to namespaceA, despite that namespace persisting and still containing the configmap.

The sidecar doesn't keep track of the fact that multiple deployments contain the configmap. Hence applications that use the file contained in the configmap installed in namespaceA are failing.

We discovered this issue because we install a Helm chart containing the configmap to one namespace that persists and to another that only exists while tests are being run against it.

What you expected to happen:

The sidecar would detect and keep track of which releases contain configmaps which use the file, and only remove the file when all releases using that file are deleted

How to reproduce it (as minimally and precisely as possible):

  1. Install a configmap describing a file, with a label detectable by the sidecar, twice (two separate releases)
  2. Delete one of the releases
  3. Check the specified directory where files from configmap resources are written

Potential Fix

Store a map of {metadata.namespace}/{metadata.name} to file for each resource with the specified label, and only run the delete once ALL entries mapped to that file are gone. The deletion path currently runs unconditionally:

else:
    filename = data_key[:-4] if data_key.endswith(".url") else data_key
    removeFile(destFolder, filename)
if url is not None:
    request(url, method, payload)
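The potential fix could be sketched as per-file reference counting (function and variable names here are hypothetical, not the sidecar's actual API):

```python
from collections import defaultdict

# Map each written file to the set of "{namespace}/{name}" resources
# that currently reference it.
file_owners = defaultdict(set)

def track_resource(namespace, name, filename):
    file_owners[filename].add(f"{namespace}/{name}")

def untrack_resource(namespace, name, filename, remove_file):
    owners = file_owners[filename]
    owners.discard(f"{namespace}/{name}")
    if not owners:
        # Only the last owner's deletion actually removes the file.
        del file_owners[filename]
        remove_file(filename)
```

With this bookkeeping, wiping namespaceB only discards one owner entry, and the file survives as long as namespaceA's configmap still references it.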

Prometheus timeouts

In our setup Prometheus takes a few minutes to load up and be able to respond to requests. This apparently produces a ReadTimeoutError, which is not handled by the current retry configuration.

So two things are proposed here:

  1. Add read parameter to enable retries on read errors (related docs).
  2. Increase max number of retries to 10 or more so it can retry for a few minutes.

I didn't send a PR since I'd prefer discussing the numbers first. I can send one as soon as there's agreement.

EDIT: I guess Prometheus is unrelated to this project, so I shouldn't really mention it anyway. Also, maybe these numbers could be exposed as configurable values?
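As a generic sketch of the two points above (the wrapper name, defaults, and exception list are assumptions; the sidecar would configure this on its Kubernetes client's retry settings rather than via a standalone helper):

```python
import time

def with_read_retries(fn, max_attempts=10, base_delay=1.0,
                      retry_on=(TimeoutError, ConnectionError)):
    """Call fn(), retrying read/connect failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # exhausted the retry budget
            time.sleep(base_delay * 2 ** attempt)
```

With `base_delay=1.0` and 10 attempts, the total wait is roughly eight minutes, in line with "retry for a few minutes".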

Crash: argument of type 'ApiException' is not iterable

After #15 I often get:

Traceback (most recent call last):
  File "/app/sidecar.py", line 168, in main
    watchForChanges(label, targetFolder, url, method, payload, namespace, folderAnnotation)
  File "/app/sidecar.py", line 99, in watchForChanges
    for event in stream:
  File "/usr/local/lib/python3.7/site-packages/kubernetes/watch/watch.py", line 128, in stream
    resp = func(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 11854, in list_namespaced_config_map
    (data) = self.list_namespaced_config_map_with_http_info(namespace, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 11957, in list_namespaced_config_map_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 321, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 342, in request
    headers=headers)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 231, in GET
    query_params=query_params)
  File "/usr/local/lib/python3.7/site-packages/kubernetes/client/rest.py", line 222, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Date': 'Sun, 10 Mar 2019 19:29:47 GMT', 'Content-Length': '186'})
HTTP response body: b'{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"resourceVersion: Invalid value: \\"None\\": strconv.ParseUint: parsing \\"None\\": invalid syntax","code":500}\n'


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/sidecar.py", line 181, in <module>
    main()
  File "/app/sidecar.py", line 170, in main
    if "500" not in e:
TypeError: argument of type 'ApiException' is not iterable
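The TypeError comes from running a substring test against the exception object itself: `in` only works on containers and iterables. A sketch of the fix, using a minimal stand-in for kubernetes.client.rest.ApiException (the stand-in class and handler name are assumptions for illustration):

```python
class ApiException(Exception):
    """Minimal stand-in for kubernetes.client.rest.ApiException."""
    def __init__(self, status):
        super().__init__(f"({status})")
        self.status = status

def should_reraise(e):
    # Check the structured status attribute instead of `"500" not in e`,
    # which raises TypeError because an exception is not a container.
    return e.status != 500
```

Alternatively, `"500" not in str(e)` keeps the original string test while fixing the crash.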

Strict kubernetes client version dependency

Hi,

Currently the dependency requirement for the kubernetes Python library is declared with a strict version pin: kubernetes==8.0.1

Would it be possible to relax this to allow newer versions of the library? I believe the kubernetes client's compatibility matrix generally allows using a newer client with an older server, so there should be no harm in relaxing the upper bound, right?
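A relaxed specifier in requirements.txt might look like this (the upper bound is an illustrative assumption, not a tested range):

```
kubernetes>=8.0.1,<12.0.0
```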
