
secrets-provider-for-k8s's Introduction


CyberArk Secrets Provider for Kubernetes

The CyberArk Secrets Provider for Kubernetes provides Kubernetes-based applications with access to secrets that are stored and managed in Conjur.

Consuming Secrets from CyberArk Secrets Provider

Using the CyberArk Secrets Provider, your applications can easily consume secrets that have been retrieved from Conjur in one of two ways:

  • Using Kubernetes Secrets: The Secrets Provider can populate Kubernetes Secrets with secrets stored in Conjur. This is sometimes referred to as "K8s Secrets" mode.
  • Using Secrets files: The Secrets Provider can generate initialization or credentials files for your application based on secrets retrieved from Conjur, and it can write those files to a volume that is shared with your application container. This is referred to as the Secrets Provider "Push to File" mode. For more information, see the Secrets Provider Push-to-File guide.
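For example, in "K8s Secrets" mode the Secrets Provider reads a conjur-map entry from an existing Kubernetes Secret to learn which Conjur variables map to which secret keys. A minimal sketch, following the conjur-map convention also shown in the issue report further down this page (the Secret name and variable path here are illustrative):

---
kind: Secret
apiVersion: v1
metadata:
  name: db-credentials        # illustrative Secret name, referenced via K8S_SECRETS
type: Opaque
stringData:
  # each conjur-map key is a data key in this Secret; each value is a Conjur variable path
  conjur-map: |-
    password: my-app/db/password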

Deployment Modes

The Secrets Provider can be deployed into your Kubernetes cluster in one of two modes:

  • As an init container: The Secrets Provider can be deployed as a Kubernetes init container for each of your application Pods that requires secrets to be retrieved from Conjur. This configuration allows you to employ Conjur policy that authorizes access to Conjur secrets on a per-application-Pod basis.

  • As a standalone application container (Kubernetes Job): The Secrets Provider can be deployed as a separate, application container that runs to completion as part of a Kubernetes Job. In this mode, the Secrets Provider can support delivery of Conjur secrets to multiple application Pods. In this mode, you would use Conjur policy that authorizes access to Conjur secrets on a per-Secrets-Provider basis.

    The Secrets Provider Helm chart can be used to deploy the Secrets Provider in standalone application mode.

  • As a sidecar to enable secrets rotation.

NOTE: If you are using the Secrets Provider "Push to File" mode, the Secrets Provider must be deployed as an init or sidecar container, since these modes make use of shared volumes to deliver secrets to an application.
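Below is a minimal sketch of the init container pattern with a shared volume, as used by "Push to File" mode (the volume name, mount path, and application image are illustrative; the actual secrets-to-file mapping is supplied through conjur.org/* Pod annotations described in the Push-to-File guide):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
    - name: cyberark-secrets-provider-for-k8s
      image: cyberark/secrets-provider-for-k8s
      volumeMounts:
        - name: conjur-secrets        # shared volume the provider writes secrets files into
          mountPath: /conjur/secrets
  containers:
    - name: my-app
      image: my-app:latest            # illustrative application image
      volumeMounts:
        - name: conjur-secrets        # same volume, read by the application
          mountPath: /conjur/secrets
          readOnly: true
  volumes:
    - name: conjur-secrets
      emptyDir:
        medium: Memory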

Supported Services

  • Conjur Enterprise 11.1+

  • Conjur Open Source v1.4.2+

Supported Platforms

  • GKE

  • K8s 1.11+

  • Openshift v4.6-v4.8 (Conjur Enterprise only)

Using secrets-provider-for-k8s with Conjur Open Source

Are you using this project with Conjur Open Source? Then we strongly recommend choosing the version of this project to use from the latest Conjur OSS suite release. Conjur maintainers perform additional testing on the suite release versions to ensure compatibility. When possible, upgrade your Conjur version to match the latest suite release; when using integrations, choose the latest suite release that matches your Conjur version. For any questions, please contact us on Discourse.

Methods for Configuring CyberArk Secrets Provider

There are several methods available for configuring the CyberArk Secrets Provider:

  • Using Pod Environment Variables: The Secrets Provider can be configured by setting environment variables in a Pod manifest. For a description of the Secrets Provider environment variables and an example manifest, see the Set up Secrets Provider as an Init Container section of the Secrets Provider documentation (expand the collapsible section in Step 6 of that guide to see details).

  • Using Pod Annotations: The Secrets Provider can be configured by setting Pod Annotations in a Pod manifest. For details on how Annotations can be used to configure the Secrets Provider, see the Secrets Provider Push-to-File guide.

  • Using the Secrets Provider Helm chart (Standalone Application Mode Only): If you are using the Secrets Provider in standalone application mode, you can configure the Secrets Provider by setting Helm chart values and deploying the Secrets Provider using the Secrets Provider Helm chart.

Some notes about the different configuration methods:

  1. For a setting that can be configured either by Pod Annotation or by environment variable, a Pod Annotation configuration takes precedence over the corresponding environment variable configuration.
  2. If you are using the Secrets Provider in Push-to-File mode, then the Secrets Provider must be configured via Pod Annotations.
  3. If you are using the Secrets Provider in Kubernetes Secrets mode, it is recommended that you use environment variable settings to configure the Secrets Provider.
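For example (per note 2 above), in Push-to-File mode both the Conjur secrets to retrieve and the output file are declared through Pod Annotations. A minimal sketch, assuming the conjur.org/conjur-secrets.{group} and conjur.org/secret-file-path.{group} annotation names from the Push-to-File guide (the "db" group name, variable paths, and file name are illustrative):

metadata:
  annotations:
    conjur.org/container-mode: init
    conjur.org/secrets-destination: file
    # "db" is an illustrative secret group; list entries map aliases to Conjur variable paths
    conjur.org/conjur-secrets.db: |
      - url: my-app/db/url
      - password: my-app/db/password
    conjur.org/secret-file-path.db: ./db.yaml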

Enabling Tracing

Tracing of the CyberArk Secrets Provider for Kubernetes is available using the OpenTelemetry standard. Tracing is disabled by default. You can enable tracing using either Pod annotations or environment variables. To enable traces appended to the init container's logs, add the annotation conjur.org/log-traces: true to the Pod manifest, or set the LOG_TRACES environment variable to true. To instead export the traces to a Jaeger server, use the annotation conjur.org/jaeger-collector-url: http://<jaeger-collector-host>/api/traces, or use the JAEGER_COLLECTOR_URL environment variable. Traces will include errors to assist in troubleshooting.
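A minimal sketch of the annotation-based tracing configuration in a Pod manifest (the Jaeger collector host is a placeholder; use one annotation or the other depending on where you want the traces to go):

metadata:
  annotations:
    conjur.org/log-traces: "true"    # append traces to the init container's logs
    conjur.org/jaeger-collector-url: "http://<jaeger-collector-host>/api/traces"    # or export traces to Jaeger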

Releases

The primary source of CyberArk Secrets Provider for Kubernetes releases is our Dockerhub.

When we release a version, we push the following images to Dockerhub:

  1. Latest
  2. Major.Minor.Build
  3. Major.Minor
  4. Major

We also push the Major.Minor.Build image to our Red Hat registry.

Builds

We push the following tags to Dockerhub:

Edge - on every successful main build an edge tag is pushed (cyberark/secrets-provider-for-k8s:edge).

Latest - on every release the latest tag will be updated (cyberark/secrets-provider-for-k8s:latest). This tag means the Secrets Provider for Kubernetes meets the stability criteria detailed in the following section.

Semver - on every release a Semver tag will be pushed (cyberark/secrets-provider-for-k8s:1.1.0). This tag means the Secrets Provider for Kubernetes meets the stability criteria detailed in the following section.

Stable release definition

The CyberArk Secrets Provider for Kubernetes is considered stable when it meets the core acceptance criteria:

  • Documentation exists that clearly explains how to set up and use the provider and includes troubleshooting information to resolve common issues.
  • A suite of tests exists that provides excellent code coverage and exercises the expected use cases.
  • The CyberArk Secrets Provider for Kubernetes has had a security review, and all known high and critical issues have been addressed. Any low or medium issues that have not been addressed have been logged in the GitHub issue backlog with a label of the form security/X.
  • The CyberArk Secrets Provider for Kubernetes is easy to set up.
  • The CyberArk Secrets Provider for Kubernetes is clear about known limitations and bugs, if they exist.

Development

We welcome contributions of all kinds to CyberArk Secrets Provider for Kubernetes. For instructions on how to get started and descriptions of our development workflows, see our contributing guide.

Documentation

You can find official documentation on our site.

Community

Interested in checking out more of our open source projects? See our open source repository!

License

The CyberArk Secrets Provider for Kubernetes is licensed under the Apache License 2.0 - see LICENSE for more details.

secrets-provider-for-k8s's People

Contributors

aacastillo, abrahamko, alexkalish, andytinkham, bradleyboutcher, codihuston, diverdane, doodlesbykumbi, eladkug, eranha, garymoon, gl-johnson, hughsaunders, imheresamir, ismarc, john-odonnell, jtuttle, juniortaeza, mfelgate, micahlee, neil-k-zero, nessilahav, orenbm, rpothier, sgnn7, sigalsax, szh, tovli, tzheleznyak


secrets-provider-for-k8s's Issues

Create automation for DAP in GKE

We want to take the same automation we have in the k8s secrets repo for OpenShift and make it work with GKE for DAP.

DoD:

  • all tests are passing with GKE and DAP

refactor tests

(Following Srdjan's review: #21 (comment))

The test cases are many and seem to invoke similar steps. I wonder whether the testing might benefit from moving to a programming language like Go, or from an abstraction for the test cases that allows the test case specification to be declarative.

Secrets provider publishes a Red Hat certified image to the RH container registry

Secretless and the authn-k8s client are both already publishing to Red Hat.

This is connected to the Aha card: https://cyberark.aha.io/features/AAM-298

AC:

  • Work with infrastructure to create a new project in our Red Hat account for the secrets provider image (this includes getting the project-specific API key)
  • Update the project pipeline to authenticate to the RH container registry using the project-specific API key and push the updated image on publish to RH as well as dockerhub

Developer notes:

  • There are (private) guidelines here for adding a project to the RH registry that you may find useful
  • To see how Secretless added this support, you can see PR 1141 and PR 1149 - unfortunately, it was done in two steps since the per-project API key wasn't clear until publish time.

Jenkins has cucumber report

At the end of this job we will have a cucumber report in Jenkins; in case of failure we will get a message in Slack for the specific tag.

DOD:

  • Verify that a successful job creates a cucumber report
  • Run a successful test
  • Run a failing test
  • Verify that on failure the message is also sent to the specific tag in Slack (-owner)

Add retries to CI OC operations

Various CI builds have failed with errors similar to:

[2020-03-03T21:44:16.750Z] + oc delete rolebinding secrets-access-role-binding --ignore-not-found=true

[2020-03-03T21:44:17.008Z] Unable to connect to the server: dial tcp 54.197.37.237:8443: connect: no route to host
[2020-03-02T23:16:26.179Z] + oc delete configmap conjur-master-ca-env --ignore-not-found=true

[2020-03-02T23:16:26.439Z] Unable to connect to the server: dial tcp 54.197.37.237:8443: connect: no route to host

The actual operation that fails varies between builds.
The operations before it succeeded; this is probably due to OpenShift load or slow response, possibly because the test clusters are mostly backed by a single node.

Further proof that this error is intermittent is shown by oc operations that are already retried. In this example the loop runs 6 times, but only fails once on dial tcp.

[2020-03-02T19:38:13.818Z] Waiting for 'oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers | wc -l | tr -d ' ' | grep '^0$'' up to 600 s

[2020-03-02T19:38:13.818Z] ++ seq 300

[2020-03-02T19:38:13.818Z] + for i in $(seq $times_to_run)

[2020-03-02T19:38:13.819Z] + eval oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers '|' wc -l '|' tr -d \' \' '|' grep ''\''^0$'\'''

[2020-03-02T19:38:13.819Z] ++ oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers

[2020-03-02T19:38:13.819Z] ++ wc -l

[2020-03-02T19:38:13.819Z] ++ tr -d ' '

[2020-03-02T19:38:13.819Z] ++ grep '^0$'

[2020-03-02T19:38:14.185Z] .+ echo -n .

[2020-03-02T19:38:14.185Z] + sleep 2

[2020-03-02T19:38:16.346Z] + for i in $(seq $times_to_run)

[2020-03-02T19:38:16.346Z] + eval oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers '|' wc -l '|' tr -d \' \' '|' grep ''\''^0$'\'''

[2020-03-02T19:38:16.346Z] ++ wc -l

[2020-03-02T19:38:16.346Z] ++ tr -d ' '

[2020-03-02T19:38:16.346Z] ++ oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers

[2020-03-02T19:38:16.346Z] ++ grep '^0$'

[2020-03-02T19:38:16.346Z] .+ echo -n .

[2020-03-02T19:38:16.346Z] + sleep 2

[2020-03-02T19:38:18.248Z] + for i in $(seq $times_to_run)

[2020-03-02T19:38:18.248Z] + eval oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers '|' wc -l '|' tr -d \' \' '|' grep ''\''^0$'\'''

[2020-03-02T19:38:18.248Z] ++ grep '^0$'

[2020-03-02T19:38:18.248Z] ++ tr -d ' '

[2020-03-02T19:38:18.248Z] ++ wc -l

[2020-03-02T19:38:18.248Z] ++ oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers

[2020-03-02T19:38:18.817Z] + echo -n .

[2020-03-02T19:38:18.817Z] .+ sleep 2

[2020-03-02T19:38:20.728Z] + for i in $(seq $times_to_run)

[2020-03-02T19:38:20.728Z] + eval oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers '|' wc -l '|' tr -d \' \' '|' grep ''\''^0$'\'''

[2020-03-02T19:38:20.728Z] ++ oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers

[2020-03-02T19:38:20.728Z] ++ wc -l

[2020-03-02T19:38:20.728Z] ++ tr -d ' '

[2020-03-02T19:38:20.728Z] ++ grep '^0$'

[2020-03-02T19:38:20.990Z] No resources found.

[2020-03-02T19:38:20.990Z] Unable to connect to the server: dial tcp 54.197.37.237:8443: connect: no route to host

[2020-03-02T19:38:20.990Z] .+ echo -n .

[2020-03-02T19:38:20.990Z] + sleep 2

[2020-03-02T19:38:23.042Z] + for i in $(seq $times_to_run)

[2020-03-02T19:38:23.042Z] + eval oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers '|' wc -l '|' tr -d \' \' '|' grep ''\''^0$'\'''

[2020-03-02T19:38:23.042Z] ++ oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers

[2020-03-02T19:38:23.042Z] ++ wc -l

[2020-03-02T19:38:23.042Z] ++ tr -d ' '

[2020-03-02T19:38:23.042Z] ++ grep '^0$'

[2020-03-02T19:38:23.300Z] .+ echo -n .

[2020-03-02T19:38:23.301Z] + sleep 2

[2020-03-02T19:38:25.835Z] + for i in $(seq $times_to_run)

[2020-03-02T19:38:25.835Z] + eval oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers '|' wc -l '|' tr -d \' \' '|' grep ''\''^0$'\'''

[2020-03-02T19:38:25.835Z] ++ oc get pods --namespace=test-app-5-94a10631-0 --selector app=test-env --no-headers

[2020-03-02T19:38:25.835Z] ++ wc -l

[2020-03-02T19:38:25.835Z] ++ grep '^0$'

[2020-03-02T19:38:25.835Z] ++ tr -d ' '

[2020-03-02T19:38:25.835Z] No resources found.

[2020-03-02T19:38:25.835Z] Success!

[2020-03-02T19:38:25.835Z] + echo 'Success!'

[2020-03-02T19:38:25.835Z] + return 0

Investigate ways to increase reliability:

  • Use the bl_retry_constant function from bash-lib to retry all oc operations. Possibly define an ocr function that passes all options through to oc with a few retries, and replace all uses of oc with ocr.
  • Ignore failures on deletions. This is ugly and would leave a mess behind, but it wouldn't be catastrophic as CI clusters are now cleared on a schedule. This also doesn't solve the problem for non-delete oc actions.
  • Retry whole stages/scripts? IMO this would be less likely to work as scripts are usually not idempotent.
  • Other strategy?

Automation failed - Manifests use older deprecated API versions

Secrets provider failed on the GKE environment because the manifests use older, deprecated API versions.
Our deploy contains changes that need to be rebased onto deploy-oss-tag as well (120).
In addition, we need to update the manifest to use v1 instead of v1beta1 in conjur-cluster.yaml.

Steps to Reproduce:
Run secrets provider for k8s on gke environment

Expected Results:
Build Pass

Actual Results:
Failed and you see the following errors:

error: unable to recognize "STDIN": no matches for kind "Deployment" in version "apps/v1beta1"

DOD:

  • secrets-provider-for-k8s is working

Run tests with GKE

At this point, we have tests only with OpenShift 3.11. We should add automation for GKE and preferably also for OC 3.9 and OC 3.10.

These tests run in kubernetes-conjur-demo so we can use that for reference.

Add tool to autogenerate NOTICES.txt to pipeline

For our releases, we are able to use a tool to autogenerate our NOTICES.txt.

We would like to add this as part of our pipeline to make for an easier release process.

Blocked
At present, this tool is not completely ready because there is still a minimal manual task that needs to take place: double-checking the work of the tool (ensuring all licenses from go.mod are accounted for).

Awaiting
Getting full list of limitations from @izgeri

Find/Replace old repo naming w/ new

We need to go through the repo and other linked repos to ensure nothing was broken by the name change

In other words:
CyberArk-secret-provider-for-K8s -> Kubernetes-secret-provider-for-k8s

Add code coverage results in Jenkins

We want to verify our code coverage in the project and get the output results for each build.

  • Connect code climate to this project

  • Add support via Jenkins to our automation

Refactor automation to work with HELM

We need to introduce Helm into our technical skill set. We want to refactor the k8s secrets integration tests to deploy using Helm so they will be easier to manage.

  • create design
  • implement HELM simple deployment

Run integration tests on Conjur OSS

We should run integration tests of kubernetes-conjur-demo (which runs the authn-client & secretless) and of cyberark-secrets-provider-for-k8s also on Conjur OSS

Things to do:

  • Move changes in the consumers into conjur-oss-openshift-deploy
    • We should make this repo as close as possible to kubernetes-conjur-deploy so the consumers will have slight changes
  • Move conjur-oss-openshift-deploy files into kubernetes-conjur-deploy
  • Add flag to cyberark-secrets-provider-for-k8s for testing with OSS
  • Add a step in the Jenkinsfile to run the integration tests with OSS

  • Refactor kubernetes-conjur-deploy to re-use yaml & script files
  • Add support for GKE

Add tests to verify our logs

We have great logs, but we need to verify that they appear in the right flows; otherwise it may be confusing to see a certain log appear.
This document describes all the logs we have and whether they have tests (acceptance, unit). I added priorities (High, Medium, Low) so we can handle this better.

We can test our logs with one of the following actions:

  1. existing tests
  2. Create new acceptance test
  3. Create new unit test

DOD:

  • Handle all the high priority issues

  • Handle all the medium priority issues

  • Handle all the low priority issues

Support for complex secrets

Hello,

cyberark/secrets-provider-for-k8s seems to have issues understanding complex secrets such as ssh keys or json documents.

Some details on the environment I'm using :

  • OpenShift 4.3.8
  • Conjur OSS 1.5.0
  • cyberark/secrets-provider-for-k8s latest (sha256:e207dfe8b80425d1f1acfe84001f3418e4c69b82c401d5ec601dee03df511e5b)

Set up the required policies to be able to use the authenticator kubernetes-authenticator-client in the namespace argocd.

A variable someenv/hf333ocp/artifactory-pull-secret/dockerconfigjson is set ->

root@conjur-configure-pwz4hziqh9-jbwnp:/# conjur variable values add 'someenv/hf333ocp/artifactory-pull-secret/dockerconfigjson' 'some_simple_secret_value'
Value added
root@conjur-configure-pwz4hziqh9-jbwnp:/# conjur variable value someenv/hf333ocp/artifactory-pull-secret/dockerconfigjson
some_simple_secret_value

Using this test secret as the starting state of the secret, and this simple Job:

---
kind: Secret
apiVersion: v1
metadata:
  name: test-credentials
type: Opaque
stringData:
  conjur-map: |-
    .dockerconfigjson: |-
      someenv/hf333ocp/artifactory-pull-secret/dockerconfigjson
---
apiVersion: batch/v1
kind: Job
metadata:
  name: conjur-test-provider-1
spec:
  activeDeadlineSeconds: 6000
  template:
    spec:
      serviceAccountName: kubernetes-authenticator-client
      containers:
        - image: 'cyberark/secrets-provider-for-k8s'
          imagePullPolicy: IfNotPresent
          name: kubernetes-authenticator-client
          env:
            - name: DEBUG
              value: 'true'
            - name: CONTAINER_MODE
              value: init
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CONJUR_VERSION
              value: '5'
            - name: CONJUR_APPLIANCE_URL
              value: "someOCPRoute"
            - name: CONJUR_AUTHN_URL
              value: "https://cs-conjur-1-conjur-oss.conjur.svc.cluster.local/authn-k8s/kubernetes_authenticator"
            - name: CONJUR_ACCOUNT
              value: 'default'
            - name: CONJUR_AUTHN_LOGIN
              value: "host/conjur/authn-k8s/kubernetes_authenticator/apps/argocd/*/*"
            - name: CONJUR_SSL_CERTIFICATE
              valueFrom:
                secretKeyRef:
                  name: conjur-cert
                  key: tls.crt
            - name: K8S_SECRETS
              value: test-credentials
            - name: SECRETS_DESTINATION
              value: k8s_secrets
      restartPolicy: Never
---	  

In that case, a simple password works and the K8s Secret is correctly updated with the secret value from Conjur:

DEBUG: 2020/03/27 12:55:36 main.go:121: CSPFK001D Debug mode is enabled
INFO: 2020/03/27 12:55:36 main.go:65: CSPFK001I Authenticating as user '&{host/conjur/authn-k8s/kubernetes_authenticator/apps/argocd/*/* host.conjur.authn-k8s.kubernetes_authenticator.apps argocd.*.*}'
INFO: 2020/03/27 12:55:36 authenticator.go:181: CAKC005I Trying to login Conjur...
INFO: 2020/03/27 12:55:36 authenticator.go:113: CAKC007I Logging in as user &{host/conjur/authn-k8s/kubernetes_authenticator/apps/argocd/*/* host.conjur.authn-k8s.kubernetes_authenticator.apps argocd.*.*}.
INFO: 2020/03/27 12:55:36 requests.go:23: CAKC011I Login request to: https://cs-conjur-1-conjur-oss.conjur.svc.cluster.local/authn-k8s/kubernetes_authenticator/inject_client_cert
INFO: 2020/03/27 12:55:36 authenticator.go:187: CAKC002I Logged in
INFO: 2020/03/27 12:55:36 authenticator.go:170: CAKC008I Cert expires: 2020-03-30 12:55:36 +0000 UTC
INFO: 2020/03/27 12:55:36 authenticator.go:171: CAKC009I Current date: 2020-03-27 12:55:36.874151925 +0000 UTC
INFO: 2020/03/27 12:55:36 authenticator.go:172: CAKC010I Buffer time:  30s
INFO: 2020/03/27 12:55:36 requests.go:47: CAKC012I Authn request to: https://cs-conjur-1-conjur-oss.conjur.svc.cluster.local/authn-k8s/kubernetes_authenticator/default/host%2Fconjur%2Fauthn-k8s%2Fkubernetes_authenticator%2Fapps%2Fargocd%2F%2A%2F%2A/authenticate
INFO: 2020/03/27 12:55:36 authenticator.go:250: CAKC001I Successfully authenticated
INFO: 2020/03/27 12:55:36 k8s_secrets_client.go:53: CSPFK004I Creating Kubernetes client...
INFO: 2020/03/27 12:55:36 k8s_secrets_client.go:22: CSPFK005I Retrieving Kubernetes secret 'test-credentials' from namespace 'argocd'...
DEBUG: 2020/03/27 12:55:36 provide_conjur_secrets.go:120: CSPFK009D Processing 'conjur-map' data entry value of k8s secret 'test-credentials'
INFO: 2020/03/27 12:55:36 conjur_secrets_retriever.go:11: CSPFK003I Retrieving following secrets from Conjur: [someenv/hf333ocp/artifactory-pull-secret/dockerconfigjson]
INFO: 2020/03/27 12:55:36 conjur_client.go:21: CSPFK002I Creating Conjur client...
INFO: 2020/03/27 12:55:36 k8s_secrets_client.go:53: CSPFK004I Creating Kubernetes client...
INFO: 2020/03/27 12:55:36 k8s_secrets_client.go:40: CSPFK006I Patching Kubernetes secret 'test-credentials' in namespace 'argocd'
kind: Secret
apiVersion: v1
metadata:
  name: test-credentials
  namespace: argocd
  selfLink: /api/v1/namespaces/argocd/secrets/test-credentials
  uid: 0dfb8da0-c58c-4105-891b-c3e79d02b0b2
  resourceVersion: '50487994'
  creationTimestamp: '2020-03-27T06:56:23Z'
data:
  .dockerconfigjson: c29tZV9zaW1wbGVfc2VjcmV0X3ZhbHVl
  conjur-map: >-
    LmRvY2tlcmNvbmZpZ2pzb246IHwtCiAgc29tZWVudi9oZjMzM29jcC9hcnRpZmFjdG9yeS1wdWxsLXNlY3JldC9kb2NrZXJjb25maWdqc29u
type: Opaque

However, if I set the secret value to a more complex JSON document:

root@conjur-configure-936gnaui13-zwdt2:/# conjur variable value someenv/hf333ocp/artifactory-pull-secret/dockerconfigjson
{"auths":{"someurl":{"auth":"sometoken="}}}

In that case cyberark/secrets-provider-for-k8s fails with the following parsing error. Note that the same kind of parsing error occurs for an ssh key (but this time complaining about a \r character).

DEBUG: 2020/03/27 13:11:08 main.go:121: CSPFK001D Debug mode is enabled
INFO: 2020/03/27 13:11:08 main.go:65: CSPFK001I Authenticating as user '&{host/conjur/authn-k8s/kubernetes_authenticator/apps/argocd/*/* host.conjur.authn-k8s.kubernetes_authenticator.apps argocd.*.*}'
INFO: 2020/03/27 13:11:08 authenticator.go:181: CAKC005I Trying to login Conjur...
INFO: 2020/03/27 13:11:08 authenticator.go:113: CAKC007I Logging in as user &{host/conjur/authn-k8s/kubernetes_authenticator/apps/argocd/*/* host.conjur.authn-k8s.kubernetes_authenticator.apps argocd.*.*}.
INFO: 2020/03/27 13:11:08 requests.go:23: CAKC011I Login request to: https://cs-conjur-1-conjur-oss.conjur.svc.cluster.local/authn-k8s/kubernetes_authenticator/inject_client_cert
INFO: 2020/03/27 13:11:08 authenticator.go:187: CAKC002I Logged in
INFO: 2020/03/27 13:11:08 authenticator.go:170: CAKC008I Cert expires: 2020-03-30 13:11:08 +0000 UTC
INFO: 2020/03/27 13:11:08 authenticator.go:171: CAKC009I Current date: 2020-03-27 13:11:08.233864142 +0000 UTC
INFO: 2020/03/27 13:11:08 authenticator.go:172: CAKC010I Buffer time:  30s
INFO: 2020/03/27 13:11:08 requests.go:47: CAKC012I Authn request to: https://cs-conjur-1-conjur-oss.conjur.svc.cluster.local/authn-k8s/kubernetes_authenticator/default/host%2Fconjur%2Fauthn-k8s%2Fkubernetes_authenticator%2Fapps%2Fargocd%2F%2A%2F%2A/authenticate
INFO: 2020/03/27 13:11:08 authenticator.go:250: CAKC001I Successfully authenticated
INFO: 2020/03/27 13:11:08 k8s_secrets_client.go:53: CSPFK004I Creating Kubernetes client...
INFO: 2020/03/27 13:11:08 k8s_secrets_client.go:22: CSPFK005I Retrieving Kubernetes secret 'test-credentials' from namespace 'argocd'...
DEBUG: 2020/03/27 13:11:08 provide_conjur_secrets.go:120: CSPFK009D Processing 'conjur-map' data entry value of k8s secret 'test-credentials'
INFO: 2020/03/27 13:11:08 conjur_secrets_retriever.go:11: CSPFK003I Retrieving following secrets from Conjur: [someenv/hf333ocp/artifactory-pull-secret/dockerconfigjson]
INFO: 2020/03/27 13:11:08 conjur_client.go:21: CSPFK002I Creating Conjur client...
INFO: 2020/03/27 13:11:08 k8s_secrets_client.go:53: CSPFK004I Creating Kubernetes client...
INFO: 2020/03/27 13:11:08 k8s_secrets_client.go:40: CSPFK006I Patching Kubernetes secret 'test-credentials' in namespace 'argocd'
DEBUG: 2020/03/27 13:11:08 provide_conjur_secrets.go:155: CSPFK005D Failed to patch k8s secret. Reason: invalid character 'a' after object key:value pair
ERROR: 2020/03/27 13:11:08 provide_conjur_secrets.go:156: CSPFK022E Failed to patch k8s secret
ERROR: 2020/03/27 13:11:08 provide_conjur_secrets.go:82: CSPFK023E Failed to patch K8s secrets
ERROR: 2020/03/27 13:11:08 main.go:78: CSPFK016E Failed to provide Conjur secrets

Describe the solution you would like

cyberark/secrets-provider-for-k8s should support retrieving secrets that are as complex as it is possible to store in Conjur.

Describe alternatives you have considered

Encoding the JSON document or SSH key in base64 prior to storing it in Conjur removes the issue; however, it also removes the main added value of cyberark/secrets-provider-for-k8s, since the workload using the secret must then process it back into a standard SSH key or document. That is not always possible when you don't own the binaries of that workload (which I see as the main use case of cyberark/secrets-provider-for-k8s).

Verify that the secrets-provider doesn't require CONJUR_VERSION

We are now consuming a newer version of conjur-authn-k8s-client that doesn't require the CONJUR_VERSION env var. We should test that we also don't need it.

DoD:

  • Tests are not inserting the CONJUR_VERSION variable to validate that it's not required and is defaulted to version 5

Red Hat certified image available in registry

The work on our side to provide a RH-certified image has been completed. With this, we still have some outstanding tasks to finish once we get approval on our image from RH.

PR: #93

TODO

  • RH image has been approved by RH and we have been given approval from @boazmichaely to continue
  • Manually build, push image, and publish to be forward facing (from point of merge)
  • Add the following to CONTRIBUTING & README docs
  • Add to forward facing documentation

Add the following to Contributing.md

Publish the git release

  1. In the GitHub UI, create a release from the new tag and copy the change log
    for the new version into the GitHub release description.
  2. The Jenkins pipeline auto-publishes new images to DockerHub, but to publish the Red Hat certified image you will need to visit its management page and manually publish the image.

Publish the Red Hat image

  1. Visit the Red Hat project page once the images have been pushed and manually choose to publish the latest release.

Readme.md (under Releases)

We also push the Major.Minor.Build image to our Red Hat registry.

Secrets Provider (P2) - Research, Authenticate with Conjur/DAP

A valid solution requires that the Secrets Provider be decoupled from customer applications (i.e., not sitting with each app in the same pod).

Therefore, we will need to think of a solution that meets this demand and still authenticates with Conjur/DAP without sacrificing the granularities.

We currently have 5 application identity granularities:

  1. Namespace (Default)
  2. Service Account
  3. Deployment / Deployment Config (Openshift)
  4. Stateful set
  5. Pod

As per our docs, it is crucial that we preserve the integrity of Deployment / Deployment Config (Openshift)

We recommend using this option as a transition, and move towards using the Kubernetes Deployment and StatefulSet resources as hosts.

Solution doc: https://github.com/cyberark/secrets-provider-for-k8s/pull/122/files

Alert on failed builds

Regular builds are always sent to #jenkins with no tags.
If the build runs on master and the status is failed/unstable, the notification goes to the default channel (development) with no tags.
We need to make sure that when a test fails on master, we get a Slack message for a specific group.
Create a new tag in Slack named “-owners” (“secrets-provider-for-k8s-owners”).

Send the notification to the general channel with the “@-owners” tag.
DOD:

  • Add a new tag "secrets-provider-for-k8s-owners"
  • Tests go to the #jenkins channel in any case
  • Testing scenario: if the build fails, the relevant message is posted to #jenkins with the relevant tag (master can be validated only after merging)
  • Testing scenario: if the build succeeds, the notification is sent only to #jenkins without a tag

Rename repository to secrets-provider-for-k8s

Rename repository to secrets-provider-for-k8s
Basically removing the word 'cyberark' from the start of the repo name

Send a notification on Slack and make sure scripts are updated as well (if needed).

Create automation for OSS in GKE

We want to take the same automation we have in the k8s secrets repo for OpenShift and make it work with GKE for Conjur OSS.

DoD:

  • all tests are passing with GKE and OSS

Add automation testing for Openshift 3.9

A previous PR has been opened that covers automation for GKE, Openshift 3.11 and 3.10 (#46) for the Secrets Provider for K8s. This is a separate card to support 3.9. At present, tests are run manually, and this effort is to build automation for 3.9 into our pipeline.

Open questions to consider:

  • Do we want to support 3.9? Are our customers still on 3.9? This is a decision that needs to be made between the team and PM.
  • Why do we get a different response from 3.11/3.10 than from 3.9? Maybe there is something in our scripts that is not supported in 3.9 - deleting deployments, pods, etc. See the authn-client and demo repos.

Verify our missing steps to make this repo stable with more coverage

We want to improve this repo and make it better from several perspectives:

  1. Infra environment for testing
  2. Add more tests
  3. Work with OSS in a better way (today we are working with a specific tag in the deploy repo)
  4. Make it more stable

Dod

  • Create a Confluence page which declares what we are missing in this repo to make it better.

  • Declare priority for each task

Test automation for DAP & OSS for GKE/K8s design

The following is a design for running OSS/DAP testing for the Secrets Provider for K8s in GKE. This design will change as the task progresses and is definitely open for feedback and insights.

  1. Use the oss-by-default-oss-by-default tag from the kubernetes-conjur-deploy repo and use that tagged commit to run the tests
  2. I will be using the k8s-conjur-demo as an example of how to configure the repo to extend configuration to GKE
  3. Add proper GKE configurations in order to deploy DAP/OSS in GKE
  4. Roles and rolebindings apiVersions will be updated according to what is supported in GKE (from v1 -> rbac.authorization.k8s.io/v1, for example)
  5. Configure scripts so that, according to the flag received (--dap/oss), the proper test environment will be deployed on GKE.

Current obstacles
Problem: During the Configuring Master pod stage, the container is terminated with code 137. I am not sure what changed between yesterday's deployment and today's, except for the reverting of the commit.
Solution: In response to this error:

  • Instead of taking the oss-by-default-oss-by-default tag, I will try master
  • Check the logs to see where the failure took place
  • Post in Slack, asking whether something was changed in k8s-conjur-deploy

Standardised CHANGELOG exists, and is validated via pipeline

If the repo has a changelog that doesn't meet the standard, do try to change earlier entries to match the standard.
If the repo doesn't have a changelog use this as a starter:

# Changelog
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).

## [Unreleased]

Acceptance criteria

cli_get_pods_test_env need to work with retry

The function cli_get_pods_test_env in utils.sh gets the response "No resources found" and continues, instead of retrying:
cli_get_pods_test_env () { $cli_with_timeout "get pods --namespace=$TEST_APP_NAMESPACE_NAME --selector app=test-env --no-headers" }

We need to add a grep that waits for a response other than "No resources found".

Tip:
In test_case_teardown we wait for "No resources found" (terminating) and it works well with retry, because there is a specific grep for this:
$cli_with_timeout "get pods --namespace=$TEST_APP_NAMESPACE_NAME --selector app=test-env --no-headers | wc -l | tr -d ' ' | grep '^0$'"

DOD:

  • Add fix to automation

Secrets Provider support Rotations and deployed separately - Research

Overview

We would like to improve the secrets provider's scope and support:

Milestone 1

  • Secrets Provider runs and is deployed without affecting the app deployment or app lifecycle
  • Secrets Provider can serve many apps

Milestone 2

  • Secrets Provider supports rotation

Use cases

Milestone 1

As an operations team member, Liz,
I would like to deploy the Secrets Provider in a way that does not interfere with my application deployment,
So that my app can be upgraded and managed separately from the Secrets Provider.

As a k8s app developer, David,
I would like to fetch my secrets in an easy and naive way without changing my code,
So that my app gets the most up-to-date secret as a K8s Secret (usually provided in the deployment as an env var or file).
The secrets are expected to be provided in advance, before the app is initialized.

If a secret's value changes, please see the "How to handle rotation?" section below.

Milestone 2

As a k8s app developer, David,
I would like to fetch my secrets in an easy and naive way without changing my code,
So that my app gets the most up-to-date secret as a K8s Secret (usually provided in the deployment as an env var or file).
The secrets are expected to be provided in advance, before the app is initialized.

If a secret's value changes, please see the "How to handle rotation?" section below.

As an infosec persona (Vault admin, Conjur admin, or DAP admin),
I would like to rotate the secret value in Vault or in DAP, manually or using CPM policies,
So that I stay secure and follow security best practices.

As an infosec persona (Vault admin, Conjur admin, or DAP admin),
I would like to change the secret value,
So that I stay secure and follow security best practices.

Requirements for Milestone 1

In this phase, the Secrets Provider needs to be able to run as a separate entity and serve multiple application containers that run on multiple pods.

Deployment requirements

  • The Secrets Provider needs to be able to run as a separate entity and serve multiple application containers that run on multiple pods.
  • The lifecycle of this deployment option should also support upgrade and removal of the deployment.
  • Provide a way for our customers to understand the state of the Secrets Provider - when it has finished initializing.
  • The deployment method of the Secrets Provider should be native to K8s. Deployment and uninstall should be done using Helm.

Deployment Flow with helm chart
Prerequisites

  1. Configure K8s Secrets with the conjur-map metadata defined
     Optional: add a label to the K8s Secret (Milestone 2)

  2. Create a custom-values.yaml file that may optionally contain the following parameters; when a parameter does not appear in the file, its default value is used (see the sketch after this list):

     2.1 Service Account Name (default: secret-provider-account)
     2.2 Role Name (default: secret-provider-role)
     2.3 Role Binding Name (default: secret-provider-role-binding)
     2.4 Follower Connection Timeout (default: 10 seconds) - may stay internal ??
     2.5 Retries Count (default: 2) - the Secrets Provider will wait the "Follower Connection Timeout" amount of time to authenticate and get secrets from the follower; if the timeout is reached, it will retry according to "Retries Count"
     2.6 Sync Interval Time (default: 5 minutes) (Milestone 2)
     2.7 If label filtering is defined: a K8s Secret Label parameter should be added to custom-values.yaml (Milestone 2)
     2.8 Rotation support - Enabled/Disabled (Milestone 2)

  3. Install the Helm chart for the Secrets Provider (the Helm chart should create the Service Account, Role / ClusterRole and RoleBinding / ClusterRoleBinding, as well as the Job/Deployment that will run).
     Installation will print a message including data about the deployment mode (whether it is a Job or a Deployment).
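A hypothetical custom-values.yaml sketch based on the parameters above; the key names below are purely illustrative, since this design does not fix the actual chart value names:

# custom-values.yaml (illustrative key names only)
serviceAccountName: secret-provider-account       # 2.1
roleName: secret-provider-role                    # 2.2
roleBindingName: secret-provider-role-binding     # 2.3
followerConnectionTimeout: 10                     # 2.4, in seconds
retriesCount: 2                                   # 2.5
syncIntervalTime: 300                             # 2.6, in seconds (Milestone 2)
k8sSecretLabel: conjur-managed                    # 2.7, only if label filtering is used (Milestone 2)
rotationSupport: enabled                          # 2.8 (Milestone 2)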

Requirements for Milestone 2 - rotation

How to handle rotation?

It is the Secrets Provider's responsibility to update the K8s Secret after rotation, yet it is the application developer's responsibility to make sure the application reads the updated value.

Configuration requirements

  • The Secrets Provider needs to support rotation and update secrets once they change; one way to do this is based on a time interval.
  • The time interval should be configurable using a setting named "SECRETS_UPDATE_INTERVAL", with a default of every 5 minutes (to match the Synchronizer defaults). Ideally, this number should be as small as possible. A sketch of how this setting might look appears after this list.
  • Having a recommendation for a minimum time interval is important.
  • The configuration should be in seconds and it should have a minimum recommendation.
  • The ability to support rotation should be configurable as well.
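A minimal sketch of how the proposed interval setting might appear in the Secrets Provider container spec, assuming the SECRETS_UPDATE_INTERVAL name from this design (this is a proposal in this research card, not a confirmed released setting):

    env:
      - name: SECRETS_UPDATE_INTERVAL
        value: "300"    # proposed: refresh secrets every 5 minutes (value in seconds)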

Performance requirements

Please test and document how many secrets can be updated in 5 minutes on average, where a secret is either an extremely long password or one Vault account, which is 5 variables: username, address, port, password, DNS.
Have a test that makes sure we are meeting this SLA once it is determined.
The number of secret providers per follower is expected to be 1000.

Requirements for both Milestones

Logs & Supportability -

  • Make sure each interval is written down and it is easy to see what happened in each interval
  • Monitoring
    - We need to provide a way to understand the status of the Secrets Provider (health)
  • Supportability can be tricky here as we now serve many applications; please document as much information as possible that can help with supportability.
    Usually the status of a pod / container in K8s is provided by probes together with kubectl / oc commands.

Audit

  • In the init container solution we authenticate ourselves as the application (sitting in the same pod); now we are serving more apps and pods, so the audit would be hard to read. Think about this and determine A. whether there is a problem, and B. how we may solve it.

Innovative

  • If possible, please write a patent about this (smile)
  • If not, one can suggest new ways to authenticate in k8s, or think of other ways to integrate with K8s Secrets

Test Env
Please test with followers both inside and outside the cluster (deployed on a VM with an LB in front of it).

DOD

  • The process logic of this feature should be written by the PO and tech lead hand in hand
  • Research page with all info regarding the Secrets Provider supporting rotation and being deployed separately
  • Test plan written and reviewed by PO & QAA
  • Security review was done and issues were raised
  • Research results presented and shared
  • High level effort estimations and risks

Fix demo

While improving our tests, we broke our demo. We should fix this so it is easier to demo the secrets-provider.

Also, it is best not to use ./demo/pet-store-env.sh.yml | $cli create -f -; instead, create a generated yml file. This is better for the demo as we can load a YAML file that we created ourselves.

Version is printed in startup of the program

We would like to give customers the ability to know in production which version of the secrets-provider they are using.

In this PR we added this ability to the conjur-authn-k8s-client so that's a good reference.

DoD:

  • secrets-provider-for-k8s prints its version upon startup

Adding timeout to pipeline

The job got stuck once when it received an error from oc:
error: error sending request: Post https://openshift-311.itci.conjur.net:8443/api/v1/namespaces/conjur-172b7bc8-b-test/pods/conjur-cluster-694f548678-ncvqf/exec?command=evoke&command=unpack&command=seed&command=%2Ftmp%2Fstandby-seed.tar&container=conjur-appliance&container=conjur-appliance&stderr=true&stdout=true: dial tcp 54.197.37.237:8443: connect: no route to host
There is a different ticket which handles the deploy repo problems.

DOD:

  • Verify all is working after the fix

RH image - Create image and update scripts

DOD

  • Update image (not base image, but build on top of it)
  • Ensure image works as expected (check by pushing to Jenkins to trigger a build)
  • Update project publish.sh to also push image to RH repo

Helpful resources:

PR: #93

Add project to BlackDuck

DoD

  • project was added to BlackDuck and scanned
  • Product’s Acknowledgment File is updated in Confluence
  • add a notices file to the github repo? Consult with @izgeri

Fix nightly automation

Our nightly build is not stable and we need to fix the problems. It looks like the problem is with the environment and not with the tests, since it failed during namespace creation on OpenShift.

DOD

  • Run automation 10 times without failures

Send slack notification on master failure only

We get a notification to our group for every failure on secrets-provider-for-k8s-owners.
We want to get failure notifications from master only.

Dod:

  • Verify the failure notification is sent to the group only from master

  • Run a test from master and from a PR (since it's a Jenkinsfile, we can do the test from master as well)

Test solution with Conjur 11.1+ (for K8s + Openshift)

We need to have an automated build that tests the provider's integration with Conjur 11.1 and pushes an image to Docker Hub.

Make sure the image doesn't contain build-related files and is small, with the minimum number of layers we can have.

Make sure to set the proper image in the documentation as well

BREAKING (Openshift)

  • OSS runs in Openshift
  • OSS runs end-to-end in Openshift with the Secrets Provider for K8s (locally) and test #1-10 pass (vanilla)
  • Andrew repo has been updated to be similar to k8s conjur deploy repo by removing hardcoded variables and given appropriate naming
  • Secrets-provider-for-k8s variables have been updated with appropriate variables and hard coded secrets have been removed
  • OSS vanilla flow is automated in Jenkins
  • Moti tests have passed with OSS (BONUS)
  • Moti tests have automaticity (BONUS)
  • Andrew repo has been reviewed for sensitive info and quality
  • Andrew repo has been moved/forked into our cyberark organization (moved to k8s-conjur-deploy)

BREAKING (K8S)

  • OSS runs with K8s using GKE

Image name was changed

The image name was changed from
cyberark/cyberark-secrets-provider-for-k8s to cyberark/secrets-provider-for-k8s

We need to check with Inbal and either change the image name back if a revert is needed, or change the docs if the image name is OK.
