kubernetes-external-secrets's Introduction

Deprecated

This project has been deprecated. Please take a look at ESO (External Secrets Operator) instead https://github.com/external-secrets/external-secrets

History

This project was moved from the GoDaddy GitHub organization to the external-secrets organization in an effort to consolidate different projects with the same objective. More information here.

Kubernetes External Secrets

Kubernetes External Secrets allows you to use external secret management systems, like AWS Secrets Manager or HashiCorp Vault, to securely add secrets in Kubernetes. Read more about the design and motivation for Kubernetes External Secrets on the GoDaddy Engineering Blog.

The community and maintainers of this project and related Kubernetes secret management projects use the #external-secrets channel on the Kubernetes slack for discussion and brainstorming.

How it works

The project extends the Kubernetes API by adding an ExternalSecret object via a Custom Resource Definition (CRD) and a controller that implements the behavior of the object itself.

An ExternalSecret declares how to fetch the secret data, while the controller converts all ExternalSecrets to Secrets. The conversion is completely transparent to Pods that can access Secrets normally.

By default Secrets are not encrypted at rest and are open to attack, either via the etcd server or via backups of etcd data. To mitigate this risk, use an external secret management system with a KMS plugin to encrypt Secrets stored in etcd.
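
For reference, encrypting Secrets in etcd is configured by pointing the API server at an EncryptionConfiguration with a KMS provider; a minimal sketch (the plugin name and socket path are assumptions):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # hypothetical KMS plugin; the socket path depends on your deployment
      - kms:
          name: my-kms-plugin
          endpoint: unix:///var/run/kms-plugin.sock
          cachesize: 1000
      # fall back to unencrypted reads for not-yet-rewritten Secrets
      - identity: {}
```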

System architecture

  1. ExternalSecrets are added in the cluster (e.g., kubectl apply -f external-secret-example.yml)
  2. Controller fetches ExternalSecrets using the Kubernetes API
  3. Controller uses ExternalSecrets to fetch secret data from external providers (e.g., AWS Secrets Manager)
  4. Controller upserts Secrets
  5. Pods can access Secrets normally
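
To illustrate step 5, a Pod consumes the controller-generated Secret like any other Secret; a sketch (the image name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-service
spec:
  containers:
    - name: app
      image: hello-service:latest   # hypothetical image
      env:
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: hello-service   # Secret created by the controller
              key: password
```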

How to use it

Install with Helm

The official helm chart can be used to create the kubernetes-external-secrets resources and Deployment on a Kubernetes cluster using the Helm package manager.

$ helm repo add external-secrets https://external-secrets.github.io/kubernetes-external-secrets/
$ helm install [RELEASE_NAME] external-secrets/kubernetes-external-secrets

For more details about configuration see the helm chart docs

Install with kubectl

If you don't want to install Helm on your cluster and just want to use kubectl to install kubernetes-external-secrets, you can fetch the Helm client CLI and use the following sample command to generate the Kubernetes manifests:

$ helm template --include-crds --output-dir ./output_dir external-secrets/kubernetes-external-secrets

The generated Kubernetes manifests will be in ./output_dir and can be applied with kubectl apply --recursive -f ./output_dir to deploy kubernetes-external-secrets to the cluster.

Secrets Manager access

For kubernetes-external-secrets to be able to retrieve your secrets it will need access to your secret backend.

AWS based backends

Access to the AWS secrets backends (SSM Parameter Store & Secrets Manager) can be granted in various ways:

  1. Granting your nodes explicit access to your secrets using the node instance role (easy for experimentation, not recommended)

  2. IAM roles for service accounts.

  3. Per pod IAM authentication: kiam or kube2iam.

  4. Directly providing AWS access credentials to the kubernetes-external-secrets pod via environment variables.

Optionally, configure custom endpoints using environment variables:

  • AWS_SM_ENDPOINT - Useful to set endpoints for FIPS compliance.
  • AWS_STS_ENDPOINT - Useful to set endpoints for FIPS compliance or regional latency.
  • AWS_SSM_ENDPOINT - Useful to set endpoints for FIPS compliance or custom VPC endpoint.

Using AWS access credentials

Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars in the kubernetes-external-secrets session/pod. You can use envVarsFromSecret in the helm chart to create these env vars from existing k8s secrets.

Additionally, you can specify a roleArn which will be assumed before retrieving the secret. You can limit the range of roles which can be assumed by a particular namespace by using annotations on the namespace resource. The annotation key is configurable (see the Helm chart configuration). The annotation value is evaluated as a regular expression and matched against the roleArn.

kind: Namespace
metadata:
  name: iam-example
  annotations:
    # annotation key is configurable
    iam.amazonaws.com/permitted: "arn:aws:iam::123456789012:role/.*"

Add a secret

Add your secret data to your backend. For example, AWS Secrets Manager:

aws secretsmanager create-secret --name hello-service/password --secret-string "1234"

AWS Parameter Store:

aws ssm put-parameter --name "/hello-service/password" --type "String" --value "1234"

and then create a hello-service-external-secret.yml file:

apiVersion: "kubernetes-client.io/v1"
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: secretsManager
  # optional: specify role to assume when retrieving the data
  roleArn: arn:aws:iam::123456789012:role/test-role
  data:
    - key: hello-service/password
      name: password
  # optional: specify a template with any additional markup you would like added to the downstream Secret resource.
  # This template will be deep merged without mutating any existing fields. For example: you cannot override metadata.name.
  template:
    metadata:
      annotations:
        cat: cheese
      labels:
        dog: farfel

or

apiVersion: "kubernetes-client.io/v1"
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: systemManager
  data:
    - key: /hello-service/password
      name: password

The following IAM policy allows a user or role to access parameters matching prod-*.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ssm:GetParameter",
      "Resource": "arn:aws:ssm:us-west-2:123456789012:parameter/prod-*"
    }
  ]
}

The IAM policy for Secrets Manager is similar (see docs):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetResourcePolicy",
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecretVersionIds"
      ],
      "Resource": [
        "arn:aws:secretsmanager:us-west-2:111122223333:secret:aes128-1a2b3c",
        "arn:aws:secretsmanager:us-west-2:111122223333:secret:aes192-4D5e6F",
        "arn:aws:secretsmanager:us-west-2:111122223333:secret:aes256-7g8H9i"
      ]
    }
  ]
}

Save the file and run:

kubectl apply -f hello-service-external-secret.yml

Wait a few minutes and verify that the associated Secret has been created:

kubectl get secret hello-service -o=yaml

The Secret created by the controller should look like:

apiVersion: v1
kind: Secret
metadata:
  name: hello-service
  annotations:
    cat: cheese
  labels:
    dog: farfel
type: Opaque
data:
  password: MTIzNA==
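
The values under data are base64-encoded, as in any Kubernetes Secret; you can verify the encoding locally:

```shell
# Kubernetes Secret data values are base64-encoded
echo -n "1234" | base64
# MTIzNA==
```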

Create secrets of types other than Opaque

You can override ExternalSecret type using template, for example:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-docker
spec:
  backendType: systemManager
  template:
    type: kubernetes.io/dockerconfigjson
  data:
    - key: /hello-service/hello-docker
      name: .dockerconfigjson

Templating

Kubernetes External Secrets supports templating in ExternalSecret using lodash.template.

The template is applied to all ExternalSecret.template sections of the manifest. Data retrieved from the secure backend is available via the data variable. Additionally, a yaml object (an instance of js-yaml) is available in lodash templates and can be leveraged for easier YAML content manipulation.

Templating can be used for:

  • Generating K8S Secret keys:
    • upserting plain text via ExternalSecret.template.stringData
    • upserting base64 encoded content via ExternalSecret.template.data
  • Creating dynamic labels, annotations and other fields available in the K8S Secret object.

To demonstrate templating functionality, let's assume the secure backend, e.g. HashiCorp Vault, contains the following data:

kv/extsec/secret1:
{
  "intKey": 11,
  "objKey": {
    "strKey": "hello world"
  }
}

kv/extsec/secret2:
{
  "arrKey": [1, 2, 3]
}

Then, one could create the following ExternalSecret

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: tmpl-ext-sec
spec:
  backendType: vault
  data:
    - key: kv/data/extsec/secret1
      name: s1
    - key: kv/data/extsec/secret2
      name: s2
  kvVersion: 2
  template:
    data:
      file.txt: |
        <%= Buffer.from(JSON.stringify(JSON.parse(data.s1).objKey)).toString("base64") %>
    metadata:
      labels:
        label1: <%= JSON.parse(data.s1).intKey %>
        label2: <%= JSON.parse(data.s1).objKey.strKey.replace(" ", "-") %>
    stringData:
      file.yaml: |
        <%= yaml.dump(JSON.parse(data.s1)) %>
        <% let s2 = JSON.parse(data.s2) %><% s2.arrKey.forEach((e, i) => { %>arr_<%= i %>: <%= e %>
        <% }) %>`
  vaultMountPoint: kubernetes
  vaultRole: demo

After applying this ExternalSecret to the K8S cluster, the operator will generate the following Secret:

apiVersion: v1
data:
  file.txt: eyJzdHJLZXkiOiJoZWxsbyB3b3JsZCJ9
  file.yaml: aW50S2V5OiAxMQpvYmpLZXk6CiAgc3RyS2V5OiBoZWxsbyB3b3JsZAoKYXJyXzA6IDEKYXJyXzE6IDIKYXJyXzI6IDMKYAo=
  s1: eyJpbnRLZXkiOjExLCJvYmpLZXkiOnsic3RyS2V5IjoiaGVsbG8gd29ybGQifX0=
  s2: eyJhcnJLZXkiOlsxLDIsM119
kind: Secret
metadata:
  name: tmpl-ext-sec
  labels:
    label1: "11"
    label2: hello-world
type: Opaque

The resulting Secret can be inspected to verify that its content was generated by the lodash templating engine:

$ kubectl get secret/tmpl-ext-sec -ogo-template='{{ index .data "s1" | base64decode }}'
{"intKey":11,"objKey":{"strKey":"hello world"}}

$ kubectl get secret/tmpl-ext-sec -ogo-template='{{ index .data "s2" | base64decode }}'
{"arrKey":[1,2,3]}

$ kubectl get secret/tmpl-ext-sec -ogo-template='{{ index .data "file.txt" | base64decode }}'
{"strKey":"hello world"}

$ kubectl get secret/tmpl-ext-sec -ogo-template='{{ index .data "file.yaml" | base64decode }}'
intKey: 11
objKey:
  strKey: hello world

arr_0: 1
arr_1: 2
arr_2: 3

$ kubectl get secret/tmpl-ext-sec -ogo-template='{{ .metadata.labels }}'
map[label1:11 label2:hello-world]

Scoping access

Using Namespace annotation

Enforcing naming conventions for backend keys can be done using namespace annotations. By default an ExternalSecret may access arbitrary keys from the backend, e.g.

data:
  - key: /dev/cluster1/core-namespace/hello-service/password
    name: password

An enforced naming convention helps to keep the structure tidy and limits the access according to your naming schema.

Configure the schema as a regular expression in the namespace using an annotation. The following allows ExternalSecrets in core-namespace to access only secrets whose keys start with /dev/cluster1/core-namespace/:

kind: Namespace
metadata:
  name: core-namespace
  annotations:
    # annotation key is configurable
    externalsecrets.kubernetes-client.io/permitted-key-name: "/dev/cluster1/core-namespace/.*"

Using ExternalSecret controller config

The controller config allows scoping the access of a kubernetes-external-secrets instance to a set of namespaces. This allows deploying multiple kubernetes-external-secrets instances in the same cluster, each watching a predefined set of namespaces.

To enable this option, set the following environment variable on the controller to a comma-separated list of namespaces:

env:
  WATCHED_NAMESPACES: "default,qa,dev"

Using ExternalSecret config

The ExternalSecret manifest allows selecting which kubernetes-external-secrets controller instance processes it. This allows deploying multiple kubernetes-external-secrets instances in the same cluster, each handling its own set of ExternalSecrets.

To enable this option, set the following environment variable on the controller:

env:
  INSTANCE_ID: "dev-team-instance"

And on the ExternalSecret side:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: foo
spec:
  controllerId: 'dev-team-instance'
[...]

Please note

Scoping access via the ExternalSecret config provides only a logical separation; it does not cover the security aspects, i.e. it assumes that the security side is managed by another component such as Kubernetes NetworkPolicies or Open Policy Agent.

Deprecations

A few properties have changed name over time; we still maintain backwards compatibility with these, but they will eventually be removed, and they are not validated by the CRD validation.

Old New
secretDescriptor spec
spec.type spec.template.type
spec.properties spec.data
backendType: secretManager backendType: secretsManager
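
As an illustration, a hypothetical legacy manifest using the old names would be rewritten with the new names like so (a sketch; the old names may not all have coexisted in one release):

```yaml
# Old (deprecated)
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
secretDescriptor:
  backendType: secretManager
  properties:
    - key: hello-service/password
      name: password
---
# New
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: secretsManager
  data:
    - key: hello-service/password
      name: password
```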

Backends

kubernetes-external-secrets supports AWS Secrets Manager, AWS Systems Manager Parameter Store, Akeyless, HashiCorp Vault, Azure Key Vault, Google Secret Manager, Alibaba Cloud KMS Secret Manager and IBM Cloud Secrets Manager.

AWS Secrets Manager

kubernetes-external-secrets supports both JSON objects ("Secret key/value" in the AWS console) and strings ("Plaintext" in the AWS console). Using JSON objects is useful when you need to atomically update multiple values, for example when rotating a client certificate and private key.

When writing an ExternalSecret for a JSON object you must specify the properties to use. For example, if we add our hello-service credentials as a single JSON object:

aws secretsmanager create-secret --region us-west-2 --name hello-service/credentials --secret-string '{"username":"admin","password":"1234"}'

We can declare which properties we want from hello-service/credentials:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: secretsManager
  # optional: specify role to assume when retrieving the data
  roleArn: arn:aws:iam::123456789012:role/test-role
  # optional: specify region
  region: us-east-1
  data:
    - key: hello-service/credentials
      name: password
      property: password
    - key: hello-service/credentials
      name: username
      property: username
    - key: hello-service/credentials
      name: password_previous
      # Version Stage in Secrets Manager
      versionStage: AWSPREVIOUS
      property: password
    - key: hello-service/credentials
      name: password_versioned
      # Version ID in Secrets Manager
      versionId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
      property: password

Alternatively, you can use dataFrom and get all the values from hello-service/credentials:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: secretsManager
  # optional: specify role to assume when retrieving the data
  roleArn: arn:aws:iam::123456789012:role/test-role
  # optional: specify region
  region: us-east-1
  dataFrom:
    - hello-service/credentials

dataFrom by default retrieves the latest (AWSCURRENT) version of the backend secret. If you want to get values in bulk for a specific version, you can use dataFromWithOptions:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: secretsManager
  # optional: specify role to assume when retrieving the data
  roleArn: arn:aws:iam::123456789012:role/test-role
  # optional: specify region
  region: us-east-1
  dataFromWithOptions:
    - key: hello-service/credentials
      versionStage: AWSPREVIOUS
    - key: hello-service/credentials
      versionId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

data, dataFrom and dataFromWithOptions can of course be combined; any naming conflicts are resolved in favor of the last definition. In the example below, data takes precedence over dataFromWithOptions and dataFrom.

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: secretsManager
  # optional: specify role to assume when retrieving the data
  roleArn: arn:aws:iam::123456789012:role/test-role
  # optional: specify region
  region: us-east-1
  dataFrom:
    - hello-service/credentials
  dataFromWithOptions:
    - key: hello-service/credentials
      versionId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
  data:
    - key: hello-service/migration-credentials
      name: password
      property: password

AWS SSM Parameter Store

You can scrape values from SSM Parameter Store individually or by providing a path to fetch all keys inside.

When fetching all keys by path, you can also recursively scrape all the sub paths (child paths) if you need to. The default is not to scrape child paths.

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: systemManager
  # optional: specify role to assume when retrieving the data
  roleArn: arn:aws:iam::123456789012:role/test-role
  # optional: specify region
  region: us-east-1
  data:
    - key: /foo/name
      name: fooName
    - path: /extra-people/
      recursive: false

data and dataFrom retrieve the latest version of the parameter by default. If you want to get values for a specific version, you can append the version number to the key:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: systemManager
  # optional: specify role to assume when retrieving the data
  roleArn: arn:aws:iam::123456789012:role/test-role
  # optional: specify region
  region: us-east-1
  dataFrom:
    - hello-service/credentials:3
  data:
    - key: /foo/name:5
      name: fooName

Akeyless Vault

kubernetes-external-secrets supports fetching secrets from Akeyless Vault. You will need to set the following environment variables:

env:
  # Akeyless REST v2 endpoint
  AKEYLESS_API_ENDPOINT: https://api.akeyless.io
  AKEYLESS_ACCESS_ID:
  # AKEYLESS_ACCESS_TYPE can be one of the following: aws_iam/azure_ad/gcp/access_key
  AKEYLESS_ACCESS_TYPE:
  # AKEYLESS_ACCESS_TYPE_PARAM can be one of the following: gcp-audience/azure-obj-id/access-key
  #AKEYLESS_ACCESS_TYPE_PARAM:

Once you have kubernetes-external-secrets installed, you can create an external secret with YAML like the following:

apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: hello-secret
spec:
  backendType: akeyless
  data:
    - key: path/secret-name
      name: password

Hashicorp Vault

kubernetes-external-secrets supports fetching secrets from Hashicorp Vault, using the Kubernetes authentication method.

env:
  VAULT_ADDR: https://vault.domain.tld
  DEFAULT_VAULT_MOUNT_POINT: "k8s-auth" # optional, default value to be used if not specified in the ExternalSecret
  DEFAULT_VAULT_ROLE: "k8s-auth-role" # optional, default value to be used if not specified in the ExternalSecret

You will need to set the VAULT_ADDR environment variable so that kubernetes-external-secrets knows which endpoint to connect to, then create ExternalSecret definitions as follows:

apiVersion: "kubernetes-client.io/v1"
kind: ExternalSecret
metadata:
  name: hello-vault-service
spec:
  backendType: vault
  # Your authentication mount point, e.g. "kubernetes"
  # Overrides cluster DEFAULT_VAULT_MOUNT_POINT
  vaultMountPoint: my-kubernetes-vault-mount-point
  # The vault role that will be used to fetch the secrets
  # This role will need to be bound to kubernetes-external-secret's ServiceAccount; see Vault's documentation:
  # https://www.vaultproject.io/docs/auth/kubernetes.html
  # Overrides cluster DEFAULT_VAULT_ROLE
  vaultRole: my-vault-role
  data:
    - name: password
      # The full path of the secret to read, as in `vault read secret/data/hello-service/credentials`
      key: secret/data/hello-service/credentials
      property: password
    # Vault values are matched individually. If you have several keys in your Vault secret, you will need to add them all separately
    - name: api-key
      key: secret/data/hello-service/credentials
      property: api-key

If you use Vault Namespaces (a Vault Enterprise feature) you can set the namespace to interact with via the VAULT_NAMESPACE environment variable.

The Vault token obtained by Kubernetes authentication will be renewed as needed. By default the token will be renewed three poller intervals (POLLER_INTERVAL_MILLISECONDS) before the token TTL expires. The default should be acceptable in most cases but the token renew threshold can also be customized by setting the VAULT_TOKEN_RENEW_THRESHOLD environment variable. The token renew threshold value is specified in seconds and tokens with remaining TTL less than this number of seconds will be renewed. In order to minimize token renewal load on the Vault server it is suggested that Kubernetes auth tokens issued by Vault have a TTL of at least ten times the poller interval so that they are renewed less frequently. A longer token TTL results in a lower token renewal load on Vault.
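
As a concrete illustration of the renewal arithmetic (using 10000 ms as an example poller interval, not a documented default):

```shell
# Default: renew when remaining TTL < 3 poller intervals
POLLER_INTERVAL_MILLISECONDS=10000   # example value
THRESHOLD_S=$(( 3 * POLLER_INTERVAL_MILLISECONDS / 1000 ))
echo "renew threshold: ${THRESHOLD_S}s"   # renew threshold: 30s
# Suggested minimum token TTL: 10x the poller interval
echo "suggested min TTL: $(( 10 * POLLER_INTERVAL_MILLISECONDS / 1000 ))s"   # suggested min TTL: 100s
```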

If Vault uses a certificate issued by a self-signed CA you will need to provide that certificate:

# Create secret with CA
kubectl create secret generic vault-ca --from-file=./ca.pem
# values.yaml
env:
  VAULT_ADDR: https://vault.domain.tld
  NODE_EXTRA_CA_CERTS: "/usr/local/share/ca-certificates/ca.pem"

filesFromSecret:
  certificate-authority:
    secret: vault-ca
    mountPath: /usr/local/share/ca-certificates

Azure Key Vault

kubernetes-external-secrets supports fetching secrets from Azure Key Vault.

You will need to set these env vars in the deployment of kubernetes-external-secrets:

  • AZURE_TENANT_ID
  • AZURE_CLIENT_ID
  • AZURE_CLIENT_SECRET

The configured service principal will require get and list access policies on the target Key Vault (keyVaultName).

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-keyvault-service
spec:
  backendType: azureKeyVault
  keyVaultName: hello-world
  data:
    - key: hello-service/credentials
      name: password

Alibaba Cloud KMS Secret Manager

kubernetes-external-secrets supports fetching secrets from Alibaba Cloud KMS Secret Manager.

Create a secret using the aliyun-cli command below:

# you need to configure aliyun-cli with a valid RAM user and proper permission
aliyun kms CreateSecret --SecretName my_secret --SecretData P@ssw0rd --VersionId 001

You will need to set these env vars in the deployment of kubernetes-external-secrets:

  • ALICLOUD_ACCESS_KEY_ID
  • ALICLOUD_ACCESS_KEY_SECRET
  • ALICLOUD_ENDPOINT

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: alicloudSecretsManager
  # optional: specify role to assume using provided access key ID and access key secret when retrieving the data
  roleArn: acs:ram::{UID}:role/demo
  data:
    - key: hello-credentials1
      name: password
    - key: hello-credentials2
      name: username
      # Version Stage in Alibaba Cloud KMS Secrets Manager. Optional, default value is ACSCurrent
      versionStage: ACSCurrent

GCP Secret Manager

kubernetes-external-secrets supports fetching secrets from GCP Secret Manager.

The external secret will poll for changes to the secret according to the value set for POLLER_INTERVAL_MILLISECONDS in env. Depending on the time interval this is set to you may incur additional charges as Google Secret Manager charges per a set number of API calls.
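
Back-of-the-envelope arithmetic for the polling cost (using 10000 ms as an example value, not a documented default):

```shell
# With POLLER_INTERVAL_MILLISECONDS=10000 (an example value),
# each secret is polled roughly this many times per day:
POLLER_INTERVAL_MILLISECONDS=10000
echo $(( 24 * 60 * 60 * 1000 / POLLER_INTERVAL_MILLISECONDS ))   # 8640
```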

A service account is required to grant the controller access to pull secrets.

Add a secret

Add your secret data to your backend using the Google Cloud SDK:

echo -n '{"value": "my-secret-value"}' | gcloud secrets create my-gsm-secret-name --replication-policy="automatic" --data-file=-

If the secret needs to be updated:

echo -n '{"value": "my-secret-value-with-update"}' | gcloud secrets versions add my-gsm-secret-name --data-file=-

Deploy kubernetes-external-secrets using Workload Identity

Instructions are here: Enable Workload Identity. To enable workload identity on an existing cluster (which is not covered in that document), first enable it on the cluster like so:

gcloud container clusters update $CLUSTER_NAME --workload-pool=$PROJECT_NAME.svc.id.goog

Next enable workload metadata config on the node pool in which the pod will run:

gcloud beta container node-pools update $POOL --cluster $CLUSTER_NAME --workload-metadata-from-node=GKE_METADATA_SERVER

If enabling it only for a particular pool, make sure to add any relevant tolerations or affinities:

tolerations:
  - key: "name"
    operator: "Equal"
    effect: "NoExecute"
    value: "node-pool-taint"
  - key: "name"
    operator: "Equal"
    effect: "NoSchedule"
    value: "node-pool-taint"

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloud.google.com/gke-nodepool
              operator: In
              values:
                - node-pool

You can add an annotation which is needed for workload identity by passing it in via Helm:

serviceAccount:
  annotations:
    iam.gke.io/gcp-service-account: my-secrets-sa@$PROJECT.iam.gserviceaccount.com

Create the policy binding:

gcloud iam service-accounts add-iam-policy-binding --role roles/iam.workloadIdentityUser --member "serviceAccount:$CLUSTER_PROJECT.svc.id.goog[$SECRETS_NAMESPACE/kubernetes-external-secrets]" my-secrets-sa@$PROJECT.iam.gserviceaccount.com

Grant GCP service account access to secrets:

gcloud projects add-iam-policy-binding $PROJECT_ID --member=serviceAccount:my-secrets-sa@$PROJECT.iam.gserviceaccount.com --role=roles/secretmanager.secretAccessor

Deploy kubernetes-external-secrets using a service account key

Alternatively you can create and mount a kubernetes secret containing google service account credentials and set the GOOGLE_APPLICATION_CREDENTIALS env variable.

Create a Kubernetes secret called gcp-creds with a JSON keyfile from a service account with necessary credentials to access the secrets:

apiVersion: v1
kind: Secret
metadata:
  name: gcp-creds
type: Opaque
stringData:
  gcp-creds.json: |-
    $KEYFILE_CONTENT

Uncomment GOOGLE_APPLICATION_CREDENTIALS in the values file as well as the following section:

env:
  AWS_REGION: us-west-2
  POLLER_INTERVAL_MILLISECONDS: 10000  # Caution, setting this frequency may incur additional charges on some platforms
  LOG_LEVEL: info
  METRICS_PORT: 3001
  VAULT_ADDR: http://127.0.0.1:8200
  GOOGLE_APPLICATION_CREDENTIALS: /app/gcp-creds/gcp-creds.json

filesFromSecret:
  gcp-creds:
    secret: gcp-creds
    mountPath: /app/gcp-creds

This will mount the secret at /app/gcp-creds/gcp-creds.json and make it available via the GOOGLE_APPLICATION_CREDENTIALS environment variable.

Usage

Once you have kubernetes-external-secrets installed, you can create an external secret with YAML like the following:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: gcp-secrets-manager-example # name of the k8s external secret and the k8s secret
spec:
  backendType: gcpSecretsManager
  projectId: my-gsm-secret-project
  data:
    - key: my-gsm-secret-name # name of the GCP secret
      name: my-kubernetes-secret-name # key name in the k8s secret
      version: latest # version of the GCP secret
      property: value # name of the field in the GCP secret

The field "key" is the name of the secret in Google Secret Manager. The field "name" is the key under which the value is stored in the generated Kubernetes Secret. The metadata "name" field is the name of the ExternalSecret, and thus also of the Secret it generates.

To retrieve external secrets, you can use the following command:

kubectl get externalsecrets -n $NAMESPACE

To retrieve the secrets themselves, you can use the usual:

kubectl get secrets -n $NAMESPACE

To retrieve an individual secret's content, use the following where "mysecret" is the key to the secret content under the "data" field:

kubectl get secret my-secret -o 'go-template={{index .data "mysecret"}}' | base64 -D
(on Linux, use base64 -d instead of base64 -D)

The secrets will persist even if the helm installation is removed, although they will no longer sync to Google Secret Manager.

IBM Cloud Secrets Manager

kubernetes-external-secrets supports fetching secrets from IBM Cloud Secrets Manager.

Create a username_password secret by using the UI, CLI or API. The CLI option is illustrated below:

# You need to configure ibm cloud cli with a valid endpoint.
# If you're using plug-in version 0.0.8 or later, export the following variable.
export SECRETS_MANAGER_URL=https://{instanceid}.{region}.secrets-manager.appdomain.cloud

# If you're using plug-in version 0.0.6 or earlier, export the following variable.
export IBM_CLOUD_SECRETS_MANAGER_API_URL=https://{instance_ID}.{region}.secrets-manager.appdomain.cloud

ibmcloud secrets-manager secret-create --secret-type username_password \
  --metadata '{"collection_type": "application/vnd.ibm.secrets-manager.secret+json", "collection_total": 1}' \
  --resources '[{"name": "example-username-password-secret","description": "Extended description for my secret.","username": "user123","password": "cloudy-rainy-coffee-book"}]'

You will need to set these env vars in the deployment of kubernetes-external-secrets:

  • IBM_CLOUD_SECRETS_MANAGER_API_APIKEY
  • IBM_CLOUD_SECRETS_MANAGER_API_ENDPOINT
  • IBM_CLOUD_SECRETS_MANAGER_API_AUTH_TYPE

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: ibmcloud-secrets-manager-example
spec:
  backendType: ibmcloudSecretsManager
  data:
    # The guid id of the secret
    - key: <guid>
      name: username
      property: username
      secretType: username_password

Alternatively, you can use keyByName on the spec to interpret keys as secret names instead of IDs. Using names is slightly less efficient than using IDs, but it makes your ExternalSecrets more robust, as they are not tied to a particular instance of a secret in a particular instance of Secrets Manager:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: ibmcloud-secrets-manager-example
spec:
  backendType: ibmcloudSecretsManager
  keyByName: true
  data:
    # The name of the secret
    - key: my-creds
      name: username
      property: username
      secretType: username_password

Binary Secrets

Most backends do not treat binary secrets any differently than text secrets. Since you typically store a binary secret as a base64-encoded string in the backend, you need to explicitly let the ExternalSecret know that the secret is binary, otherwise it will be encoded in base64 again. You can do that with the isBinary field on the key. This is necessary for certificates and other secret binary files.
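
To see why isBinary matters, consider what happens when an already base64-encoded value is encoded again; a plain shell sketch (the payload is made up):

```shell
# A binary payload is usually stored base64-encoded in the backend:
stored=$(printf 'some-binary-bytes' | base64)
echo "$stored"                      # c29tZS1iaW5hcnktYnl0ZXM=
# Without isBinary, the controller base64-encodes the value again, so
# consumers of the Secret would have to decode twice:
echo -n "$stored" | base64
# With isBinary: true, the stored value is placed into .data as-is.
```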

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: hello-service
spec:
  backendType: anySupportedBackend
  # ...
  data:
    - key: hello-service/archives/secrets_zip
      name: secrets.zip
      isBinary: true # Default: false
    # also works with `property`
    - key: hello-service/certificates
      name: cert.p12
      property: cert.p12
      isBinary: true

AWS Secrets Manager is a notable exception to this. If you create or update a secret using the SecretBinary parameter of the API, the AWS API will return the secret data as SecretBinary in the response and the ExternalSecret will handle it accordingly. In that case, you do not need to use the isBinary field.

Note that the SecretBinary parameter is not available when using the AWS Secrets Manager console. For any binary secrets (represented by base64-encoded strings) created or updated via the AWS console, or stored in key-value pairs instead of text strings, use the isBinary field explicitly as above.

Metrics

kubernetes-external-secrets exposes the following metrics over a prometheus endpoint:

  • kubernetes_external_secrets_sync_calls_count (Counter): Number of sync operations by backend, secret name and status. Example: kubernetes_external_secrets_sync_calls_count{name="foo",namespace="example",backend="foo",status="success"} 1
  • kubernetes_external_secrets_last_sync_call_state (Gauge): State of the last sync call of an external secret, where -1 means the last sync call was an error and 1 means it was a success. Example: kubernetes_external_secrets_last_sync_call_state{name="foo",namespace="example",backend="foo"} 1

Development

Minikube is a tool that makes it easy to run a Kubernetes cluster locally.

Start minikube and the daemon. This creates the CustomResourceDefinition and starts processing ExternalSecrets:

minikube start

npm run nodemon

Development with localstack

Localstack mocks AWS services locally so you can test without connecting to AWS.

Run localstack in a separate terminal window

npm run localstack

Start minikube as above

minikube start

Run the daemon with localstack

npm run local

Add secrets using the AWS CLI (example)

AWS_ACCESS_KEY_ID=foobar AWS_SECRET_ACCESS_KEY=foobar aws --region=us-west-2 --endpoint-url=http://localhost:4584 secretsmanager create-secret --name hello-service/password --secret-string "1234"

Related projects

khcheck-external-secrets

khcheck-external-secrets is a Kuberhealthy check that monitors whether the external secrets operator is functioning.


kubernetes-external-secrets's Issues

Option to run controller outside cluster

@kelseyhightower suggests a way to improve multi-cluster deployments (see attached quote) by running the controller in a control plane outside the cluster. This has resource-consumption benefits and potential security benefits (e.g., the Kubernetes cluster doesn't need a role that can access the secrets manager, while a locked-down external control plane does).

You have a few options on how to leverage External Secrets in this context. The obvious approach is to deploy the "control plane" across every cluster. There are pros and cons. While you have the ability to do a canary roll out of the External Secrets control plane across multiple clusters, you also take up compute resource across every cluster, and must store the credentials required for the External Secrets control plane to sync secrets from an external store.

The other option would be to enable the External Secrets control plane to live outside of any individual Kubernetes cluster and work as a "global control plane". In this configuration the External Secrets control plane can push secrets across multiple clusters. Users can still define ExternalSecrets objects in each cluster, but you now have the ability to support "global" ExternalSecrets objects which are replicated to every cluster, or a limited set of clusters and namespaces based on configuration.

You can still host the External Secrets control plane using Kubernetes, but in an "admin" cluster, which is separate from the clusters where normal applications are deployed.

Option to "restart" Pods when ExternalSecrets are updated

It is useful to reload the process running inside a container when an ExternalSecret is updated (e.g., username/password rotation). An API to facilitate this feature has been discussed at length in other places (e.g., kubernetes/kubernetes#24957). In the meantime, there are some hackish approaches that might work well enough:

  • Add an option to ExternalSecrets to declare that Pods should be deleted after an ExternalSecret update (not safe for all types of Pod usage)
  • Add an option to exec a CLI tool in all containers that depend on the ExternalSecret/Secret (exposing exec to the controller seems like a step backwards for security).
  • ...

The first step in making progress on this issue is probably to think through the options above (and maybe others) and augment this issue with a proposal for how to proceed.

Trigger from CloudWatch Events -> SNS topic or Lambda rather than polling

Polling every external secret seems like a lot of unnecessary transactions.

SSM will tell you when a secret is updated via a CloudWatch Event. That event can either:

  1. Pass the event to an SNS topic, which kubernetes-external-secrets is subscribed to
  2. Pass the event to a Lambda function that could make a webhook call to a kubernetes-external-secrets event endpoint.

kubernetes-external-secrets would still poll on start-up, and maybe once every hour or two, just in case.

feature request: create secret with every entry in secret manager key

Would it be possible to add an extra section under External Secrets that imports all key/values under the stored AWS key and adds them to a Kubernetes Secret?

I would like to avoid explicitly listing every key and property each time I update or add a new key/value to an AWS secret.
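A minimal sketch of what the requested behavior could look like (hypothetical helper, not part of the project; `fetchedJson` stands in for the raw string returned by the backend):

```javascript
// Hedged sketch: expand a whole JSON secret from the backend into Secret data
// entries instead of listing each property explicitly in the ExternalSecret.
function expandAllProperties(fetchedJson) {
  const parsed = JSON.parse(fetchedJson);
  const data = {};
  for (const [key, value] of Object.entries(parsed)) {
    // Secret data values are base64-encoded strings
    data[key] = Buffer.from(String(value)).toString('base64');
  }
  return data;
}

console.log(expandAllProperties('{"username":"foo","password":"bar"}'));
```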

.kube/config missing after kubectl applying external-secrets.yml

After applying the example deployment file external-secrets.yml with kubectl, the pod keeps complaining that it cannot find the kubeconfig file (see log below).

I've seen that in commit #70 this file (which I'm still using) was removed. Is this way of deploying no longer supported? Is there something else I could check?
The proposed Helm chart is basically just a templated version of this deleted file, so I expect the same type of error.

npm info it worked if it ends with ok
npm info using [email protected]
npm info using [email protected]
npm info lifecycle [email protected]~prestart: [email protected]
npm info lifecycle [email protected]~start: [email protected]
 > [email protected] start /app
> ./bin/daemon.js
 fs.js:115
    throw err;
    ^
 Error: ENOENT: no such file or directory, open '/home/node/.kube/config'
    at Object.openSync (fs.js:439:3)
    at Object.readFileSync (fs.js:344:35)
    at cfgPaths.map.cfgPath (/app/node_modules/kubernetes-client/lib/config.js:221:37)
    at Array.map (<anonymous>)
    at loadKubeconfig (/app/node_modules/kubernetes-client/lib/config.js:220:28)
    at Object.fromKubeconfig (/app/node_modules/kubernetes-client/lib/config.js:62:33)
    at Object.<anonymous> (/app/config/index.js:17:34)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
npm info lifecycle [email protected]~start: Failed to exec start script
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: `./bin/daemon.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm timing npm Completed in 523ms
 npm ERR! A complete log of this run can be found in:
npm ERR!     /home/node/.npm/_logs/2019-05-27T07_56_42_702Z-debug.log

edit: for completeness' sake, I converted the deleted file to Terraform code using the kubernetes provider (see below), but still get the same error.

resource "kubernetes_cluster_role_binding" "ext-secret-cluster-role-binding" {
  metadata {
    name = "kubernetes-external-secrets-cluster-role-binding"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind = "ClusterRole"
    name = "kubernetes-external-secrets-cluster-role"
  }
  subject {
    kind = "ServiceAccount"
    name = "kubernetes-external-secrets-service-account"
    namespace = "kubernetes-external-secrets"
  }
}

resource "kubernetes_cluster_role" "ext-secret-cluster-role" {
  metadata {
    name = "kubernetes-external-secrets-cluster-role"
  }

  rule {
    api_groups = [""]
    resources = ["secrets"]
    verbs = ["create", "update"]
  }
  rule {
    api_groups = ["apiextensions.k8s.io"]
    resources = ["customresourcedefinitions"]
    verbs = ["create"]
  }
  rule {
    api_groups = ["apiextensions.k8s.io"]
    resources = ["customresourcedefinitions"]
    resource_names = ["externalsecrets.kubernetes-client.io"]
    verbs = ["get", "update"]
  }
  rule {
    api_groups = ["kubernetes-client.io"]
    resources = ["externalsecrets"]
    verbs = ["get", "watch", "list"]
  }
}

resource "kubernetes_namespace" "ext-secret-namespace" {
  metadata {
    name = "kubernetes-external-secrets"
  }
}

resource "kubernetes_service_account" "ext-secret-service-account" {
  metadata {
    name = "kubernetes-external-secrets-service-account"
    namespace = "kubernetes-external-secrets"
  }
}

resource "kubernetes_deployment" "ext-secret-deployment" {
  "metadata" {
    labels {
      name = "kubernetes-external-secrets"
    }
    name = "kubernetes-external-secrets"
    namespace = "kubernetes-external-secrets"
  }
  "spec" {
    replicas = "1"
    selector {
      match_labels {
        name = "kubernetes-external-secrets"
      }
    }
    "template" {
      "metadata" {
        labels {
          name = "kubernetes-external-secrets"
          service = "kubernetes-external-secrets"
        }
      }
      "spec" {
        service_account_name = "kubernetes-external-secrets-service-account"
        container {
          name = "kubernetes-external-secrets"
          image_pull_policy = "Always"
          image = "godaddy/kubernetes-external-secrets:1.2.0"
        }
      }
    }
  }
}

Restrict controller network access

One potential security improvement is to restrict ingress and egress from the controller using a NetworkPolicy object. The controller needs access only to the secret backends it has been configured with and the local Kubernetes API server.

A first version of adding support for this would be to include a NetworkPolicy manifest in this repo designed for AWS Secrets Manager.

Allow option to restart if do not have permission to access secrets

If the kiam service is not running, external-secrets will start without permission to access secrets.

At present, external-secrets just logs a message.

If kiam then starts, the external-secrets pods retain the old, incorrect permissions.

To let the system self-correct, please add an option that restarts the external-secrets service if it does not have permission to access secrets.

AccessKey and SecretKey location

Does anybody know where to put access_key and secret_access_key? I am getting the error below:

{"level":50,"time":1561451711429,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"failure while polling the secrets","v":1}
{"level":50,"time":1561451711429,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","type":"Error","stack":"CredentialsError: Missing credentials in config\n at IncomingMessage.<anonymous> (/app/node_modules/aws-sdk/lib/util.js:865:34)\n at IncomingMessage.emit (events.js:194:15)\n at IncomingMessage.EventEmitter.emit (domain.js:441:20)\n at endReadableNT (_stream_readable.js:1103:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)","message":"Missing credentials in config","retryable":false,"time":"2019-06-25T08:35:11.429Z","code":"CredentialsError","originalError":{"message":"Could not load credentials from any providers","retryable":false,"time":"2019-06-25T08:35:11.429Z","code":"CredentialsError"},"msg":"Missing credentials in config","v":1}
{"level":30,"time":1561451721412,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"stopping and removing poller aws-s3-test-inside-k8s_242636","v":1}
{"level":30,"time":1561451721412,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"stopping poller","v":1}
{"level":30,"time":1561451721412,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"spinning up poller {\"id\":\"aws-s3-test-inside-k8s_242636\",\"namespace\":\"default\",\"secretDescriptors\":[{\"backendType\":\"secretsManager\",\"data\":[{\"key\":\"aws-s3-test-inside-k8s\",\"name\":\"aws-s3-test-inside-k8s\"}],\"name\":\"aws-s3-test-inside-k8s\"}],\"ownerReference\":{\"apiVersion\":\"kubernetes-client.io/v1\",\"controller\":true,\"kind\":\"ExternalSecret\",\"name\":\"aws-s3-test-inside-k8s\",\"uid\":\"cef0de83-9702-11e9-a5e5-42010a8000aa\"}}","v":1}
{"level":30,"time":1561451721412,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"starting poller","v":1}
{"level":30,"time":1561451731421,"pid":17,"hostname":"kubernetes-external-secrets-f99689c5d-kml69","msg":"running poll","v":1}
{"level":30,"

Support to have IAM user Assume a Role when retrieving secret data

External Secrets could better serve multi-account setups if it allowed the specified IAM user to retrieve secrets not only with its own credentials but also by assuming an IAM role.

The following example goes so far as to allow you to specify a role for each value retrieval. In practice, all retrievals within a single secret will probably come from the same AWS account, but this allows maximum flexibility in case you need a secret composed of values stored across multiple accounts. The username/password example would never be split like that, but I kept the basic example. Even just having a roleArn field that applies to the entire ExternalSecret would be fine. The IAM user would just need permission to assume these roles and you're good.

apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: hello-service
secretDescriptor:
  backendType: secretsManager
  data:
    - key: hello-service/credentials
      name: password
      property: password
      roleArn: arn:aws:iam::123456789012:role/somerole
    - key: hello-service/credentials
      name: username
      property: username
      roleArn: arn:aws:iam::210987654321:role/somerole

The default behavior, should roleArn not be present, should be to just use the IAM creds directly without assuming a role.

OR

apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: hello-service
secretDescriptor:
  backendType: secretsManager
  roleArn: arn:aws:iam::123456789012:role/somerole
  data:
    - key: hello-service/credentials
      name: password
      property: password
    - key: hello-service/credentials
      name: username
      property: username
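The proposed precedence between the two shapes can be sketched as follows (hypothetical helper, not existing code):

```javascript
// Hedged sketch: a per-item roleArn overrides a descriptor-level roleArn; with
// neither present, the controller's own IAM credentials are used directly
// (represented as null here, i.e. no role assumption).
function resolveRoleArn(secretDescriptor, dataItem) {
  return dataItem.roleArn || secretDescriptor.roleArn || null;
}

const descriptor = {
  backendType: 'secretsManager',
  roleArn: 'arn:aws:iam::123456789012:role/somerole',
  data: [
    { key: 'hello-service/credentials', name: 'password', property: 'password' },
    { key: 'hello-service/credentials', name: 'username', property: 'username',
      roleArn: 'arn:aws:iam::210987654321:role/somerole' },
  ],
};

console.log(descriptor.data.map(item => resolveRoleArn(descriptor, item)));
```

The controller would then call STS AssumeRole with the resolved ARN before fetching each value; the details of that call are omitted here.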

Avoid putting secrets to ETCD

The current implementation creates an etcd object containing the base64-encoded secret, which may potentially leak later.

Allow putting secrets in a volume instead, e.g. add a mutating webhook that injects an initContainer, which fetches and writes secrets to a volume shared with the container.

Pros:

  • no secrets in etcd
  • iam role can be granularly assigned to each initContainer (based on namespace annotation for example) when using https://github.com/uswitch/kiam , no need to give access to all secrets to the controller

Log message is incomplete

"msg":"fetching secret property KissmetricsToken"

does not give me enough information to debug issues. It would be helpful to see the full key name in AWS Secrets Manager.

Feature request: ability to generate resulting secrets in a file format

In the example provided in your readme, the resulting secret generated by an external secret looks like so:

apiVersion: v1
kind: Secret
metadata:
  name: hello-service
type: Opaque
data:
  password: MTIzNA==

However, if you try to mount this secret object into a pod, what will happen is that the secret will get mounted as a directory, containing a file called password. If there were 2 keys that you pulled from ASM - e.g., username and password - then the directory would contain 2 files, username, and password.

This isn't really ideal from a usability standpoint. It would be preferable to generate a single file, say config.txt, that contains the 2 keys and values inside of it. E.g.:

username=foo
password=bar

This can be accomplished with Kubernetes secrets by using the "stringData" functionality as described here, e.g.:

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: {{username}}
    password: {{password}}

This would make for a better use case, as it would then be possible to just mount a single file (config.yaml in this case) into a container.

For bonus points, it would additionally be nice to be able to specify the format of the resulting file - e.g., txt, json, etc.
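A sketch of what such a renderer might look like (hypothetical helper and formats, not an existing feature):

```javascript
// Hedged sketch: render fetched key/values into one file entry so the mounted
// Secret yields a single config file rather than one file per key.
function renderAsFile(fetched, format = 'env') {
  if (format === 'json') return JSON.stringify(fetched);
  // default: env-style `key=value` lines
  return Object.entries(fetched).map(([key, value]) => `${key}=${value}`).join('\n');
}

const fetched = { username: 'foo', password: 'bar' };
console.log(renderAsFile(fetched));          // env-style content for e.g. config.txt
console.log(renderAsFile(fetched, 'json'));  // JSON variant
```

The result would be placed under a single stringData key (e.g. config.txt), giving Pods one mountable file.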

Volume front end API

kubernetes-external-secrets currently has a single "front end" that writes backend data to Secret objects (and Pods consume the Secret objects in the usual ways). The goal of this issue is to define the API for a new front end that writes data to a Volume. This is part of the effort to avoid putting Secrets in etcd.

The Volume front end API must allow engineers to declare how kubernetes-external-secrets writes external secret data to a Volume. One approach is to use annotations on the Pod to identify which volumes in .spec.volumes represent an ExternalSecret:

kind: Pod
metadata:
  annotations:
    externalsecrets.kubernetes-client.io/volume/db-secrets: 'true'
    externalsecrets.kubernetes-client.io/volume/client-secrets: 'true'
spec:
  containers:
  - name: test
    image: busybox
    volumeMounts:
      - name: db-secrets
        mountPath: /db-secrets
      - name: client-secrets
        mountPath: /client-secrets
      - name: other-stuff
        mountPath: /stuff
  volumes:
  - name: db-secrets
    emptyDir:
      medium: "Memory"
  - name: client-secrets
    emptyDir:
      medium: "Memory"
  - name: other-stuff
    configMap:
      name: stuff-config

We should discuss the pros and cons of an approach like this and discuss other potential approaches.
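The annotation scheme above could be interpreted roughly like this (hypothetical helper, not an existing API):

```javascript
// Hedged sketch: pick out the pod volumes flagged as external-secret volumes
// via the proposed `externalsecrets.kubernetes-client.io/volume/<name>`
// annotations.
function externalSecretVolumes(pod) {
  const prefix = 'externalsecrets.kubernetes-client.io/volume/';
  const annotations = (pod.metadata && pod.metadata.annotations) || {};
  const flagged = new Set(
    Object.keys(annotations)
      .filter(key => key.startsWith(prefix) && annotations[key] === 'true')
      .map(key => key.slice(prefix.length))
  );
  return (pod.spec.volumes || [])
    .filter(volume => flagged.has(volume.name))
    .map(volume => volume.name);
}

// Minimal stand-in for the Pod manifest above
const pod = {
  metadata: {
    annotations: {
      'externalsecrets.kubernetes-client.io/volume/db-secrets': 'true',
      'externalsecrets.kubernetes-client.io/volume/client-secrets': 'true',
    },
  },
  spec: {
    volumes: [
      { name: 'db-secrets', emptyDir: { medium: 'Memory' } },
      { name: 'client-secrets', emptyDir: { medium: 'Memory' } },
      { name: 'other-stuff', configMap: { name: 'stuff-config' } },
    ],
  },
};

console.log(externalSecretVolumes(pod));
```

A controller or webhook would use such a lookup to decide which emptyDir volumes to populate with secret data.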

Useless error message: cannot load secret marked for deletion

You don't say WHICH secret! :)

{"level":50,"time":1562830012963,"pid":17,"hostname":"dev-external-secrets-kubernetes-external-secrets-794fd9688z2x2n","type":"Error","stack":"InvalidRequestException: You can’t perform this operation on the secret because it was marked for deletion.\n    at Request.extractError (/app/node_modules/aws-sdk/lib/protocol/json.js:51:27)\n    at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:106:20)\n    at Request.emit (/app/node_modules/aws-sdk/lib/sequential_executor.js:78:10)\n    at Request.emit (/app/node_modules/aws-sdk/lib/request.js:683:14)\n    at Request.transition (/app/node_modules/aws-sdk/lib/request.js:22:10)\n    at AcceptorStateMachine.runTo (/app/node_modules/aws-sdk/lib/state_machine.js:14:12)\n    at /app/node_modules/aws-sdk/lib/state_machine.js:26:10\n    at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:38:9)\n    at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:685:12)\n    at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:116:18)","message":"You can’t perform this operation on the secret because it was marked for deletion.","code":"InvalidRequestException","time":"2019-07-11T07:26:52.963Z","requestId":"cf010d22-7f47-435f-9acb-19e0044a384c","statusCode":400,"retryable":false,"retryDelay":31.423678272763624,"msg":"You can’t perform this operation on the secret because it was marked for deletion.","v":1}
{"level":30,"time":15

help: force secrets upserting eventhough POLLER_INTERVAL_MILLISECONDS is set to higher interval

Hi guys, we have noticed that if we set POLLER_INTERVAL_MILLISECONDS=86400000 (24 hours), external secrets are not pulled from AWS Secrets Manager and upserted into Secrets immediately after deployment.

Use case: our secrets are DB connection strings and passwords that are not expected to change very often, so we wanted a poller interval of once a day. We haven't found a way to force the externalsecrets deployment to poll and upsert secrets immediately after a helm upgrade; it waits 24 hours even for the first poll. Please let us know if you need more details. Thanks.

Feature Request: Secret versioning and auto-update

One issue with the current approach is that it is impossible to know what version of a secret is currently deployed to a cluster nor whether the current secret is the one stored in the backing store.

I propose adding an opaque versionID field to the ExternalSecrets CRD (like the ContainerSolutions controller) AND (most importantly) a watcher that updates Git (i.e., GitOps) when the backend detects a new version of the secret.

In this way, the secret is kept up to date AND one can tell which version of the secret is stored in any cluster that is watching that GitHub repo.

What is EVENTS_INTERVAL_MILLISECONDS?

The configuration documentation in the README and Helm chart refers to an EVENTS_INTERVAL_MILLISECONDS configuration, but nothing in the code seems to use it?

no_proxy environment variable is not getting honored

Hi, I am mounting environment variables as follows:


spec:
      serviceAccountName: kubernetes-external-secrets-service-account
      containers:
        - image: "godaddy/kubernetes-external-secrets:1.2.0"
          imagePullPolicy: Always
          name: kubernetes-external-secrets
          envFrom:
            - configMapRef:
                name: proxy-environment-variables

My proxy-environment-variables ConfigMap is as follows:


apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kubernetes-external-secrets
data:
  HTTPS_PROXY: my_redacted_proxy
  HTTP_PROXY: my_redacted_proxy
  http_proxy: my_redacted_proxy
  https_proxy: my_redacted_proxy
  NO_PROXY: my_redacted_no_proxy urls
  no_proxy: my_redacted_no_proxy urls

One of the no_proxy CIDRs is 10.100.0.0/16.

When I deploy, the pod exits with the following message:

{
    "log": "Error: Failed to get /openapi/v2 and /swagger.json: tunneling socket could not be established, statusCode=503
",
    "stream": "stderr",
    "docker": {
        "container_id": "052ffccfdf5a55a7cc22f58e5f9f01cc5e028804667962810cfc50c6573f9acc"
    },
    "kubernetes": {
        "container_name": "kubernetes-external-secrets",
        "namespace_name": "kubernetes-external-secrets",
        "pod_name": "kubernetes-external-secrets-775d45f74d-bjr55",
        "container_image": "godaddy/kubernetes-external-secrets:1.2.0",
        "container_image_id": "docker-pullable://godaddy/kubernetes-external-secrets@sha256:6439feeef5602ed0bd3fd635f0e403c1274b3198c9a59526c84a3437152ae245",
        "pod_id": "addbacb1-71b7-11e9-b8cb-02ef4f9bf35c",
        "labels": {
            "name": "kubernetes-external-secrets",
            "pod-template-hash": "775d45f74d",
            "service": "kubernetes-external-secrets"
        },
        "host": "ip-172-25-130-225.us-east-2.compute.internal",
        "master_url": "https://10.100.0.1:443/api",
        "namespace_id": "ad7f4034-71b7-11e9-b8cb-02ef4f9bf35c"
    }
}

I tailed the proxy logs and found that these calls are going over the proxy:

1557340191.194 59985 172.25.130.225 TAG_NONE/503 0 CONNECT 10.100.0.1:443 - HIER_NONE/- -

Admission webhook server

Create an admission webhook server for kubernetes-external-secrets. This is part of the effort to avoid putting Secrets in ETCD.

With an admission webhook server we can use mutating admission webhooks to interpose on Pod creation and inject external secret data by adding an init container to Pods.

This issue tracks work to get a basic mutating admission webhook server running:

  • interposes on requests to create new Pods
  • identifies Pods with a .metadata.annotations['externalsecrets.kubernetes-client.io'] annotation (value can be anything, see Volume front end API discussion).
  • adds an init container that echoes 'hello world'

Inconsistent error handling for JSON secrets

When using JSON-formatted secrets and selecting properties in them, the error handling seems inconsistent.

https://github.com/godaddy/kubernetes-external-secrets/blob/d04cf1de5e3793522f3d46a9b9be9f25413e5f15/lib/backends/kv-backend.js#L34-L41

If there's a parsing error it's just swallowed with a warning log, while if the property doesn't exist it goes boom; shouldn't these be handled the same way? Not being able to parse the secret seems worse to me than not finding the correct property, though they are probably equally big issues 😄
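One way to make the two failure modes symmetric, sketched as a hypothetical replacement (not the project's actual kv-backend code):

```javascript
// Hedged sketch: both a parse failure and a missing property throw, so
// neither error path is silently swallowed with only a warning log.
function extractProperty(secretValue, property) {
  let parsed;
  try {
    parsed = JSON.parse(secretValue);
  } catch (err) {
    throw new Error(`secret is not valid JSON: ${err.message}`);
  }
  if (!(property in parsed)) {
    throw new Error(`property "${property}" not found in secret`);
  }
  return parsed[property];
}

console.log(extractProperty('{"password":"1234"}', 'password'));
```

Whether to throw or warn in both cases is the real question; the point is only that the two cases should agree.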

Failure while polling the secrets

I'm trying to get kubernetes-external-secrets to work, but it's currently failing and the error logs don't seem to show any useful clues as to why it's happening:

2019-05-16T00:22:18.787878009Z {"level":30,"time":1557966138787,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"running poll","v":1}
2019-05-16T00:22:18.78823308Z {"level":30,"time":1557966138788,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"fetching secret property github_access_token","v":1}
2019-05-16T00:22:18.882898059Z {"level":50,"time":1557966138879,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"failure while polling the secrets {}","v":1}
2019-05-16T00:22:28.791813549Z {"level":30,"time":1557966148791,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"running poll","v":1}
2019-05-16T00:22:28.791997103Z {"level":30,"time":1557966148791,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"fetching secret property github_access_token","v":1}
2019-05-16T00:22:28.797970739Z {"level":30,"time":1557966148797,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"stopping and removing poller github-access-token_9063402","v":1}
2019-05-16T00:22:28.798018978Z {"level":30,"time":1557966148797,"pid":16,"hostname":"kubernetes-external-secrets-5bd4b59b5d-bf8gj","msg":"stopping poller","v":1}

Any ideas?

I ssh'd into the pod, created a Node.js test script that fetches the secret, and it does work.

Clearer Documentation on property and name

The README shows the following section:

- key: hello-service/credentials
  name: username
  property: username

Please make clear what name and property are. Maybe it would help to change username to username_aws and username_kubernetes. Thanks in advance.

Changelog for 1.3.1

It seems the changelog could use more detail for the changes in 1.3.1 from #107 (which should probably have been 1.4.0 as well).
Since the PR was squashed, only the refactor made the changelog, while it included a few other things worth listing.

Unable to Fetch SSM Parameter

Hi, I'm attempting to use External Secrets to fetch a parameter (SecureString) stored in Systems Manager, but I am met with this error message:

{"level":50,"time":1563397164220,"pid":16,"hostname":"external-secrets-766d65c4d9-frzb4","type":"Error","stack":"TypeError [ERR_INVALID_ARG_TYPE]: The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type undefined\n at Function.from (buffer.js:207:11)\n at externalData.forEach (/app/lib/backends/kv-backend.js:72:43)\n at Array.forEach ()\n at SystemManagerBackend.getSecretManifestData (/app/lib/backends/kv-backend.js:71:18)\n at process._tickCallback (internal/process/next_tick.js:68:7)","msg":"The first argument must be one of type string, Buffer, ArrayBuffer, Array, or Array-like Object. Received type undefined","v":1}

My secrets YAML is defined as

kind: ExternalSecret
metadata:
  name: test-token
  namespace: default
secretDescriptor:
  backendType: systemManager
  data:
    - key: /path/to/test
      name: test

Fetching secrets from SecretsManager works as expected, but when attempting to fetch Parameters from SSM I am met with the error above.

SSM parameters not working

I am trying out the recently added SSM support (thank you @tmxak); however, the examples in the README are not working for me. I installed via Helm, overriding the image tag to 1.3.1 since the chart still installs 1.2.0 by default.

Using the commands and files from the README as-is, I see the following in the logs:

{"level":30,"time":1564662741812,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"loading kube specs","v":1} {"level":30,"time":1564662742016,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"successfully loaded kube specs","v":1} {"level":30,"time":1564662742016,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"updating CRD","v":1} {"level":30,"time":1564662742016,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"Upserting custom resource externalsecrets.kubernetes-client.io","v":1} {"level":30,"time":1564662742088,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"successfully updated CRD","v":1} {"level":30,"time":1564662742088,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"starting app","v":1} Thu, 01 Aug 2019 12:32:22 GMT kubernetes-client deprecated .getStream see https://github.com/godaddy/kubernetes-client/blob/master/merging-with-kubernetes.md at lib/external-secret.js:40:10 {"level":30,"time":1564662742091,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"successfully started app","v":1} {"level":30,"time":1564662842812,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"spinning up poller {\"id\":\"a4e32d63-b458-11e9-97b5-0ed027fcb918\",\"namespace\":\"tmp-jst\",\"secretDescriptor\":{\"backendType\":\"systemManager\",\"data\":[{\"key\":\"/hello-service/password\",\"name\":\"password\"}],\"name\":\"hello-service\"},\"ownerReference\":{\"apiVersion\":\"kubernetes-client.io/v1\",\"controller\":true,\"kind\":\"ExternalSecret\",\"name\":\"hello-service\",\"uid\":\"a4e32d63-b458-11e9-97b5-0ed027fcb918\"}}","v":1} {"level":30,"time":1564662842815,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"starting poller","v":1} 
{"level":30,"time":1564662852820,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"running poll","v":1} {"level":30,"time":1564662852821,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"fetching secret property password","v":1} {"level":50,"time":1564662853292,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","msg":"failure while polling the secrets","v":1} {"level":50,"time":1564662853292,"pid":18,"hostname":"infra-kubernetes-external-secrets-5879f96844-jj8lz","type":"Error","stack":"ParameterNotFound: null\n at Request.extractError (/app/node_modules/aws-sdk/lib/protocol/json.js:51:27)\n at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:106:20)\n at Request.emit (/app/node_modules/aws-sdk/lib/sequential_executor.js:78:10)\n at Request.emit (/app/node_modules/aws-sdk/lib/request.js:683:14)\n at Request.transition (/app/node_modules/aws-sdk/lib/request.js:22:10)\n at AcceptorStateMachine.runTo (/app/node_modules/aws-sdk/lib/state_machine.js:14:12)\n at /app/node_modules/aws-sdk/lib/state_machine.js:26:10\n at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:38:9)\n at Request.<anonymous> (/app/node_modules/aws-sdk/lib/request.js:685:12)\n at Request.callListeners (/app/node_modules/aws-sdk/lib/sequential_executor.js:116:18)","message":null,"code":"ParameterNotFound","time":"2019-08-01T12:34:13.289Z","requestId":"6a949ea3-ef3d-4bca-89fe-aa47c309c4b8","statusCode":400,"retryable":false,"retryDelay":58.61510348117784,"msg":null,"v":1}

The parameter exists:

aws ssm get-parameter --name "/hello-service/password"

{ "Parameter": { "Name": "/hello-service/password", "Type": "String", "Value": "1234", "Version": 2, "LastModifiedDate": 1564664235.796, "ARN": "arn:aws:ssm:us-east-1:XXXX:parameter/hello-service/password" } }

I confirmed the IAM role is working by revoking its access to ssm:GetParameter, which results in the following in the external-secrets logs:
"msg":"User: arn:aws:sts::XXXX:assumed-role/k8s-parameter_store_readonly/kiam-kiam is not authorized to perform: ssm:GetParameter on resource: arn:aws:ssm:us-west-2:XXXX:parameter/hello-service/password"

Parameterize the `type` of the Secret manifest

The fstab/cifs FlexVolume plugin (among others) requires secrets to be created with type: fstab/cifs. However, the upserted secrets are all created with type: Opaque, which results in errors such as:

MountVolume.SetUp failed for volume "foo-volume-mount" : Couldn't get secret foo-project/volume-mount-secret. err: Cannot get secret of type fstab/cifs

It would be helpful to parameterize the type used in the final manifest, and allow that to flow through to the poller as needed.
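A hedged sketch of what this could look like (the `type` field under `secretDescriptor` is hypothetical, not an existing feature, and the names are illustrative):

```yaml
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: cifs-credentials          # illustrative name
secretDescriptor:
  type: fstab/cifs                # hypothetical field: would flow through to the Secret's type
  backendType: secretsManager
  data:
    - key: cifs-creds             # illustrative key
      name: username
```

When omitted, the field would default to `type: Opaque`, preserving the current behavior.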

Action required: Greenkeeper could not be activated 🚨

🚨 You need to enable Continuous Integration on Greenkeeper branches of this repository. 🚨

To enable Greenkeeper, you need to make sure that a commit status is reported on all branches. This is required by Greenkeeper because it uses your CI build statuses to figure out when to notify you about breaking changes.

Since we didn’t receive a CI status on the greenkeeper/initial branch, it’s possible that you don’t have CI set up yet. We recommend using Travis CI, but Greenkeeper will work with every other CI service as well.

If you have already set up a CI for this repository, you might need to check how it’s configured. Make sure it is set to run on all new branches. If you don’t want it to run on absolutely every branch, you can whitelist branches starting with greenkeeper/.

Once you have installed and configured CI on this repository correctly, you’ll need to re-trigger Greenkeeper’s initial pull request. To do this, please click the 'fix repo' button on account.greenkeeper.io.

Cross account secret retrieval

Hi. I'm trying to use kubernetes-external-secrets to access some secrets from another AWS account. The trust and secret retrieval work as expected using the AWS CLI, but they fail with kubernetes-external-secrets with the error:

{"level":50,"time":1559737055540,"pid":16,"hostname":"kubernetes-external-secrets-68cd796f7b-fjp67","msg":"failure while polling the secrets {\"message\":\"Invalid name. Must be a valid name containing alphanumeric characters, or any of the following: -/_+=.@!\",\"code\":\"ValidationException\",\"time\":\"2019-06-05T12:17:35.540Z\",\"requestId\":\"884b173f-xxxx-xxxx-xxxx-f916def6b609\",\"statusCode\":400,\"retryable\":false,\"retryDelay\":84.08545324727217}","v":1}

My deployment file looks like below:

---
apiVersion: 'kubernetes-client.io/v1'
kind: ExternalSecret
metadata:
  name: test-secret
secretDescriptor:
  backendType: secretsManager
  data:
    - key: "arn:aws:secretsmanager:eu-west-2:111111111110:secret"
      name: "secret100"

The aws cli pull looks like below:

aws secretsmanager get-secret-value --secret-id arn:aws:secretsmanager:eu-west-2:111111111110:secret:jenkins100 --region eu-west-2


{
    "Name": "secret100",
    "VersionId": "3397d6e1-xxxx-xxxx-xxxx-293cd1ea6f02",
    "SecretString": "mySecurePassword",
    "VersionStages": [
        "AWSCURRENT"
    ],
    "CreatedDate": 1559259096.015,
    "ARN": "arn:aws:secretsmanager:eu-west-2:111111111110:secret:secret100-jNpM3x"
}
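One possible cause, purely a guess from comparing the two calls: the `key` in the manifest is the bare ARN prefix, while the working CLI call uses the full ARN including the secret-name suffix. A sketch of the manifest data using the same full ARN as the CLI:

```yaml
secretDescriptor:
  backendType: secretsManager
  data:
    - key: "arn:aws:secretsmanager:eu-west-2:111111111110:secret:jenkins100"
      name: "secret100"
```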

Could I please get some help with this? Am I doing something wrong?

Thanks

AWS Region should be required

The config defaults to us-west-2 for the AWS region, which seems arbitrary and caused some pain while troubleshooting. Instead, the AWS region should be set explicitly in the deployment, and the documentation should specify where to modify the value if needed.
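Until then, a minimal sketch of setting the region explicitly on the controller container (the `AWS_REGION` environment variable appears in deployment examples elsewhere in this tracker; the region value is illustrative):

```yaml
env:
  - name: AWS_REGION
    value: eu-west-2   # replace with your own region
```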

Support for IAM Roles for self managed cluster with Kiam

Hey Team,

I am working with IAM roles to access AWS services. I am building a self-managed cluster on AWS with the help of the kops tool. All my app deployments use IAM roles to access AWS services via the Kiam service.

I have added the AWS role in the deployment configuration as follows.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: kubernetes-external-secrets
  name: kubernetes-external-secrets
  namespace: kubernetes-external-secrets
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kubernetes-external-secrets
  template:
    metadata:
      labels:
        name: kubernetes-external-secrets
        service: kubernetes-external-secrets
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::12312312312:role/Development-Secrets-Manager-Role
    spec:
      serviceAccountName: kubernetes-external-secrets-service-account
      containers:
        - image: "godaddy/kubernetes-external-secrets:1.2.2"
          imagePullPolicy: Always
          name: kubernetes-external-secrets
          env:
            - name: AWS_REGION
              value: ap-southeast-1

It doesn't work; it throws errors about being unable to find credentials. However, it works if I provide AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables in the deployment.

Could you please help me figure out if I am doing something wrong?
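One thing worth checking with kiam (an assumption, not confirmed from this report): kiam only lets pods assume roles that the pod's namespace explicitly permits via the `iam.amazonaws.com/permitted` annotation, so the namespace may need something like:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-external-secrets
  annotations:
    iam.amazonaws.com/permitted: ".*"   # or a stricter regex matching the role name
```

Without this annotation, kiam refuses the role assumption and the pod falls back to the default credential chain, which would produce "unable to find credentials" errors like those described.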

Create a Helm Chart

To make installation of the kubernetes-external-secrets controller and related resources a bit easier, a helm chart can be added to this repository in the first instance (and hopefully promoted to https://github.com/helm/charts/tree/master/incubator for public consumption).

This Helm Chart can also allow easy configuration of the environment variables for the container, like AWS_REGION and the events / poller intervals (addresses #51).

PodAnnotations can also be used to configure kube2iam roles, for example to grant access to AWS Secrets Manager.

Init Container / Volume frontend

Implement a merge-patch in the admission webhook server to inject an Init Container that implements the "memory" API. The Init Container example might provide some useful implementation details.

  • End-to-end implementation of "memory" frontend
  • Update documentation for using the memory frontend

After completing this work, kubernetes-external-secrets should have full support for a memory front end (see #46).

Support for Annotations

Thanks for the great piece of software. I'm looking to use external-secrets alongside kubernetes-replicator, and in order to automatically duplicate my external secrets to different namespaces they need to contain the annotations:

    replicator.v1.mittwald.de/replication-allowed: "true"
    replicator.v1.mittwald.de/replication-allowed-namespaces: .*

However, when I create my external secret with these annotations, they are not carried over into the kubernetes secret object, preventing them from being replicated.

Please consider passing all/some annotations from the ExternalSecret object to the Secret object.
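A minimal sketch of how the controller's upsert could carry annotations over (the function and its shape are illustrative, not the project's actual internals):

```javascript
// Hypothetical helper: build the Secret manifest from an ExternalSecret,
// copying the ExternalSecret's annotations onto the resulting Secret.
function buildSecretManifest (externalSecret, data) {
  const { name, namespace, annotations = {} } = externalSecret.metadata
  return {
    apiVersion: 'v1',
    kind: 'Secret',
    metadata: {
      name,
      namespace,
      // pass annotations through so tools like kubernetes-replicator can act on them
      annotations: { ...annotations }
    },
    type: 'Opaque',
    data
  }
}
```

A variant could copy only an allow-listed subset of annotations, to avoid leaking controller-internal metadata onto the Secret.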

Steps to reproduce

Create an ExternalSecret with the annotations:

apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  annotations:
    replicator.v1.mittwald.de/replication-allowed: "true"
    replicator.v1.mittwald.de/replication-allowed-namespaces: .*
  name: datadog
  namespace: default
secretDescriptor:
  backendType: secretsManager
  data:
  - key: datadog
    name: datadog-api-key
    property: datadog-api-key
  - key: datadog
    name: datadog-auth-token
    property: datadog-auth-token

See that the resulting Secret does not contain annotations:

apiVersion: v1
data:
  datadog-api-key: xxx
  datadog-auth-token: xxx
kind: Secret
metadata:
  creationTimestamp: "2019-07-28T16:26:16Z"
  name: datadog
  namespace: default
  ownerReferences:
  - apiVersion: kubernetes-client.io/v1
    controller: true
    kind: ExternalSecret
    name: datadog
    uid: 86859120-affc-11e9-a909-025f2055801a
  resourceVersion: "649857"
  selfLink: /api/v1/namespaces/default/secrets/datadog
  uid: 6c3f8ecf-b154-11e9-a909-025f2055801a
type: Opaque

No way to install via kubectl?

I was directed here from this blog: https://godaddy.github.io/2019/04/16/kubernetes-external-secrets/

I was looking into using this tool but noticed that deploying via kubectl was recently removed and Helm is now the official way to install. Is there a workaround, or are there plans to re-introduce deploying with kubectl? Having to install Helm just to install this tool is a real bummer.

PR where helm was setup as the official way to install: #68
When external-secrets.yml was removed: #70
Someone else asked a similar question here: #72
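One workaround that avoids installing anything Helm-side in the cluster is rendering the chart locally and applying the output with kubectl (the chart path is an assumption based on the repository layout):

```shell
# Render the chart to plain manifests, then apply them with kubectl only.
helm template ./charts/kubernetes-external-secrets > external-secrets.yml
kubectl apply -f external-secrets.yml
```

This still requires the helm binary on a workstation, but nothing Helm-specific runs in the cluster.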

Will this work on google container engine kubernetes?

On EKS I can just use IAM roles, but on GKE I can't do that.

Can I use the default AWS environment variables to set the secret access key and ID, so I can create an IAM user my GKE clusters can use to get access to Secrets Manager?
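For reference, a sketch of wiring static AWS credentials into the controller Deployment via a Kubernetes Secret (the Secret name and keys are illustrative):

```yaml
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: aws-credentials        # hypothetical Secret holding the IAM user's keys
        key: access-key-id
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: aws-credentials
        key: secret-access-key
```

Sourcing the values from a Secret rather than inlining them keeps the credentials out of the Deployment manifest itself.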
