
vault-k8s's Issues

add support for setting "command" on templates

Use case: we need to run a command (e.g., send a SIGHUP) when secrets are updated.

A useful example is nginx fetching mTLS certs from Vault. After the cert is updated, we need to send a SIGHUP to trigger nginx to reload the certs.

It looks like this is currently possible using the ConfigMap escape hatch, but this case is common enough for us that having to use the ConfigMap approach is significant friction.

Example of how this might look as annotations:

        vault.hashicorp.com/agent-inject-secret-tls.pem: "pki/issue/ou-nobody"
        vault.hashicorp.com/agent-inject-template-tls.pem: |
           {{- with secret "pki/issue/ou-nobody" "common_name=foo.example.com" "ttl=5m" }}
           {{ .Data.private_key }}
           {{ .Data.certificate }}
           {{ .Data.issuing_ca }}
           {{ end }}
        vault.hashicorp.com/agent-inject-command-tls.pem: "/bin/sh -c 'pkill -HUP nginx || true'"

  spec:
    # shared PID namespace so the vault-agent sidecar can send signals to nginx in the app container
    shareProcessNamespace: true

Using vault-k8s from a different cluster

We're currently hosting Vault in a dedicated Kubernetes cluster. Ideally, we'd like to use vault-k8s to authenticate service accounts from other clusters.

Obviously this would not work out of the box, as the Vault admission controller currently lives only in the Vault cluster, not in the other clusters that actually run the applications.

Is there a recommended way of making this work? I'm currently thinking along these lines:

In Vault:

  • Configure Kubernetes auth , once per application cluster (e.g. vault auth enable --path kubernetes/$CLUSTER_NAME kubernetes). The config of each auth method would store credentials of a vault-auth service account from the relevant application cluster. Each vault-auth service account would have a ClusterRoleBinding to the system:auth-delegator ClusterRole in their cluster.

In each application cluster:

  • Deploy a Vault agent, with the injector enabled.
  • Create a ConfigMap along these lines, where vault.address is set to the address of our Vault instance, with relevant certs and keys included.

This way, IIUC, when I deploy a pod with the Vault annotations to the cluster, it will be caught by the admission controller, which will then talk to our Vault instance and attempt to authenticate the JWT from the application service account.

Is this valid? Or am I making things way too complex? What's the best way to authenticate an application running in a cluster that's separate from the cluster running Vault?

Support projected ServiceAccount volumes

The current implementation uses the workload's normal ServiceAccount token at /var/run/secrets/kubernetes.io/serviceaccount/token. This token does not expire, and it grants access to the Kubernetes API, allowing Vault to impersonate the workload.

Support for projected ServiceAccount volumes would allow the token to be rotated, and would mean that an intercepted token could only be used to authenticate to Vault (rather than having wider impact against the Kubernetes API).
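For reference, a projected ServiceAccount token volume looks like this in the pod spec (the audience and expiry values are illustrative); the kubelet rotates the token, and the audience restricts where it is accepted:

```yaml
spec:
  containers:
    - name: app
      image: app:1.0.0
      volumeMounts:
        - name: vault-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: vault-token
      projected:
        sources:
          - serviceAccountToken:
              path: vault-token
              audience: vault
              expirationSeconds: 600
```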

vault-k8s and istio service mesh don't work together

I did the steps described here and it worked great.

The problem appears when I add Istio to the namespace: the vault-agent-init container can't start correctly because no network is available yet.

Is there a way to use just the vault-agent sidecar and skip the vault-agent-init container? Is there any configuration that would execute the init container's work inside the vault-agent sidecar instead?

I found this comment in the container_init_sidecar.go code and I'm not sure if it's safe to execute everything inside the sidecar container.

Define order of vault init container

I have an application that uses an init container to run database migrations. This init container needs secrets from Vault.

The Vault init container is injected, but it never starts because the db migration init container starts first.

Is it possible to get the vault init container to run first?
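For context, Kubernetes runs init containers strictly in the order they appear in the pod spec, so what I need is for the injected container to end up first (container names and images here are illustrative):

```yaml
spec:
  initContainers:
    - name: vault-agent-init   # would need to run first, rendering /vault/secrets/*
      image: vault:1.3.1
    - name: db-migrate         # could then read the rendered secrets
      image: my-migrations:1.0.0
```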

missing client token

Using the latest Vault injector 0.2.0, I've created a simple test pod and provided the corresponding serviceAccountName, but the vault-agent-init container gives an error:

auth.handler: error authenticating: error="Error making API request.

URL: PUT https://vault.vault.svc:8200/v1/clusters/services/login
Code: 400. Errors:

* missing client token" backoff=1.6070817979999998

I've checked that the service account token is mounted and available at /var/run/secrets/kubernetes.io/serviceaccount/token.

What could be the reason? I don't see any configuration parameters related to the token or the service account.
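One thing that stands out: the failing URL above has no auth/ segment, whereas a Kubernetes login normally goes to v1/auth/&lt;mount&gt;/login, which suggests the auth path configuration may be off. A manual login attempt from inside the pod, for comparison (the role name here is hypothetical):

```shell
# Try the Kubernetes auth login by hand with the mounted SA token.
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -s -X PUT \
    -d "{\"jwt\": \"$JWT\", \"role\": \"my-role\"}" \
    https://vault.vault.svc:8200/v1/auth/kubernetes/login
```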

Injector with external Vault server?

The scenario we want to support is a Vault server that predates the Kubernetes cluster. We want the vault-k8s injector to talk to this Vault server.

Is this scenario supported? I see that the injector command (vault-k8s/subcommand/injector/flags.go) takes a vault-address argument, and that the deployment (vault-helm/templates/injector-deployment.yaml) sets the AGENT_INJECT_VAULT_ADDR env variable.

If the injector can be configured to use an external server, please document an example configuration that also disables installing Vault as a service inside Kubernetes.

usage of injector in deployments defined in helm chart

When defining an injection template annotation in a Deployment that is itself part of a Helm chart, the following syntax is required because Helm and the Vault agent both use Go templates (adapted from one of the examples on the website):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-example
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-example-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-example
  template:
    metadata:
      labels:
        app: app-example
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/db-app"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          # for helm we have to escape golang templates to use golang templates for vault
          {{ printf `{{- with secret \"database/creds/db-app\" -}}
          postgres://{{ .Data.username }}:{{ .Data.password }}@postgres:5432/appdb?sslmode=disable
          {{- end }}` }}
        vault.hashicorp.com/role: "db-app"
        vault.hashicorp.com/ca-cert: "/vault/tls/ca.crt"
        vault.hashicorp.com/client-cert: "/vault/tls/client.crt"
        vault.hashicorp.com/client-key: "/vault/tls/client.key"
        vault.hashicorp.com/tls-secret: "vault-tls-client"
    spec:
      containers:
        - name: app
          image: "app:1.0.0"
      serviceAccountName: app-example

This syntax is not immediately obvious, and I understand it would be difficult to change. Could a documentation example be added covering the use of Vault inject templates inside a Helm chart?

Support structured json logging

Currently, only an environment setting for log_level is available.
It would be great if JSON logging were supported, particularly since hclog already supports it.

Configure default values for annotations globally

Some annotations, such as vault.hashicorp.com/ca-cert or resource limits, tend to be fixed across the cluster but currently have to be configured per pod. Globally configurable default values would avoid the repetition and provide a clear separation between consumer and provider.

Issue with Vault HA setup

I set up a Vault + Consul HA deployment on AWS using the following repos:
https://github.com/hashicorp/vault-helm
https://github.com/hashicorp/consul-helm
Steps followed:

  1. Installed consul
    helm install consul ./consul-helm/
  2. Installed Vault
    helm install vault -f values-eks.yaml ./vault-helm/
    values-eks.yaml contains
cat >~/vault-eks/values-eks.yaml <<EOL
server:
  ha:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      seal "awskms" {
        region     = "us-east-1"
        kms_key_id = "xxx"
      }

      storage "consul" {
        path = "vault"
        address = "HOST_IP:8500"
      }
EOL

The above steps booted 3 Vault pods and 3 Consul pods, along with 3 Consul servers spread evenly across 3 nodes.

  3. Initialised Vault
    kubectl exec -it vault-0 -- vault operator init

  4. Unsealed Vault
    kubectl exec -it vault-0 -- vault operator unseal <unseal key from step 3>

Modified Vault svc type to loadbalancer and got an ELB URL

  5. Installed the Vault binary on another machine and set these variables:
export VAULT_ADDR='http://elbdomain:8200'
export VAULT_SA_NAME=$(kubectl get sa vault -o jsonpath="{.secrets[*]['name']}")
export VAULT_TOKEN="my initial token"
export SA_JWT_TOKEN=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data.token}" | base64 --decode; echo)
export SA_CA_CRT=$(kubectl get secret $VAULT_SA_NAME -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
  6. Enabled the Kubernetes auth method:
vault auth enable kubernetes
vault write auth/kubernetes/config token_reviewer_jwt="$SA_JWT_TOKEN" kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" kubernetes_ca_cert="$SA_CA_CRT"
  7. Created a policy and role:
cat <<EOF > ./app-policy.hcl
path "secret*" {
  capabilities = ["read"]
}
EOF

vault policy write app ./app-policy.hcl

vault write auth/kubernetes/role/myapp \
   bound_service_account_names=app \
   bound_service_account_namespaces=demo \
   policies=app \
   ttl=24h

vault secrets enable -path=secret/ kv
vault kv put secret/helloworld username=foobaruser password=foobarbazpass

In my app deployment I added these annotations:

spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-status: "update"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secret/helloworld"
        vault.hashicorp.com/role: "myapp"

I can see that the secrets are mounted properly inside the pod.

To test HA, I drained a particular node:
kubectl drain ip-xxx.ec2.internal --ignore-daemonsets --delete-local-data

After that, my application pod went into Init status:

NAME                   READY   STATUS     RESTARTS   AGE  
app-864b96c9b6-b7lxw   0/2     Init:0/1   0          41h

I brought the node back with the uncordon command, but my application pod still shows the same status.
Please let me know what the problem might be, and how I can attain maximum HA if, say, an entire zone goes down and causes this type of problem.

Fetching Vault minted PKI certificates

Thanks for open sourcing the sidecar!

Looking at the code, this Vault sidecar agent is built only to fetch secrets from a Vault server; it cannot be used to trigger the generation of a new PKI certificate (https://www.vaultproject.io/docs/secrets/pki/index.html) and then fetch it, right? The only place I see certificate generation is the self-signed certificates in https://github.com/hashicorp/vault-k8s/blob/master/helper/cert/source_gen.go#L117-L160.

The reason I ask: the use case is to have the sidecar ask Vault to generate a new PKI cert and share it with the other containers in the Pod via a shared volume. That way, Vault's short-lived PKI certs feature is directly supported by this agent, and the certs can be refreshed easily via the sidecar.

Is this feature supported, or is it planned for development in the future? Please let me know. Thanks!
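For what it's worth, the template language used by the injector can pass write parameters to secret (as the nginx mTLS example earlier in this list shows), so a pki/issue call should be expressible as a template; the role name and parameters here are illustrative:

```
{{- with secret "pki/issue/my-role" "common_name=app.example.com" "ttl=24h" -}}
{{ .Data.certificate }}
{{ .Data.private_key }}
{{ .Data.issuing_ca }}
{{- end }}
```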

Injection secrets as ENV vars?

Hello, and thanks for the great tool; now we can use an official one instead of homegrown/third-party tools :)

In reality, most services expect secrets as env vars, since we all started from plain Kubernetes Secrets :) With the injector they can avoid making themselves Vault-aware, but they still need logic to pick up secrets from the filesystem, which is especially challenging if you have hundreds of different services/teams and expect all of them to add this functionality first...
Injecting secrets with third-party tools via sidecars wasn't helpful here either, because there is only one way to pass the secret to the app: via a shared volume.

So maybe with more native k8s integration we could have a chance to inject secrets as ENV vars?
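A common workaround, pending native support, is to render the secret as export statements and source the file in the container entrypoint. A self-contained sketch of the pattern (the file path and variable names are hypothetical; in a real pod the injector template, not this script, would write the file):

```shell
# In a real pod the injector renders this file; here we simulate it so the
# pattern can be tried standalone.
mkdir -p /tmp/vault-secrets
cat > /tmp/vault-secrets/config <<'EOF'
export DB_USER="demo-user"
export DB_PASS="demo-pass"
EOF

# Entrypoint pattern: source the rendered file, then exec the real app.
. /tmp/vault-secrets/config
echo "connecting as $DB_USER"   # prints: connecting as demo-user
```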

Support Revoking Vault Token on Pod Termination

This is related to hashicorp/vault#6492

Currently, in my (manual) sidecars for Vault Agents, I add a preStop hook to revoke the Vault token on pod termination.

        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - /bin/sleep 10 && /bin/vault token revoke -self

This example requires that the Vault token be written to $HOME/.vault-token too.

Secrets rendered to volumes with serialization data

I've got everything seemingly working well; the injector is injecting, but when I look at the data in the container I see something of the following form:

# cat /vault/secret/my-kube-secret
data: map[password:SuperSecretPassword!]
metadata: map[created_time:2020-02-07T23:07:30.810474124Z deletion_time: destroyed:false version:1]

This happens when using the default template as well as a custom template. Per the docs here, no such serialization data should be rendered.

The server is Vault 1.3.2 and the issue happens with both Vault 1.3.1 and 1.3.2 for the injector.

Am I misunderstanding the template documentation in some fashion?
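The output above looks like a KV version 2 secret, where the payload is nested under .Data.data; if that is the case, a template that addresses the nested fields explicitly should render cleanly (the mount and key names here are guesses based on the output above):

```
{{- with secret "secret/data/my-kube-secret" -}}
password: {{ .Data.data.password }}
{{- end }}
```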

Mechanism for injecting vault-ca.crt

Very excited for this vault/k8s integration!

In our setup, we have Vault installed in its own "vault" namespace. It has a cluster-readable ConfigMap ("cm/vault-ca") that contains ".data.ca.crt". Currently, any Deployments/Jobs that want to fetch a secret from Vault GET "cm/vault-ca" from the k8s API server and then use it to do TLS validation when communicating with Vault.

With the injector I see that there is a vault.hashicorp.com/tls-secret annotation that will mount a Secret containing TLS certs (and that mount point can be referenced by vault.hashicorp.com/ca-cert), but since our "vault-ca" ConfigMap isn't a Secret (and, more importantly, is in a different namespace) we can't mount it. We could duplicate the contents of "cm/vault-ca" into Secrets in each namespace, but then we'd have to keep them in sync and protect them from tampering.

Since we've already installed the injector alongside vault in the "vault" namespace and since it's already modifying Pods as they're created, it seems like it would be able to actually insert the correct CA into the vault-agent init containers/sidecars (much like it already does with the $VAULT_CONFIG environment variable).

Injector: External Vault Support

I would like the injector to reach out to an externally hosted vault that is not in the same k8s cluster. Would it be within the scope of this project to provide support for this?

Currently, the main limitation I see is the Kubernetes auth method requirement.

Please add annotation for auth path

In my opinion, a single vault cluster for multiple k8s clusters will be a common use case.

Please add an annotation for auth path. For example, if I have setup auth for 2 clusters like this:

vault auth enable -path=cl1 kubernetes
vault auth enable -path=cl2 kubernetes

...then auth does not work, because the injector always uses the path auth/kubernetes, and I need it to use auth/cl1 or auth/cl2.

I have seen that it is possible to do this with a config map, but doing it that way removes the benefit of the simplicity provided by this project.

Error when using templates

I'm getting an error when using templates with the injector, and I cannot find any documentation to enlighten me.
When injecting directly through an annotation (like vault.hashicorp.com/agent-inject-secret-demo: "demo/secret") I get no error, but when using a template (whether in an annotation or a ConfigMap) I get this error:

Error loading configuration from /vault/configs/config-init.hcl: At 22:44: expected '/' for comment

Here is the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
  labels:
    app: hello
data:
  config.hcl: |
    "auto_auth" = {
      "method" = {
        "config" = {
          "role" = "demo"
        }
        "type" = "kubernetes"
      }

      "sink" = {
        "config" = {
          "path" = "/vault/.token"
        }

        "type" = "file"
      }
    }

    "exit_after_auth" = false
    "pid_file" = "/vault/.pid"

    "template" = {
      "contents" = "demo/secret"
      "destination" = "/vault/secrets/demo"
    }

    "vault" = {
      "address" = "http://vault:8200"
    }
  config-init.hcl: |
    "auto_auth" = {
      "method" = {
        "config" = {
          "role" = "demo"
        }
        "type" = "kubernetes"
      }

      "sink" = {
        "config" = {
          "path" = "/vault/.token"
        }

        "type" = "file"
      }
    }

    "exit_after_auth" = true
    "pid_file" = "/vault/.pid"

    "template" = {
      "contents" = "{{ secret "demo/secret" }}"
      "destination" = "/vault/secrets/demo"
    }

    "vault" = {
      "address" = "http://vault:8200"
    }

I also get the same error when doing something like this:

{{- with secret "demo/secret" -}}{{ .Data.key }}{{- end }}

Any suggestions?
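The position in the parse error lines up with the unescaped inner quotes in the contents value of config-init.hcl: HCL sees the string end at the quote before demo/secret and then chokes. Escaping the inner quotes should let the file parse, e.g.:

```hcl
"template" = {
  "contents" = "{{- with secret \"demo/secret\" -}}{{ .Data.key }}{{- end }}"
  "destination" = "/vault/secrets/demo"
}
```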

Rejects admission for system namespaces

The check for system namespaces rejects the admission request, even though the annotations are not present, because the namespace check occurs before the shouldInject check. This results in system components being unable to deploy.

h.Log.Debug("checking namespaces..")
if strutil.StrListContains(kubeSystemNamespaces, req.Namespace) {
	err := fmt.Errorf("error with request namespace: cannot inject into system namespaces: %s", req.Namespace)
	return admissionError(err)
}

(the shouldInject check should happen prior to the namespace check)
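A sketch of the suggested ordering (identifiers follow the snippet above; the exact names and signatures in the handler may differ):

```go
// Decide injection first, so unannotated pods are admitted unchanged and the
// namespace guard applies only to pods that actually request injection.
inject, err := shouldInject(&pod)
if err != nil {
	return admissionError(err)
}
if !inject {
	return resp // no injection requested: admit the pod unchanged
}
if strutil.StrListContains(kubeSystemNamespaces, req.Namespace) {
	return admissionError(fmt.Errorf("cannot inject into system namespaces: %s", req.Namespace))
}
```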

Put secrets into kubernetes Kind: Secret

Hi, I use Vault to store certificates for HTTPS/TLS and other purposes. In my case I want to put a secret into a kind: Secret or a ConfigMap for use in an Ingress. How can I do that?

User vault-injector cannot patch resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope

I am trying to deploy the Vault injector locally. I made two changes:

  • I use HTTP, so I replaced all HTTPS with HTTP in the YAML files under develop, and port 443 with 80.
  • I use the "default" namespace instead of the "vault" namespace.

All I did afterwards was replace the Vault URL with my Vault's URL, and use kubectl to create the objects described in the four injector YAML files under develop.

This is the output of the vault-injector-XXXXXXX pod:

Listening on ":8080"...
2020-01-03T13:50:17.146Z [INFO]  handler: Starting handler..
Updated certificate bundle received. Updating certs...
Error updating MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io "vault-agent-injector-cfg" is forbidden: User "system:serviceaccount:default:vault-injector" cannot patch resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope
Error updating MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io "vault-agent-injector-cfg" is forbidden: User "system:serviceaccount:default:vault-injector" cannot patch resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope
Error updating MutatingWebhookConfiguration: mutatingwebhookconfigurations.admissionregistration.k8s.io "vault-agent-injector-cfg" is forbidden: User "system:serviceaccount:default:vault-injector" cannot patch resource "mutatingwebhookconfigurations" in API group "admissionregistration.k8s.io" at the cluster scope

Does anyone know where this permission problem comes from?

allowPrivilegeEscalation required

We make use of Pod Security Policies. In most cases we do not allow privilege escalation (for obvious reasons). However, when deploying an application with Vault agent injection, the following error is output and the pod fails to start:

Pods "vault-k8s-agent-webhook-demo-78d6d5dc65-hd7h5" is forbidden: unable to validate against any pod security policy: [spec.containers[0].securityContext.allowPrivilegeEscalation: Invalid value: true: Allowing privilege escalation for containers is not allowed]; Deployment does not have minimum availability.

I updated one of the security policies applied to the namespace where I was testing as follows; the pod then launched normally with both vault-agent-init and vault-agent containers, and the app was able to access the secrets declared in the templates:

allowPrivilegeEscalation: true

Obviously I would prefer NOT to allow privilege escalation (I'm certain my CISO colleagues will be uncomfortable with that).

Is it possible not to do so, or would you consider a feature request to remove that need?

Failing that, can you explain why it is needed, and perhaps suggest what I can tell my CISO about the level of risk?

Kind Regards

Fraser.

Auto_auth configuration fails with non-default path

The injector generates a Vault config like this:

{
 "auto_auth": {
    "method": {
      "type": "kubernetes",
      "config": {
        "role": "demoservice-role"
      }
    }
...
}

Authentication fails when the kubernetes auth plugin is mounted at any path other than "kubernetes".
There seems to be no option to define the path at which the plugin is mounted.

Kindly requesting that configuration option, as we have more than one Kubernetes environment connected to our Vault installation.

Thanks,

Siim

Add support for token only

There are use cases where a valid token could be shared with other containers in the pod. I think an annotation could be added to support this use case.

vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-token-only: "true"
vault.hashicorp.com/role: "myrole"

vs

vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-secret-token: "auth/token/lookup-self"
vault.hashicorp.com/agent-inject-template-token: |
    {{- with secret "auth/token/lookup-self" -}}
    {{ .Data.id }}
    {{- end }}
vault.hashicorp.com/role: "myrole"

Multi Cluster K8S environment: App and Vault are not on same cluster; Demo app is not fetching secrets. Code 500

Cluster A = Consul + Vault + Vault Injector
Cluster B = Vault injector communicating with Vault installed in Cluster A

I have Consul + Vault installed on one Kubernetes cluster. On the other cluster, the vault-k8s injector has been installed successfully (https://github.com/hashicorp/vault-k8s.git).

The init pod returns the following errors (Error making API request. Code 500). The Vault address has been changed to http://vlt.consulvault.172.31.101.63.xip.io. Both clusters are on the same network, and a curl command returns a response.
I think I may have to pass the root token (or register a secret with Vault) in order to authenticate.

To authenticate, I applied the configuration below, but I don't know how I can point the injector (cluster B) from http://vlt.consulvault.172.31.101.63.xip.io/v1/auth/kubernetes/login to http://vlt.consulvault.172.31.101.63.xip.io/v1/auth/kube-cluster-A

vault write auth/kube-cluster-A/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host=https://${KUBERNETES_PORT_443_TCP_ADDR}:443 \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

Is there any way to assign the path to Vault injector on cluster B?

Thank you

curl \
>     -H "X-Vault-Token: token" \
>     -X GET \
>     http://vlt.consulvault.172.31.101.63.xip.io/v1/secret/helloworld
{"request_id":"***","lease_id":"","renewable":false,"lease_duration":2764800,"data":{"password":"foobarbazpass","username":"foobaruser"},"wrap_info":null,"warnings":null,"auth":null}


==> Vault server started! Log data will stream in below:
2020-01-28T18:27:10.042Z [INFO] sink.file: creating file sink
2020-01-28T18:27:10.042Z [INFO] sink.file: file sink configured: path=/home/vault/.token mode=-rw-r-----
==> Vault agent configuration:
Cgo: disabled
Log Level: info
Version: Vault v1.3.1
2020-01-28T18:27:10.042Z [INFO] auth.handler: starting auth handler
2020-01-28T18:27:10.042Z [INFO] auth.handler: authenticating
2020-01-28T18:27:10.042Z [INFO] template.server: starting template server
2020/01/28 18:27:10.042917 [INFO] (runner) creating new runner (dry: false, once: false)
2020/01/28 18:27:10.043280 [INFO] (runner) creating watcher
2020-01-28T18:27:10.043Z [INFO] sink.server: starting sink server
2020-01-28T18:27:10.275Z [ERROR] auth.handler: error authenticating: error="Error making API request.
URL: PUT http://vlt.consulvault.172.31.101.63.xip.io/v1/auth/kubernetes/login
Code: 500. Errors:

* lookup failed: [invalid bearer token, square/go-jose: error in cryptographic primitive]" backoff=2.382848811
2020-01-28T20:21:34.792Z [INFO] auth.handler: authenticating
2020-01-28T20:21:34.806Z [ERROR] auth.handler: error authenticating: error="Error making API request.
URL: PUT http://vlt.consulvault.172.31.101.63.xip.io/v1/auth/kubernetes/login
Code: 500. Errors:
* lookup failed: [invalid bearer token, square/go-jose: error in cryptographic primitive]" backoff=1.805414534
2020-01-28T20:21:36.612Z [INFO] auth.handler: authenticating
2020-01-28T20:21:36.758Z [ERROR] auth.handler: error authenticating: error="Put http://vlt.consulvault.172.31.101.63.xip.io/v1/auth/kubernetes/login: dial tcp: lookup vlt.consulvault.172.31.101.63.xip.io on 10.43.0.10:53: no such host" backoff=1.998397118
2020-01-28T20:21:38.757Z [INFO] auth.handler: authenticating
2020-01-28T20:21:38.761Z [ERROR] auth.handler: error authenticating: error="Put http://vlt.consulvault.172.31.101.63.xip.io/v1/auth/kubernetes/login: dial tcp: lookup vlt.consulvault.172.31.101.63.xip.io on 10.43.0.10:53: no such host" backoff=2.780268301
2020-01-28T20:21:41.541Z [INFO] auth.handler: authenticating

Vault Injector YAML

---
# Source: vault/templates/injector-deployment.yaml
# Deployment for the injector
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault-agent-injector
  namespace: consulvault
  labels:
    app.kubernetes.io/name: vault-agent-injector
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: Helm
    component: webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: vault-agent-injector
      app.kubernetes.io/instance: vault
      component: webhook
  template:
    metadata:
      labels:
        app.kubernetes.io/name: vault-agent-injector
        app.kubernetes.io/instance: vault
        component: webhook
    spec:
      serviceAccountName: "vault-agent-injector"
      securityContext:
        runAsNonRoot: true
        runAsGroup: 1000
        runAsUser: 100
      containers:
        - name: sidecar-injector

          image: "hashicorp/vault-k8s:0.1.0"
          imagePullPolicy: "IfNotPresent"
          env:
            - name: AGENT_INJECT_LISTEN
              value: ":8080"
            - name: AGENT_INJECT_LOG_LEVEL
              value: info
            - name: AGENT_INJECT_VAULT_ADDR
              value: http://vlt.consulvault.172.31.101.63.xip.io
            - name: AGENT_INJECT_VAULT_IMAGE
              value: "vault:1.3.1"
            - name: AGENT_INJECT_TLS_AUTO
              value: vault-agent-injector-cfg
            - name: AGENT_INJECT_TLS_AUTO_HOSTS
              value: vault-agent-injector-svc,vault-agent-injector-svc.consulvault,vault-agent-injector-svc.consulvault.svc
          args:
            - agent-inject
            - 2>&1
          livenessProbe:
            httpGet:
              path: /health/ready
              port: 8080
              scheme: HTTPS
            failureThreshold: 2
            initialDelaySeconds: 1
            periodSeconds: 2
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
              scheme: HTTPS
            failureThreshold: 2
            initialDelaySeconds: 2
            periodSeconds: 2
            successThreshold: 1
            timeoutSeconds: 5
---

EKS/Weave Net CNI webhook issue

We're running on EKS with the Weave Net CNI provider. We ran into an issue where the webhook couldn't reach the service. CloudWatch showed this error:

Failed calling webhook, failing open vault.hashicorp.com: failed calling webhook "vault.hashicorp.com": Post https://vault-injector-svc.vault.svc:443/mutate?timeout=30s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

It turns out the webhook call comes from the managed control plane, which does not respect the CNI provider on the nodes. I had to set the injector Deployment to use hostNetwork: true.
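Concretely, the workaround in the injector Deployment's pod template looks like this (as I understand it, host networking lets the managed control plane reach the webhook without going through the overlay network):

```yaml
spec:
  template:
    spec:
      hostNetwork: true   # webhook traffic from the managed master bypasses the CNI overlay
```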

Hopefully this saves someone else from some pain.

Agent-Inject init-container cannot start in OpenShift due to the SecurityContext

Describe the bug
When using vault-k8s on OpenShift to inject an init-container into a pod, the pod is not able start due to the security context.
The following error happens:
Error creating: pods "testclient-1-xxxxx" is forbidden: unable to validate against any pod security policy: [spec.initContainers[1].securityContext.securityContext.runAsUser: Invalid value: 100: must be in the ranges: [1000250000, 1000259999]

To Reproduce
Install vault-k8s on OpenShift and annotate a pod with the vault.hashicorp.com/agent-inject* annotations to get a secret from Vault. The annotated pod is not able to start due to the security context error above.

Expected behavior
The injected init-container must be able to start.

Environment

  • Vault-k8s Version: 0.1.2
  • Vault Version: 1.3.1
  • Server Operating System/Architecture: RedHat OpenShift Container Platform 3.11

Suggestion

https://github.com/hashicorp/vault-k8s/blob/master/agent-inject/agent/container_init_sidecar.go#L62

  • Make the agent aware of the system it is running in and automatically add or remove the security context.
  • Or make the security context configurable and add a flag to the helm chart (https://github.com/hashicorp/vault-helm)

Support for periodic updating of secrets

According to Agent Sidecar Injector docs, "The init container will prepopulate the shared memory volume with the requested secrets prior to the other containers starting. The sidecar container will continue to authenticate and render secrets to the same location as the pod runs."

This led me to believe the sidecar would automatically update the shared mount as passwords are added or changed in the source Vault over time. However, this has not proven to be the case in my setup, and I can find no documentation suggesting there is a configurable periodic refresh (e.g., a setting to re-sync all passwords every 15 minutes).

Is this possible now, or on the roadmap? I can delete pods and have the shared mount repopulated with updated password values (or just kill the vault-agent container to the same effect) but I would prefer to take advantage of a configurable value handled cleanly by the sidecar if there was one.
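The behaviour being requested amounts to a fixed-interval re-render loop on top of Vault Agent's lease-driven re-rendering. A minimal sketch of the idea (the function names are hypothetical, not part of Vault Agent):

```python
# Hypothetical sketch of the requested feature: re-render secrets on a
# fixed interval, independent of lease rotation.
import time

def render_every(fetch, write, interval_seconds, iterations):
    """Fetch secrets and write them to the shared mount, `iterations` times."""
    for _ in range(iterations):
        write(fetch())          # e.g. read from Vault, write under /vault/secrets
        time.sleep(interval_seconds)
```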

Allow Setting Vault Namespace through Annotation

In order to set a namespace today, you need to embed it in the secret path inside the template annotation:

{{- with secret "ns1/ns2/secret/foo" -}}
        {{.Data.password}}
{{- end }}

Consider adding an annotation that sets this explicitly for the agent, overriding VAULT_NAMESPACE, such as:

vault.hashicorp.com/namespace: "ns1"

Then you would only need to write {{- with secret "secret/foo" -}} and ns1 would be added to the path.

Doc: Injected secrets format is in the form of a Go struct

In the documentation it is mentioned:

If no template is provided the following generic template is used:

{{ with secret "/path/to/secret" }}
    {{ range $k, $v := .Data }}
        {{ $k }}: {{ $v }}
    {{ end }}
{{ end }}

The rendered secret would look like this within the container:

$ cat /vault/secrets/foo
password: A1a-BUEuQR52oAqPrP1J
username: v-kubernet-pg-app-q0Z7WPfVNqqTJuoDqCTY-1576529094

However, this is not really the case in my current tests:

kubectl exec nginx-test-87b7c7746-5cch5 -c nginx-test cat /vault/secrets/helloworld3
data: map[password:cncf username:smana]
metadata: map[created_time:2019-12-21T08:55:22.108925198Z deletion_time: destroyed:false version:1]

Maybe I missed something?
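The nesting in the rendered output suggests a KV version 2 secret: with KV v2 the API response wraps the key/value pairs under a `data` key, so the generic template iterates over `data` and `metadata` rather than the secret's own keys. A template along these lines (a hedged sketch — adjust the path and secret name to your mount) should render only the key/value pairs:

```yaml
vault.hashicorp.com/agent-inject-secret-helloworld3: "secret/data/helloworld3"
vault.hashicorp.com/agent-inject-template-helloworld3: |
  {{- with secret "secret/data/helloworld3" -}}
  {{- range $k, $v := .Data.data }}
  {{ $k }}: {{ $v }}
  {{- end }}
  {{- end }}
```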

What happens if Vault goes down?

What happens if Vault goes down or is otherwise unavailable?

Will K8S still be able to spin up new pods?

Or is it stuck until Vault becomes available again?

Support for Vault Namespace

Vault Enterprise supports Vault namespaces, but it seems that none of the annotations support them.

Something like

vault.hashicorp.com/namespace: "ns2/secret/foo"

It seems that the only way to do so is by mounting my own configuration file via a ConfigMap.

Perhaps I missed something, please advise. Thanks!

Timeout errors in MutatingWebhookConfiguration

In my setup I have Vault server running in one cluster and the Vault-injector in another cluster. I have used the manifest files in /deploy to install the vault-injector.

When a pod is scheduled the MutatingWebhookConfiguration throws timeout errors in a few different flavors and there is nothing to see in the logs for vault-injector.

0s    Warning   FailedCreate   ReplicaSet   Error creating: Internal error occurred: failed calling webhook "vault.hashicorp.com": Post https://vault-agent-injector-svc.vault.svc:443/mutate?timeout=30s: context deadline exceeded
0s    Warning   FailedCreate   ReplicaSet   Error creating: Internal error occurred: failed calling webhook "vault.hashicorp.com": Post https://vault-agent-injector-svc.vault.svc:443/mutate?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Deployment variables

 - name: AGENT_INJECT_LISTEN
              value: ":8080"
            - name: AGENT_INJECT_LOG_LEVEL
              value: "debug"
            - name: AGENT_INJECT_VAULT_ADDR
              value: "https://vault.domain<removed>"
            - name: AGENT_INJECT_VAULT_IMAGE
              value: "vault:1.3.1"
            - name: AGENT_INJECT_TLS_AUTO
              value: vault-agent-injector-cfg
            - name: AGENT_INJECT_TLS_AUTO_HOSTS
              value: "vault-agent-injector-svc,vault-agent-injector-svc.$(NAMESPACE),vault-agent-injector-svc.$(NAMESPACE).svc"

Pod logs

2020-01-13T14:38:08.414Z [INFO]  handler: Starting handler..
Listening on ":8080"...
Updated certificate bundle received. Updating certs...

I can call the vault-injector k8s service directly, so it seems to be running.

I would appreciate it if you could point me in the right direction :)
Thanks

Monitoring and reconciliation

The injector is important for Pods with annotations vault.hashicorp.com/.... To make the injector a less critical component for the cluster, the FailurePolicy for the webhook should be set to Ignore (which is the case in the Helm deployment).

If the injector is unavailable, pods which need the agent will be created but probably fail to run properly. Liveness and readiness probes will not help in this case -- they do not recreate pods. Without looking closely at the resulting pod spec, the only indication for the cause is a log line from the api-server. Metrics are only available for webhooks with a FailurePolicy set to Fail.

This issue is to discuss approaches to monitor and/or reconcile unavailability of the webhook.
Here is one approach:

  • Continuously check for Pods which have the annotation vault.hashicorp.com/agent-inject: "true" but are missing vault.hashicorp.com/agent-inject-status: injected and expose a metric for them (e.g. to trigger alerts)
  • Optionally have the injector delete those pods to trigger a recreation

I'm sure there are many other approaches. Would be interested to hear them!
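The first approach can be sketched as a simple filter over pod metadata (the annotation keys come from the issue text; the function and pod structure are illustrative only):

```python
# Flag pods that requested injection but were never mutated by the webhook.
INJECT = "vault.hashicorp.com/agent-inject"
STATUS = "vault.hashicorp.com/agent-inject-status"

def missing_injection(pods):
    """Return names of pods annotated for injection but lacking the status marker."""
    flagged = []
    for pod in pods:
        ann = pod.get("metadata", {}).get("annotations", {})
        if ann.get(INJECT) == "true" and ann.get(STATUS) != "injected":
            flagged.append(pod["metadata"]["name"])
    return flagged
```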

Support for windows kubernetes workloads

We use Kubernetes (AWS EKS) to manage both Linux and Windows workloads.
We are now looking into using the vault-k8s injector to inject secrets into pods. All of the tutorials seem to cover Linux workloads. Are Windows workloads supported at this point?

Injector with External vault service

Hello, I am testing the integration and found #15, which helped with a few things, but I hit a wall and am struggling to find where to go from here due to lack of information.

So I updated the Vault URL in the injector-deployment.yaml file as suggested, and after deploying all the files I can see my injector attempting to connect. I am also updating the auth path, as mine is custom.

This is all deployed in namespace called vault

I then integrate with the Vault cluster using the following:

k8s_host="$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")"
k8s_cacert="$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 --decode)"
secret_name="$(kubectl get serviceaccount vault-injector -n vault -o go-template='{{ (index .secrets 0).name }}')"
tr_account_token="$(kubectl get secret ${secret_name} -n vault -o go-template='{{ .data.token }}' | base64 --decode)"

Then add the configuration to Vault:
vault write auth/cluster-01/config token_reviewer_jwt="${tr_account_token}" kubernetes_host="${k8s_host}" kubernetes_ca_cert="${k8s_cacert}"

Create a vault role:
vault write auth/cluster-01/role/myapp bound_service_account_names=app bound_service_account_namespaces="*" policies=app ttl=1h
Below is the app config I am using in the test namespace

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  labels:
    app: vault-agent-demo
spec:
  selector:
    matchLabels:
      app: vault-agent-demo
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-helloworld: "secret/helloworld"
        vault.hashicorp.com/role: "myapp"
        vault.hashicorp.com/auth-path: "/auth/cluster-01"
      labels:
        app: vault-agent-demo
    spec:
      serviceAccountName: app
      containers:
        - name: app
          image: nginxdemos/hello
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app
  labels:
    app: vault-agent-demo

Then I get these logs in the pod

2020-02-04T18:20:32.811Z [ERROR] auth.handler: error authenticating: error="Error making API request.
URL: PUT https://uri.company.com/v1/auth/cluster-01/login
Code: 403. Errors:

  • permission denied" backoff=2.0216039

Any ideas?

Curl returns the same response :(

Injector does not inject sidecar container

Hello,
I'm trying to deploy vault with sidecar injector. I'm using this chart: https://github.com/hashicorp/vault-helm
and following this manual: https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar/
the only difference is that I don't use dev server mode.

Everything works fine except the injector. When I deploy an app with injector annotations, then pod starts like usual with one container and with mounted app-token secret, but there is no secondary injector container:

app-57d4f4c645-9npng
Namespace:      my-namespace
Priority:       0
Node:           node
Start Time:     Mon, 06 Jan 2020 16:19:21 +0100
Labels:         app=vault-agent-demo
                pod-template-hash=57d4f4c645
Annotations:    vault.hashicorp.com/agent-inject: true
                vault.hashicorp.com/agent-inject-secret-test: secret/data/test-secret
                vault.hashicorp.com/role: test
Status:         Running
IP:             xxxxxx
IPs:            <none>
Controlled By:  ReplicaSet/app-57d4f4c645
Containers:
  app:
    Container ID:   docker://7348a9d4a9c0c9a3d831d3f84fa078081dcc3648f469aa2b0195b55242d26613
    Image:          jweissig/app:0.0.1
    Image ID:       docker-pullable://jweissig/app@sha256:54e7159831602dd8ffd8b81e1d4534c664a73e88f3f340df9c637fc16a5cf0b7
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 06 Jan 2020 16:19:22 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from app-token-kmzkr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  app-token-kmzkr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  app-token-kmzkr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

There are no errors in logs from vault-agent-injector pod :

2020-01-06T13:55:55.369Z [INFO]  handler: Starting handler..
Listening on ":8080"...
Updated certificate bundle received. Updating certs...

Here is my deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: my-namespace
  labels:
    app: vault-agent-demo
spec:
  selector:
    matchLabels:
      app: vault-agent-demo
  replicas: 1
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/agent-inject-secret-test: "secret/data/test-secret"
        vault.hashicorp.com/role: "test"
      labels:
        app: vault-agent-demo
    spec:
      serviceAccountName: app
      containers:
      - name: app
        image: jweissig/app:0.0.1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
  namespace: my-namespace
  labels:
    app: vault-agent-demo
---
apiVersion: flux.weave.works/v1beta1
kind: HelmRelease
metadata:
  name: vault
  namespace: my-namespace
  annotations:
    flux.weave.works/automated: 'true'
spec:
  chart:
    path: "."
    git: [email protected]:hashicorp/vault-helm.git
    ref: master
  releaseName: vault
  values:
    replicaCount: 1
    server:
      ingress:
        enabled: true
        annotations:
          ....... 
        hosts:
          .......
        tls:
          .......

Is there any way to debug this issue?
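A few things worth checking when the sidecar never appears (hedged suggestions, not an official checklist; the resource names below are the Helm chart defaults and may differ in your install): confirm the MutatingWebhookConfiguration exists and its caBundle is populated, and watch the injector logs while recreating the pod — with AGENT_INJECT_LOG_LEVEL=debug each mutate request should be logged.

```sh
# Does the webhook configuration exist and point at the injector service?
kubectl get mutatingwebhookconfiguration vault-agent-injector-cfg -o yaml

# Watch the injector logs while the pod is (re)created
kubectl logs -n <injector-namespace> deployment/vault-agent-injector -f
```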

Vault init container can't connect to Vault: Connection refused.

Hi!

I was playing around with the secret injection that was released on 19th of December. I'm not yet sure if I have found a bug, or it’s just me doing something wrong:

The setup works, if the deployment is in the same namespace as vault

I went through the blog/tutorial and it all works. I have set up Vault in a Kubernetes namespace “vault” (installed with dev mode enabled). Then the app was deployed in the same namespace. Everything works fine - the init container picks up the secret etc.

Connection refused, if application deployed to default namespace

Then I wanted to deploy the same demo-deployment to the default namespace. To make it work with the policy I have modified the “SA_namespace” from vault to default:

vault write auth/kubernetes/role/myapp \
   bound_service_account_names=app \
   bound_service_account_namespaces=default \
   policies=app \
   ttl=1h

Then I set the context to use the default namespace and deployed the app. However, the Vault init container was not able to log in to Vault:

[ERROR] auth.handler: error authenticating: error="Put http://vault.vault.svc:8200/v1/auth/kubernetes/login: dial tcp 10.74.11.52:8200: connect: connection refused" backoff=1.211726213

What I have investigated (in the default namespace)

  • I attached my terminal to the init container. nslookup works, and the IP and port are correct. However, I couldn't reach any service from the init container (not even www.google.com etc.)
  • I wrote a deployment myself using the same vault:1.3.1 image as an init container and passed sleep 3600 as an argument. After attaching a terminal to this container I could log in to Vault, get the token etc - so everything worked fine.
  • Needless to say, I did the same with a normal "debugging" pod and successfully logged in to Vault, as described here

Need secrets as key and value pair available in pod environment directly

I am looking for a way to make all the secrets fetched from Vault available in a container as environment variables rather than as a file containing key/value pairs.

Is there any way that, when I inject variables from a particular Vault path into a container, I can access them directly as environment variables without referring to them in the env block of the deployment?

My use case is:
Current scenario: I have a few microservices I want to deploy in k8s. The microservices use many variables, which I currently supply in the env block of my deployment via ConfigMaps and Secrets. The problem is that whenever I want to add or update a variable I need to update my deployment as well (either the ConfigMap or the Secret), and developers won't have access to those variables.
Requirement: Now I have started using Vault and created app-specific paths and policies for all the variables the microservices need. Using vault-k8s I inject those variables into my container, which is pretty awesome, but after injection I still have to reference each key in the env block of my deployment, because the agent writes them all to a mounted file. Either the developer writes code to read that file at service start-up, or there needs to be a way to inject the key/value pairs from that file directly into the pod environment. In that case there would be no need to reference keys in the env block of my container, so I could update Vault at any time and the variables would be available in the pod without updating any deployment file.

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/agent-inject-secret-test: kv/devops/demo
    vault.hashicorp.com/role: "demo"
spec:
  serviceAccountName: vault-auth
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    env:
         - name: "CONFIG_FILE"
           value: "/kv/devops/demo/test"

The Vault path kv/devops/demo contains many variables, which are now available in my pod in the mounted file /kv/devops/demo/test. Is there any way to inject those keys and values directly into the pod environment instead of a file, without specifying them in an env block?
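A common workaround (a sketch, not a built-in feature — the secret name, path, and template here are illustrative and assume a KV v1 mount) is to render the secret as a file of export statements and source it in the container's command:

```yaml
annotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/role: "demo"
  vault.hashicorp.com/agent-inject-secret-config: "kv/devops/demo"
  vault.hashicorp.com/agent-inject-template-config: |
    {{- with secret "kv/devops/demo" -}}
    {{- range $k, $v := .Data }}
    export {{ $k }}="{{ $v }}"
    {{- end }}
    {{- end }}
```

The container's command then becomes something like `sh -c '. /vault/secrets/config && exec my-service'`, so the variables land in the process environment without an env block.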

Add support for exposing cert metrics (minted / expiry time etc.)

It would be great if various metrics could be exposed via an endpoint like /metrics for a scraper such as Prometheus. This is useful for monitoring certificate minting and expiry times, and alerting on them if necessary.

Here is an example project which follows a similar pattern of using sidecars to mint and mount certificates from Vault: https://github.com/monzo/vault-sidekick/ — and this commit, monzo/vault-sidekick@2e7365b, exposes metrics in Prometheus time-series format.

If a similar metric were exposed by vault-k8s it would be great, since this sidecar is developed directly by HashiCorp / Vault and would fit in well with other HashiCorp infrastructure.
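As a rough illustration of the Prometheus text format such an endpoint would serve (the metric name and everything else here are assumptions, not part of vault-k8s):

```python
# Render a gauge with the seconds remaining until certificate expiry,
# in Prometheus exposition text format. Metric name is hypothetical.
import time

def render_metrics(cert_not_after_epoch, now=None):
    """Return a Prometheus-style gauge for seconds until cert expiry."""
    now = time.time() if now is None else now
    remaining = cert_not_after_epoch - now
    return (
        "# HELP vault_sidecar_cert_expiry_seconds Seconds until the current certificate expires.\n"
        "# TYPE vault_sidecar_cert_expiry_seconds gauge\n"
        f"vault_sidecar_cert_expiry_seconds {remaining:.0f}\n"
    )
```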

imagePullPolicy for the injected agent image fails admission control

The following message is output when deploying a test application with the vault-k8s annotations:

Pods "vault-k8s-agent-webhook-demo-5b945c994b-g8xfn" is forbidden: spec.initContainers[0].imagePullPolicy: Unsupported value: "IfNotPresent": supported values: "Always"; Deployment does not have minimum availability.

We have admission controllers applied to the cluster in this order:

...,AlwaysPullImages,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,...

We have tried moving AlwaysPullImages to come AFTER MutatingAdmissionWebhook but that didn't help (same error).

Looking at injector-deployment.yaml, I wonder whether it would be possible to expose the imagePullPolicy for the agent image as you do for the injector itself?

      containers:
        - name: sidecar-injector
          {{ template "injector.resources" . }}
          image: "{{ .Values.injector.image.repository }}:{{ .Values.injector.image.tag }}"
          imagePullPolicy: "{{ .Values.injector.image.pullPolicy }}"
          env:
            - name: AGENT_INJECT_LISTEN
              value: ":8080"
            - name: AGENT_INJECT_LOG_LEVEL
              value: {{ .Values.injector.logLevel | default "info" }}
            - name: AGENT_INJECT_VAULT_ADDR
              value: {{ .Values.server.addr }}
            - name: AGENT_INJECT_VAULT_IMAGE
              value: "{{ .Values.injector.agentImage.repository }}:{{ .Values.injector.agentImage.tag }}"
            ...
            MAYBE SOMETHING LIKE THIS ???
            ...
            - name: AGENT_INJECT_VAULT_IMAGE_PULLPOLICY
              value: "{{ .Values.injector.agentImage.pullPolicy}}"

Regards

Fraser Goffin
