Comments (13)
I'm trying to do pretty much the same thing as you. Could your issue be DNS-related? I see this:

```
lookup vlt.consulvault.172.31.101.63.xip.io on 10.43.0.10:53: no such host
```
from vault-k8s.
I saw that error in the log and am trying to find a way to troubleshoot it. From the node, I am able to communicate with both clusters. If Cluster B cannot send the request out, that means the CNI plugin or some kind of firewall is restricting it.

I will update here if I find something useful. Thank you for your observation.
I have this working. A few things:
- I have two service accounts on the application cluster (Cluster B):
  - `vault-auth`: required for kubernetes-auth. It's in the application namespace (e.g. `test`)
  - `vault-injector`: to manage the webhooks, admission controllers etc. It's in the vault namespace (e.g. `vault`)
- I have Kubernetes auth configured in the Vault cluster (Cluster A) at the default path (`/auth/kubernetes`). As per this comment, using a different auth path requires setting a ConfigMap - looking at doing that next
- Kubernetes auth is configured to use the `vault-auth` service account from Cluster B:
```shell
VAULT_SA_NAME=vault-auth

# Set VAULT_SA_SECRET to the service account you created earlier
export VAULT_SA_SECRET=$(kubectl -n test get sa $VAULT_SA_NAME -o jsonpath="{.secrets[*]['name']}")
echo "Account secret name is $VAULT_SA_SECRET"

# Set VAULT_SA_JWT_TOKEN to the service account JWT used to access the TokenReview API
export VAULT_SA_JWT_TOKEN=$(kubectl -n test get secret $VAULT_SA_SECRET -o jsonpath="{.data.token}" | base64 --decode; echo)
echo "JWT is $VAULT_SA_JWT_TOKEN"

# Set VAULT_SA_CA_CRT to the PEM encoded CA cert used to talk to the Kubernetes API
export VAULT_SA_CA_CRT=$(kubectl -n test get secret $VAULT_SA_SECRET -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
echo "Cert is $VAULT_SA_CA_CRT"

# Set K8S_CONTEXT to the name of the current context
export K8S_CONTEXT=$(kubectl config current-context)
echo "Context is $K8S_CONTEXT"

# Set K8S_HOST to the server of the current context
export K8S_HOST=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$K8S_CONTEXT\")].cluster.server}"; echo)
echo "Host is $K8S_HOST"

# Enable Kubernetes auth
vault auth enable --path kubernetes kubernetes

# Tell Vault how to communicate with the cluster
vault write auth/kubernetes/config \
    token_reviewer_jwt="$VAULT_SA_JWT_TOKEN" \
    kubernetes_host="$K8S_HOST" \
    kubernetes_ca_cert="$VAULT_SA_CA_CRT"
```
There's no mention in the docs of still requiring a `vault-auth` service account for Kubernetes auth, but I'm not sure how it's meant to work otherwise. Perhaps someone else can confirm/deny that it's required?

Edit: Turns out `vault-auth` can be in any namespace (as you'd expect from a ClusterRole); not sure why that wasn't working for me before.
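For anyone reproducing this, the token-reviewer service account is usually bound to the built-in `system:auth-delegator` ClusterRole so Vault can call the TokenReview API. A minimal sketch of the manifests, assuming the `vault-auth`/`test` names above (the binding name is illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-auth
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-auth-tokenreview
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault-auth
    namespace: test
```

Because the binding is a ClusterRoleBinding, the namespace of the subject only determines where the service account lives, which is consistent with the edit above.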
It's just an assumption, but at some point this information needs to be verified with the Vault cluster; that's what would let the Vault cluster determine the correct K8s cluster.

```shell
vault write auth/kubernetes/config \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_host="https://${KUBERNETES_PORT_443_TCP_ADDR}:443" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
```
Technically, we would not be able to use the annotation approach (see this comment) unless the Vault cluster and the application are on the same K8s cluster.

As you mentioned, we still have to mount a custom config file for the sidecar. I am looking for a way to define the auth path in that config file; the official documentation only shows how to define the path with the `vault auth` and `vault login` commands.
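If it helps, the Vault Agent auto-auth stanza does accept a mount path. A hedged sketch of what that HCL config might look like (the `auth/gke2` path and `test` role are illustrative, not from this thread):

```hcl
auto_auth {
  method "kubernetes" {
    # non-default mount path of the kubernetes auth method on the Vault server
    mount_path = "auth/gke2"
    config = {
      role = "test"
    }
  }
}
```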
I followed @stevegore's steps and deployed the Vault injector only in the cluster with the app pods. Then I used the SA in that cluster to configure the Vault auth in the other cluster. I'm getting a different error in GKE:

```
URL: PUT https://<my_server>/v1/auth/kubernetes/login
Code: 500. Errors:

* Post https://kubernetes.default.svc/apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority" backoff=1.734589217
```

Do I need something else on the injector-only cluster to use the TLS-enabled Vault server in the other cluster?

I noticed I could add this annotation to my app pods, but I'm not sure where this Kubernetes resource needs to exist:

```
vault.hashicorp.com/tls-secret: ""
```

Looking at the deploy files for the injector-only configuration, this secret is not created: https://github.com/hashicorp/vault-k8s/tree/master/deploy
It looks to me like your Vault cluster isn't able to talk to your app cluster, which would indicate that your Kubernetes auth isn't fully configured. This line here is meant to store the certificate in the Vault config:

```shell
vault write auth/kubernetes/config \
    token_reviewer_jwt="$VAULT_SA_JWT_TOKEN" \
    kubernetes_host="$K8S_HOST" \
    kubernetes_ca_cert="$VAULT_SA_CA_CRT"
```
I had this little script to check K8s auth from my laptop. Note I've configured the K8s auth endpoint at `auth/kubernetes/xxx`:

```shell
#!/bin/bash
set -e

VAULT_SA_NAME=temp
if ! kubectl get sa | grep $VAULT_SA_NAME; then
    kubectl -n default create sa $VAULT_SA_NAME
fi

# Set VAULT_SA_SECRET to the secret containing the service account credentials
VAULT_SA_SECRET=$(kubectl -n default get sa $VAULT_SA_NAME -o jsonpath="{.secrets[*]['name']}")
echo "Account secret name is $VAULT_SA_SECRET"

# Set VAULT_SA_JWT_TOKEN to the JWT we will validate
VAULT_SA_JWT_TOKEN=$(kubectl -n default get secret $VAULT_SA_SECRET -o jsonpath="{.data.token}" | base64 --decode; echo)
echo "JWT is $VAULT_SA_JWT_TOKEN"

# Set K8S_CONTEXT to the name of the current context
K8S_CONTEXT=$(kubectl config current-context)
echo "Context is $K8S_CONTEXT"

curl \
    --request POST \
    --data "{\"jwt\": \"$VAULT_SA_JWT_TOKEN\", \"role\": \"test\"}" \
    -s "https://vault.q-ctrl.com/v1/auth/kubernetes/$K8S_CONTEXT/login" | jq
```
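For reference, the login body that curl sends is just a two-field JSON object, so it can be inspected without hitting a live Vault. A self-contained illustration (the JWT and role values here are placeholders, not real credentials):

```shell
# Build the body for POST .../v1/auth/kubernetes/<path>/login
JWT="header.payload.signature"   # placeholder service-account JWT
ROLE="test"                      # placeholder Vault role name
PAYLOAD=$(printf '{"jwt": "%s", "role": "%s"}' "$JWT" "$ROLE")
echo "$PAYLOAD"
# → {"jwt": "header.payload.signature", "role": "test"}
```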
Thanks @stevegore for the script. I confirmed the Kubernetes auth config is right by reading it from the Vault CLI:

```
vault read auth/kubernetes/config
```

I can see that the `kubernetes_ca_cert` is there (and refers to the secret in the app cluster) and the `kubernetes_host` is set properly.

I modified your script a little to account for my auth endpoint, namespace and role, but the curl also returns the same error:

```
{
  "errors": [
    "Post https://<my host>/apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority"
  ]
}
```
If I configure the Kubernetes auth from the cluster that has Vault installed, I'm able to use it. But as soon as I change it to use the app cluster's service account, that's when it starts failing. I'm still waiting on official documentation from HashiCorp on how to do the dual-cluster integration, because maybe something was missed there...
Have you tried calling that endpoint directly to see what certificate you're getting? FYI, the equivalent call to the `/tokenreviews` endpoint would look something like this:

```shell
curl -iv --location --request POST 'https://<my host>/apis/authentication.k8s.io/v1/tokenreviews' \
    --header 'Content-Type: application/json' \
    --header 'Authorization: Bearer insertjwtfromserviceaccounthere' \
    --data-raw '{
        "kind": "TokenReview",
        "apiVersion": "authentication.k8s.io/v1",
        "metadata": {
            "name": "sample"
        },
        "spec": {
            "token": "inserttokentovalidatehere"
        }
    }'
```
But I agree that more documentation would be great.
I think I know what was not working: I was setting up the Kubernetes auth mechanism in Vault with `kubernetes_host="$K8S_HOST"` pointing to the K8s master servers in the cluster that has Vault, not the app cluster. I can't get my K8S_HOST from `kubectl config view` like you did because I'm running in GKE and the control plane is managed by Google. I can connect to pods, and they have an env var `KUBERNETES_SERVICE_HOST` that points to their control plane.

I had read here that a firewall needed to be open for the webhook #32 (comment) (I have done this step already), so maybe something similar needs to be open from the vault pods --> K8s masters in the other cluster? I haven't seen anyone mention that port 443 needs to be opened between the two clusters, but that will be my next attempt. I checked every doc referencing the Kubernetes auth setup, and it is very light with regard to multi-cluster setups; most just say `kubernetes_host="$K8S_HOST"` without specifying what that means.

In summary:

- When setting the cluster with Vault in `kubernetes_host`, I get `tokenreviews: x509: certificate signed by unknown authority`
- When setting the cluster with the apps (and vault-injector only) in `kubernetes_host`, I get an I/O timeout. I'm hoping this needs a firewall rule change.
Interesting. FWIW, I'm also running this on GKE. This is still in the early stages, so we haven't yet hardened our cluster with private IPs for the Kubernetes API, which could be where things differ. Our API has a public IP with no firewall restrictions, just authentication.

Not sure if this is helpful to you, but if you're using GKE, you can also go to the console to get the Endpoint and cluster CA certificate.
I'm seeing this same issue with my remote cluster. How'd you get around it?

```
x509: certificate signed by unknown authority
```
I'm also using a GKE private cluster. Can you please elaborate on the steps you took to set up another cluster (the app cluster) to talk with Vault?

I set up the app cluster's K8s auth in Vault using the following commands:

```shell
export KUBE_CA_CERT=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 --decode)
export KUBE_HOST=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.server}')

vault write auth/gke2/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="$KUBE_HOST" \
    kubernetes_ca_cert="$KUBE_CA_CERT"
```
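One detail worth noting: `certificate-authority-data` in a kubeconfig is base64-encoded PEM text, so it must be decoded before being handed to Vault. A tiny self-contained illustration of that decode step (the sample string stands in for a real certificate):

```shell
# certificate-authority-data holds base64-encoded PEM; decoding it
# should yield a '-----BEGIN CERTIFICATE-----' block.
SAMPLE_B64=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
DECODED=$(printf '%s' "$SAMPLE_B64" | base64 --decode)
echo "$DECODED"
# → -----BEGIN CERTIFICATE-----
```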
I used the Vault annotation for the app pod: `vault.hashicorp.com/auth-path: "auth/gke2"`

Now I'm getting the issue below. Can you please provide some pointers on how to fix it?

```
2021-06-16T05:26:01.712Z [INFO] auth.handler: authenticating
2021-06-16T05:27:01.713Z [ERROR] auth.handler: error authenticating: error="context deadline exceeded" backoff=2m34.65s
```
I am on AWS with an EKS cluster for devwebapp-with-annotations and a K3s single-node cluster on an EC2 instance running the Vault server. I was seeing the same error as @msenmurugan, but then noticed that the devwebapp-with-annotations pod has an Istio sidecar, and found in #41 that annotating it with `vault.hashicorp.com/agent-init-first: "true"` resolves this. That fixed the `auth.handler` error for me.