Vault on Google Kubernetes Engine

This tutorial walks you through provisioning a multi-node HashiCorp Vault cluster on Google Kubernetes Engine.

Cluster Features

  • High Availability - The Vault cluster will be provisioned in multi-server mode for high availability.
  • Google Cloud Storage Storage Backend - Vault's data is persisted in Google Cloud Storage.
  • Production Hardening - Vault is configured and deployed based on the guidance found in the production hardening guide.
  • Auto Initialization and Unsealing - Vault is automatically initialized and unsealed at runtime. Keys are encrypted using Cloud KMS and stored on Google Cloud Storage.

Tutorial

Create a New Project

In this section you will create a new GCP project and enable the APIs required by this tutorial.

Generate a project ID:

PROJECT_ID="vault-$(($(date +%s%N)/1000000))"

Create a new GCP project:

gcloud projects create ${PROJECT_ID} \
  --name "${PROJECT_ID}"

Enable billing on the new project before moving on to the next step.
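Recent gcloud releases can link billing from the CLI (older releases used gcloud beta billing); BILLING_ACCOUNT_ID below is a placeholder for your own billing account ID:

gcloud billing projects link ${PROJECT_ID} \
  --billing-account BILLING_ACCOUNT_ID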

Enable the GCP APIs required by this tutorial:

gcloud services enable \
  cloudapis.googleapis.com \
  cloudkms.googleapis.com \
  container.googleapis.com \
  containerregistry.googleapis.com \
  iam.googleapis.com \
  --project ${PROJECT_ID}

Set Configuration

COMPUTE_ZONE="us-west1-c"
COMPUTE_REGION="us-west1"
GCS_BUCKET_NAME="${PROJECT_ID}-vault-storage"
KMS_KEY_ID="projects/${PROJECT_ID}/locations/global/keyRings/vault/cryptoKeys/vault-init"

Create KMS Keyring and Crypto Key

In this section you will create a Cloud KMS keyring and cryptographic key suitable for encrypting and decrypting Vault master keys and root tokens.

Create the vault kms keyring:

gcloud kms keyrings create vault \
  --location global \
  --project ${PROJECT_ID}

Create the vault-init encryption key:

gcloud kms keys create vault-init \
  --location global \
  --keyring vault \
  --purpose encryption \
  --project ${PROJECT_ID}

Create a Google Cloud Storage Bucket

Google Cloud Storage is used to persist Vault's data and hold encrypted Vault master keys and root tokens.

Create a GCS bucket:

gsutil mb -p ${PROJECT_ID} gs://${GCS_BUCKET_NAME}
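Optionally (this is not part of the original tutorial), enable object versioning on the bucket so accidental overwrites or deletions of Vault's data and the encrypted keys are recoverable:

gsutil versioning set on gs://${GCS_BUCKET_NAME}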

Create the Vault IAM Service Account

An IAM service account is used by Vault to access the GCS bucket and KMS encryption key created in the previous sections.

Create the vault service account:

gcloud iam service-accounts create vault-server \
  --display-name "vault service account" \
  --project ${PROJECT_ID}

Grant access to the vault storage bucket:

gsutil iam ch \
  serviceAccount:vault-server@${PROJECT_ID}.iam.gserviceaccount.com:objectAdmin \
  gs://${GCS_BUCKET_NAME}
gsutil iam ch \
  serviceAccount:vault-server@${PROJECT_ID}.iam.gserviceaccount.com:legacyBucketReader \
  gs://${GCS_BUCKET_NAME}

Grant access to the vault-init KMS encryption key:

gcloud kms keys add-iam-policy-binding \
  vault-init \
  --location global \
  --keyring vault \
  --member serviceAccount:vault-server@${PROJECT_ID}.iam.gserviceaccount.com \
  --role roles/cloudkms.cryptoKeyEncrypterDecrypter \
  --project ${PROJECT_ID}

Provision a Kubernetes Cluster

In this section you will provision a three-node Kubernetes cluster using Google Kubernetes Engine, with every node granted access to the vault-server service account.

Create the vault Kubernetes cluster:

gcloud container clusters create vault \
  --enable-autorepair \
  --machine-type e2-standard-2 \
  --service-account vault-server@${PROJECT_ID}.iam.gserviceaccount.com \
  --num-nodes 3 \
  --zone ${COMPUTE_ZONE} \
  --project ${PROJECT_ID}

Warning: Each node in the vault Kubernetes cluster has access to the vault-server service account. The vault cluster should only be used for running Vault. Other workloads should run on a different cluster and access Vault through an internal or external load balancer.

Provision IP Address

In this section you will create a public IP address that will be used to expose the Vault server to external clients.

Create the vault compute address:

gcloud compute addresses create vault \
  --region ${COMPUTE_REGION} \
  --project ${PROJECT_ID}

Store the vault compute address in an environment variable:

VAULT_LOAD_BALANCER_IP=$(gcloud compute addresses describe vault \
  --region ${COMPUTE_REGION} \
  --project ${PROJECT_ID} \
  --format='value(address)')

Generate TLS Certificates

In this section you will generate the self-signed TLS certificates used to secure communication between Vault clients and servers.

Create a Certificate Authority:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
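The repository ships its own ca-csr.json. For reference, a minimal cfssl CSR file for a CA takes roughly this shape (the field values here are illustrative, not the repo's exact file):

{
  "CN": "Vault CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "O": "vault"
    }
  ]
}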

Generate the Vault TLS certificates:

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname="vault,vault.default.svc.cluster.local,localhost,127.0.0.1,${VAULT_LOAD_BALANCER_IP}" \
  -profile=default \
  vault-csr.json | cfssljson -bare vault
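Likewise, ca-config.json comes with the repository. As a sketch, a signing configuration with a profile matching the -profile=default flag above might look like this (illustrative only):

{
  "signing": {
    "profiles": {
      "default": {
        "expiry": "8760h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}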

Deploy Vault

In this section you will deploy the multi-node Vault cluster using a collection of Kubernetes and application configuration files.

Create the vault secret to hold the Vault TLS certificates:

cat vault.pem ca.pem > vault-combined.pem
kubectl create secret generic vault \
  --from-file=ca.pem \
  --from-file=vault.pem=vault-combined.pem \
  --from-file=vault-key.pem

The vault configmap holds the Google Cloud Platform settings required to bootstrap the Vault cluster.

Create the vault configmap:

kubectl create configmap vault \
  --from-literal api-addr=https://${VAULT_LOAD_BALANCER_IP}:8200 \
  --from-literal gcs-bucket-name=${GCS_BUCKET_NAME} \
  --from-literal kms-key-id=${KMS_KEY_ID}

Create the Vault StatefulSet

In this section you will create the vault statefulset used to provision and manage two Vault server instances.

Create the vault statefulset:

kubectl apply -f vault.yaml
service "vault" created
statefulset "vault" created

At this point the multi-node cluster is up and running:

kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
vault-0   2/2       Running   0          1m
vault-1   2/2       Running   0          1m

Automatic Initialization and Unsealing

In a typical deployment Vault must be initialized and unsealed before it can be used. In our deployment we are using the vault-init container to automate the initialization and unseal steps.

kubectl logs vault-0 -c vault-init
2018/11/03 22:37:35 Starting the vault-init service...
2018/11/03 22:37:35 Get https://127.0.0.1:8200/v1/sys/health: dial tcp 127.0.0.1:8200: connect: connection refused
2018/11/03 22:37:45 Vault is not initialized. Initializing and unsealing...
2018/11/03 22:37:53 Encrypting unseal keys and the root token...
2018/11/03 22:37:53 Unseal keys written to gs://vault-1541283682815-vault-storage/unseal-keys.json.enc
2018/11/03 22:37:53 Root token written to gs://vault-1541283682815-vault-storage/root-token.enc
2018/11/03 22:37:53 Initialization complete.
2018/11/03 22:37:55 Unseal complete.
2018/11/03 22:37:55 Next check in 10s
2018/11/03 22:38:05 Vault is initialized and unsealed.
2018/11/03 22:38:05 Next check in 10s

The vault-init container polls every 10 seconds to ensure each Vault instance is automatically unsealed.

Health Checks

A readiness probe is used to ensure Vault instances are not routed traffic when they are sealed.

Sealed Vault instances do not forward or redirect client requests, even in HA setups.
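For reference, a probe of this kind can be sketched as follows (the repository's vault.yaml contains the authoritative definition). Vault's /v1/sys/health endpoint returns 200 for an active node (and, with standbyok=true, for standbys), 503 when sealed, and 501 when uninitialized, which is why several of the issues below show readiness probes failing with status code 501:

readinessProbe:
  httpGet:
    path: /v1/sys/health?standbyok=true
    port: 8200
    scheme: HTTPS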

Expose the Vault Cluster

In this section you will expose the Vault cluster using an external network load balancer.

Generate the vault service configuration:

cat > vault-load-balancer.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: vault-load-balancer
spec:
  type: LoadBalancer
  loadBalancerIP: ${VAULT_LOAD_BALANCER_IP}
  ports:
    - name: http
      port: 8200
    - name: server
      port: 8201
  selector:
    app: vault
EOF

Create the vault-load-balancer service:

kubectl apply -f vault-load-balancer.yaml

Wait until the EXTERNAL-IP field is populated; it will show <pending> until the load balancer is provisioned:

kubectl get svc vault-load-balancer
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)
vault-load-balancer   LoadBalancer   XX.XX.XXX.XXX   <pending>     8200:31805/TCP,8201:32754/TCP

Smoke Tests

Source the vault.env script to configure the vault CLI to use the Vault cluster via the external load balancer:

source vault.env
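The repository provides vault.env; its effect is roughly equivalent to the following (a sketch, assuming ca.pem is in the current directory and VAULT_LOAD_BALANCER_IP is still set):

export VAULT_ADDR="https://${VAULT_LOAD_BALANCER_IP}:8200"
export VAULT_CACERT="ca.pem"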

Get the status of the Vault cluster:

vault status
Key                    Value
---                    -----
Seal Type              shamir
Initialized            true
Sealed                 false
Total Shares           5
Threshold              3
Version                0.11.4
Cluster Name           vault-cluster-46821b83
Cluster ID             dcd56552-27d0-fa18-4ccc-25b252464971
HA Enabled             true
HA Cluster             https://XX.XX.X.X:8201
HA Mode                standby
Active Node Address    https://XX.XXX.XXX.XX:8200

Logging in

Download and decrypt the root token:

export VAULT_TOKEN=$(gsutil cat gs://${GCS_BUCKET_NAME}/root-token.enc | \
  base64 --decode | \
  gcloud kms decrypt \
    --project ${PROJECT_ID} \
    --location global \
    --keyring vault \
    --key vault-init \
    --ciphertext-file - \
    --plaintext-file - 
)
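With VAULT_TOKEN exported, the vault CLI picks it up automatically. You can verify the token is valid before proceeding:

vault token lookup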

Working with Secrets

The following examples assume Vault 0.11 or later.

vault secrets enable -version=2 -path=secret kv
vault kv put secret/my-secret my-value=s3cr3t
vault kv get secret/my-secret
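To read a single field instead of the whole secret:

vault kv get -field=my-value secret/my-secret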

Clean Up

Ensure you are working with the right project ID:

echo $PROJECT_ID

Delete the project:

gcloud projects delete ${PROJECT_ID}


Issues

Vault is sealed. Unsealing... storage: object doesn't exist

Thanks for putting this together, I love your work!

I'm hoping you can help me resolve the issue I'm having. I've gone through the instructions several times and I keep running into the same "storage: object doesn't exist" error when the init container tries to unseal Vault.

The missing storage object is unseal-keys.json.enc.

For some reason the init container is not able to authenticate to the Vault API, and is therefore unable to generate unseal-keys.json.enc?

The only changes I made to the instructions were to use the us-central region, and I had to remove the --cluster-version flag because 1.11.2-gke.9 is no longer supported.

$ kubectl logs vault-0 -c vault
==> Vault server configuration:

             Api Address: https://35.###.53.###:8200
                     Cgo: disabled
         Cluster Address: https://10.#.1.#:8201
              Listener 1: tcp (addr: "0.0.0.0:8200", cluster address: "0.0.0.0:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "enabled")
               Log Level: (not set)
                   Mlock: supported: true, enabled: true
                 Storage: gcs (HA available)
                 Version: Vault v0.11.4
             Version Sha: 612120e76de651ef669c9af5e77b27a749b0dba3

==> Vault server started! Log data will stream in below:
$ kubectl logs vault-0 -c vault-init
2018/11/27 08:29:46 Starting the vault-init service...
2018/11/27 08:29:46 Get https://127.0.0.1:8200/v1/sys/health: dial tcp 127.0.0.1:8200: connect: connection refused
2018/11/27 08:29:56 Vault is sealed. Unsealing...
2018/11/27 08:29:57 storage: object doesn't exist
2018/11/27 08:29:57 Next check in 10s
2018/11/27 08:30:07 Vault is sealed. Unsealing...
2018/11/27 08:30:07 storage: object doesn't exist
2018/11/27 08:30:07 Next check in 10s
(the same three lines repeat every 10 seconds)

[Question] Should the ca.pem be persisted for further access in KMS?

Hello, and thank you for the very detailed tutorial.
I have a question about keeping the certificate authority certificate.

As long as VAULT_CACERT=ca.pem is set, the Vault cluster can be accessed.
But the cleanup script will remove old temporary files, including ca.pem.
If this file is not persisted, you will end up adding the -tls-skip-verify flag to vault commands; otherwise an x509: certificate signed by unknown authority error will appear.

Question: can this cert file be fetched somehow later (similar to what we do with VAULT_TOKEN)?
Or should the tutorial be updated with instructions on how to encrypt the file and store it under the KMS keyring?
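One possible approach, mirroring how the tutorial already stores the encrypted root token, is an untested sketch like the following, reusing the same KMS key and GCS bucket:

# Encrypt ca.pem with the vault-init KMS key and store it in the bucket.
gcloud kms encrypt \
  --project ${PROJECT_ID} \
  --location global \
  --keyring vault \
  --key vault-init \
  --plaintext-file ca.pem \
  --ciphertext-file - | gsutil cp - gs://${GCS_BUCKET_NAME}/ca.pem.enc

# Later, fetch and decrypt it back to a local file.
gsutil cat gs://${GCS_BUCKET_NAME}/ca.pem.enc | gcloud kms decrypt \
  --project ${PROJECT_ID} \
  --location global \
  --keyring vault \
  --key vault-init \
  --ciphertext-file - \
  --plaintext-file ca.pem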

vault status fails

When running vault status I am getting the following:

Error checking seal status: Get https://x.x.x.x:8200/v1/sys/seal-status: dial tcp x.x.x.x:8200: i/o timeout

I opened a shell on one of the vault pods and ran vault status, which gave me the error below.

Error checking seal status: Get https://127.0.0.1:8200/v1/sys/seal-status: x509: certificate signed by unknown authority

I have verified that the three certs are in the TLS location on the server.

Can you give me any help with this?

Vault pods are stuck in Pending forever

Hi, my Vault pods have been stuck in the Pending state forever after a helm install (HA mode, with integrated storage).

ubuntu@ip-172-31-12-183:~$ kubectl get pods --selector='app.kubernetes.io/name=vault' --namespace='vault'
NAME      READY   STATUS    RESTARTS   AGE
vault-0   0/1     Pending   0          20m
vault-1   0/1     Pending   0          20m

Could anyone please help me get rid of this error?

"https://127.0.0.1:8200/v1/sys/health: x509: certificate signed by unknown authority" during pod startup

While trying to reproduce the tutorial I hit an obsolete config line (I guess), and finally got stuck on the error in the title of this issue.

Changes I made to vault.yaml: the lines

             - name: vault-init
                image: gcr.io/hightowerlabs/vault-init

were changed to

             - name: vault-init
                image: sethvargo/vault-init

After this modification vault-init builds and starts successfully; however, the second image (vault itself) cannot start due to the "certificate signed by unknown authority" issue. I've seen a thread with the same issue (hashicorp/vault#7400), but in the current version of the config (vault.yaml) no similar definitions are present. I'm a newbie at writing Kubernetes configs, so I'm kind of stuck; I would much appreciate a tip on where to make a correction.

As a test I ran a check of the certificate:

openssl verify -verbose -CAfile ca.pem vault.pem vault-combined.pem

which returns OK.

Logs from the kubectl describe pod:

Type     Reason     Age   From               Message
----     ------     ----  ----               -------
Normal   Scheduled  12s   default-scheduler  Successfully assigned default/vault-0 to gke-vault-default-pool-d74029c5-zqwh
Normal   Pulling    12s   kubelet            Pulling image "busybox"
Normal   Pulled     11s   kubelet            Successfully pulled image "busybox" in 244.728664ms (244.755034ms including waiting)
Normal   Created    11s   kubelet            Created container config
Normal   Started    11s   kubelet            Started container config
Normal   Pulling    10s   kubelet            Pulling image "sethvargo/vault-init"
Normal   Pulled     9s    kubelet            Successfully pulled image "sethvargo/vault-init" in 909.476256ms (909.521213ms including waiting)
Normal   Created    9s    kubelet            Created container vault-init
Normal   Started    9s    kubelet            Started container vault-init
Normal   Pulled     9s    kubelet            Container image "hashicorp/vault" already present on machine
Normal   Created    9s    kubelet            Created container vault
Normal   Started    9s    kubelet            Started container vault
Warning  Unhealthy  1s    kubelet            Readiness probe failed: HTTP probe failed with statuscode: 501

Logs from the kubectl logs vault-0 -c vault-init

2023/11/02 19:17:49 Starting the vault-init service...
2023/11/02 19:17:49 Head "https://127.0.0.1:8200/v1/sys/health": dial tcp 127.0.0.1:8200: connect: connection refused
2023/11/02 19:17:59 Head "https://127.0.0.1:8200/v1/sys/health": x509: certificate signed by unknown authority
2023/11/02 19:18:09 Head "https://127.0.0.1:8200/v1/sys/health": x509: certificate signed by unknown authority
2023/11/02 19:18:19 Head "https://127.0.0.1:8200/v1/sys/health": x509: certificate signed by unknown authority

Error initializing core: Failed to lock memory: cannot allocate memory

@sethvargo I ran into an issue with mlock.

Error initializing core: Failed to lock memory: cannot allocate memory

This usually means that the mlock syscall is not available.
Vault uses mlock to prevent memory from being swapped to
disk. This requires root privileges as well as a machine
that supports mlock. Please enable mlock on your system or
disable Vault from using it. To disable Vault from using it,
set the `disable_mlock` configuration option in your configuration
file.

Do you have any recommendations for resolving this issue without introducing any compromises?
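One common fix that avoids disabling mlock entirely is to grant the Vault container the IPC_LOCK capability in its security context. A sketch of the relevant fragment of vault.yaml (the container layout is assumed to match the tutorial's manifest):

containers:
  - name: vault
    securityContext:
      capabilities:
        add: ["IPC_LOCK"]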

Vault pod can't initialize

I am stuck on the step that deploys Vault. Pod is waiting in the Init:0/1 stage forever.

$ kubectl get pods --watch
NAME      READY     STATUS     RESTARTS   AGE
vault-0   0/2       Init:0/1   0          6s

I am not sure how this can be, but the initialization container is waiting because of PodInitializing.

$ kubectl logs vault-0 -c config
Error from server (BadRequest): container "config" in pod "vault-0" is waiting to start: PodInitializing

The StatefulSet tells me the pod was created successfully, though:

Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  9m    statefulset-controller  create Pod vault-0 in StatefulSet vault successful

The error reproduces on 1.10.5-gke.0 as well as on the original GKE cluster version.

CrashLoopBackOff When Deploying Vault

Getting a continuous back-off and restart. The only thing I changed from the walk-through was the GKE cluster version, to 1.11.10-gke.5 (1.11.2-gke.9 threw an error).

501 error?:

Normal   Scheduled  2m30s                 default-scheduler                              Successfully assigned default/vault-0 to gke-vault-default-pool-d388ce56-gcx6
Normal   Pulling    2m30s                 kubelet, gke-vault-default-pool-d388ce56-gcx6  pulling image "busybox"
Normal   Pulled     2m29s                 kubelet, gke-vault-default-pool-d388ce56-gcx6  Successfully pulled image "busybox"
Normal   Created    2m29s                 kubelet, gke-vault-default-pool-d388ce56-gcx6  Created container
Normal   Started    2m29s                 kubelet, gke-vault-default-pool-d388ce56-gcx6  Started container
Normal   Pulling    2m23s                 kubelet, gke-vault-default-pool-d388ce56-gcx6  pulling image "vault:0.11.4"
Normal   Pulled     2m19s                 kubelet, gke-vault-default-pool-d388ce56-gcx6  Successfully pulled image "vault:0.11.4"
Normal   Created    2m18s                 kubelet, gke-vault-default-pool-d388ce56-gcx6  Created container
Normal   Started    2m18s                 kubelet, gke-vault-default-pool-d388ce56-gcx6  Started container
Normal   Pulling    2m4s (x3 over 2m28s)  kubelet, gke-vault-default-pool-d388ce56-gcx6  pulling image "gcr.io/hightowerlabs/vault-init"
Normal   Pulled     2m3s (x3 over 2m24s)  kubelet, gke-vault-default-pool-d388ce56-gcx6  Successfully pulled image "gcr.io/hightowerlabs/vault-init"
Normal   Created    2m3s (x3 over 2m23s)  kubelet, gke-vault-default-pool-d388ce56-gcx6  Created container
Normal   Started    2m3s (x3 over 2m23s)  kubelet, gke-vault-default-pool-d388ce56-gcx6  Started container
Warning  BackOff    2m2s (x3 over 2m16s)  kubelet, gke-vault-default-pool-d388ce56-gcx6  Back-off restarting failed container
Warning  Unhealthy  2m1s (x2 over 2m11s)  kubelet, gke-vault-default-pool-d388ce56-gcx6  Readiness probe failed: HTTP probe failed with statuscode: 501

storage migration check error

Hi Folks,

My Vault cluster was working properly yesterday. To give the master authorized access I had to deploy it again with Terraform code, but now it is giving a storage access issue.

Vault is using GCS as the backend.
I am getting the error below in Stackdriver:

[WARN] storage migration check error: error="failed to read value for "core/migration": googleapi: got HTTP response code 403 with body: AccessDenied: Access denied.

Primary: /namespaces/service account with additional claims does not have storage.objects.get access to the Google Cloud Storage object."

The status of the Vault pod:
containers with unready status: [vault]

Anyone faced this issue?

Tutorial doesn't convert the string id in zsh

In the first step of the tutorial, the PROJECT_ID assignment errors out in zsh when attempting to divide the date string by a number.

$ PROJECT_ID="vault-$(($(date +%s%N)/1000000))"
zsh: bad math expression: operator expected at `N/1000000'

For context

$ zsh --version
zsh 5.3 (x86_64-apple-darwin17.0)
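A portable workaround is to drop the sub-second math entirely; seconds since the epoch is unique enough for a project suffix, and it works in both bash and zsh (and on macOS, whose date lacks %N):

PROJECT_ID="vault-$(date +%s)"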

[Question] - How to enable audit devices to stdout and save in Stackdriver Logs?

Hello,
First of all, thanks for the awesome tutorial. It is very handy.

We have implemented this in our production cluster and were having issues getting the audit device logs to Stackdriver logs.

I have enabled the audit device to stdout by doing the following:

vault audit enable file file_path=stdout

I can confirm it is writing to stdout on the vault container when I check the logs with:

kubectl logs -f vault-0 -c vault

But unfortunately those logs are not being saved in Stackdriver for some reason, and I was not able to find more info on how to enable or troubleshoot it. See picture below for my stackdriver log on the vault container:

[screenshot: Stackdriver log for the vault container, 2018-10-11 2:17 PM]

Thanks in advance for the help.

Sam.

[Self-solved issue] Pulling images from gcr.io

@sethvargo I'm using my private gcr.io registry to host vault-enterprise, and my GKE cluster needs to pull images from GCR, so I updated service_account_iam_roles in the variables.tf file to include roles/storage.objectViewer, which allows the GKE cluster to pull images stored on gcr.io.

variable "service_account_iam_roles" {
  type = "list"

  default = [
    "roles/logging.logWriter",
    "roles/monitoring.metricWriter",
    "roles/monitoring.viewer",
    "roles/storage.objectViewer",
  ]
}

That solved my issue of pulling images stored in gcr.io. This is just for the benefit of others.
