
Managed Certificates

Managed Certificates simplify the management of HTTPS traffic. Instead of manually acquiring an SSL certificate from a Certificate Authority, configuring it on the load balancer and renewing it on time, you only need to create a ManagedCertificate custom resource and provide the domains for which you want a certificate. The certificate is obtained and renewed automatically.

For this to work, you need to run your cluster on a platform that provides Google Cloud Load Balancer, that is, a cluster in GKE or your own cluster in GCP.

In GKE all the components are already installed. Follow the how-to for more information. For a GCP setup follow the instructions below.

This feature's status is GA.

Installation

Managed Certificates consist of two parts:

  • managed-certificate-controller, which uses the GCP Compute API to manage certificates that secure your traffic,
  • the ManagedCertificate CRD, which tells the controller what domains you want to secure.

Limitations

  • Managed Certificates support multi-SAN non-wildcard certificates.
  • Managed Certificates are compatible only with GKE Ingress.
  • A single ManagedCertificate supports up to 100 domain names.
  • A single Ingress supports up to 15 certificates, and all types of certificates count towards the limit.
  • The number of certificates in a GCP project is limited by the ssl_certificates quota.
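To see how much of the ssl_certificates quota is currently used, you can query the project quotas. This is a sketch that assumes the gcloud CLI is installed and authenticated:

```shell
# Print the SSL_CERTIFICATES quota entry (limit/usage) for the current
# project. The grep context window is illustrative and may need adjusting.
gcloud compute project-info describe --format="json(quotas)" \
  | grep -B 1 -A 2 '"SSL_CERTIFICATES"'
```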

Prerequisites

  1. You need a Kubernetes cluster with GKE Ingress v1.5.1+.
    • Managed Certificates have been tested against Kubernetes v1.19.0.
    • Kubernetes v1.15+ will most likely work as well.
    • Kubernetes v1.13-v1.15 will most likely work if you enable the CustomResourceWebhookConversion feature; otherwise ManagedCertificate CRD validation will not work properly.
  2. You need to grant permissions to the controller so that it is allowed to use the GCP Compute API.
    • When creating the cluster, add the compute-rw scope to the node where you will run the managed-certificate-controller pod.
    • Alternatively:
      • Create a dedicated service account with minimal roles.
        export NODE_SA_NAME=mcrt-controller-sa
        gcloud iam service-accounts create $NODE_SA_NAME --display-name "managed-certificate-controller service account"
        export NODE_SA_EMAIL=`gcloud iam service-accounts list --format='value(email)' \
        --filter='displayName:managed-certificate-controller'`
        
        export PROJECT=`gcloud config get-value project`
        gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:$NODE_SA_EMAIL \
        --role roles/monitoring.metricWriter
        gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:$NODE_SA_EMAIL \
        --role roles/monitoring.viewer
        gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:$NODE_SA_EMAIL \
        --role roles/logging.logWriter
      • Grant additional role roles/compute.loadBalancerAdmin to your service account.
        gcloud projects add-iam-policy-binding $PROJECT --member serviceAccount:$NODE_SA_EMAIL \
        --role roles/compute.loadBalancerAdmin
      • Export a service account key to a JSON file.
        gcloud iam service-accounts keys create ./key.json --iam-account $NODE_SA_EMAIL
      • Create a Kubernetes Secret that holds the service account key stored in key.json.
        kubectl create secret generic sa-key --from-file=./key.json
      • Mount the sa-key Secret into the managed-certificate-controller pod. In the file deploy/managed-certificate-controller.yaml add:
        • Above the volumeMounts section
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: "/etc/gcp/key.json"
          
        • In the volumeMounts section
          - name: sa-key-volume
            mountPath: /etc/gcp
            readOnly: true
          
        • In the volumes section
          - name: sa-key-volume
            secret:
              secretName: sa-key
              items:
              - key: key.json
                path: key.json
          
  3. Configure your domain example.com so that it points at the load balancer created for your cluster by Ingress. If you add a CAA record to restrict the CAs that are allowed to provision certificates for your domain, note that Managed Certificates currently support:
    • Google Trust Services,
    • Let's Encrypt.
    Additional CAs may become available in the future, and a restrictive CAA record may make it impossible for you to take advantage of them.
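As a sketch of checking your CAA setup (pki.goog and letsencrypt.org are the CAA identities these CAs commonly document; verify against the CAs' current documentation before relying on them):

```shell
# Check existing CAA records for your domain:
dig +short CAA example.com
# Example zone-file entries that would permit both supported CAs:
#   example.com.  CAA  0 issue "pki.goog"
#   example.com.  CAA  0 issue "letsencrypt.org"
```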

Steps

To install Managed Certificates in your own cluster in GCP, you need to:

  1. Deploy the Managed Certificate CRD
    $ kubectl create -f deploy/managedcertificates-crd.yaml
  2. Deploy the managed-certificate-controller. You may want to build your own managed-certificate-controller image and reference it in the deploy/managed-certificate-controller.yaml file. The default image is periodically built by a CI system and may not be stable. Alternatively, you may use gcr.io/gke-release/managed-certificate-controller:v1.2.11, which is deployed in GKE; however, this README likely will not be kept up to date with future GKE updates, so this image may become stale.
    $ kubectl create -f deploy/managed-certificate-controller.yaml

Usage

  1. Create a ManagedCertificate custom object, specifying up to 100 non-wildcard domains, each no longer than 63 characters, for which you want to obtain a certificate:
    apiVersion: networking.gke.io/v1
    kind: ManagedCertificate
    metadata:
      name: example-certificate
    spec:
      domains:
      - example1.com
      - example2.com
  2. Configure Ingress to use this custom object to terminate SSL connections:
    $ kubectl annotate ingress [your-ingress-name] networking.gke.io/managed-certificates=example-certificate

If needed, you can specify multiple managed certificates here, separating their names with commas.
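The 63-character domain limit from step 1 can be sanity-checked locally before creating the object; a minimal shell sketch:

```shell
# Verify that each domain fits the 63-character limit mentioned above.
for d in example1.com example2.com; do
  if [ "${#d}" -le 63 ]; then
    echo "$d: ok"
  else
    echo "$d: longer than 63 characters"
  fi
done
```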

Clean up

You can perform the steps below in any order to turn SSL off:

  • Remove the annotation from Ingress
    $ kubectl annotate ingress [your-ingress-name] networking.gke.io/managed-certificates-
    (note the minus sign at the end of the annotation name; it tells kubectl to remove the annotation)
  • Tear down the controller
    $ kubectl delete -f deploy/managed-certificate-controller.yaml
  • Tear down the Managed Certificate CRD
    $ kubectl delete -f deploy/managedcertificates-crd.yaml

Troubleshooting

  1. Check Kubernetes events attached to ManagedCertificate and Ingress resources for information on temporary failures.

  2. Use the same ManagedCertificate resource at every endpoint to which your domain resolves.

    A real-life example is when your example.com domain points at two IP addresses, one for IPv4 and one for IPv6. You deploy two Ingress objects to handle IPv4 and IPv6 traffic separately. If you create two separate ManagedCertificate resources and attach each of them to one of the Ingresses, one of the ManagedCertificate resources may not be provisioned. The reason is that the Certificate Authority is free to verify challenges on any of the IP addresses the domain resolves to.

  3. Managed Certificates communicate with GKE Ingress using the kubernetes.io/pre-shared-cert annotation. Problems may arise, for instance, if you:

    • forcibly keep clearing this annotation,
    • store a snapshot of Ingress, tear it down and restore Ingress from the snapshot. In the meantime an SslCertificate resource listed in the pre-shared-cert annotation may not be available any more. Ingress has all-or-nothing semantics and will not work if a certificate it references is missing.
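Two sketches related to the items above (ingress and certificate names are placeholders): attaching the same ManagedCertificate to both an IPv4 and an IPv6 Ingress, and inspecting which SslCertificate resources an Ingress currently references:

```shell
# Item 2: point both Ingresses at the same ManagedCertificate so the CA can
# verify challenges on either address.
kubectl annotate ingress my-ipv4-ingress networking.gke.io/managed-certificates=example-certificate
kubectl annotate ingress my-ipv6-ingress networking.gke.io/managed-certificates=example-certificate

# Item 3: read the pre-shared-cert annotation (name as given above; adjust if
# your ingress controller uses a prefixed variant) and compare it with the
# SslCertificate resources that actually exist in the project.
kubectl get ingress my-ingress \
  -o jsonpath='{.metadata.annotations.kubernetes\.io/pre-shared-cert}'
gcloud compute ssl-certificates list
```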

Contributors

krzykwas, michallowicki, sawsa307, wewark, zioproto

gke-managed-certs's Issues

Pre 0.4.2/GKE 1.16.8-gke.3: bug quickly re-creating a ManagedCertificate

Before v0.4.2/GKE 1.16.8-gke.3, a bug in handling fast re-creation of a ManagedCertificate resource can occur with low frequency.

When a ManagedCertificate is deleted, a cleanup process starts, intending to remove the accompanying GCP resource. If the certificate is re-created before the cleanup process has finished, the certificate may become stuck in an invalid state.

Diagnosis:

  1. The certificate has status FailedNotVisible.
  2. In the internal state of the GKE Managed Certificates controller the certificate will be marked SoftDeleted: true; check with $ kubectl describe configmap managed-certificate-config -n kube-system
  3. The Ingress annotation managed-certificates does not include the certificate in this invalid state.
  4. The Ingress annotation pre-shared-cert should not include the certificate in this invalid state either; however, if it is the only certificate attached to the Ingress pre GKE 1.16.0-gke.20, it won't be detached because of a different Ingress issue, and in that case the pre-shared-cert annotation will include the SslCertificate.

Workaround:

You need to delete the ManagedCertificate and allow up to 2 minutes for the GKE Managed Certificates controller to finish the cleanup process.

Pre GKE 1.16.0-gke.20, because Ingress does not release the last certificate, the cleanup process cannot succeed. You have the following options:

  • tear down Ingress,
  • add a temporary ManagedCertificate only to make Ingress detach the one you need to clean up.

To fix the faulty certificate:

  1. detach the ManagedCertificate resource from Ingress (remove from the managed-certificates annotation)
  2. delete the ManagedCertificate
  3. after 2 minutes re-create the ManagedCertificate and attach it to Ingress.
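The three steps above can be sketched as commands (resource names and the manifest file are placeholders):

```shell
# 1. Detach the faulty certificate from Ingress by rewriting the annotation
#    without it (keep any other certificates in the comma-separated list).
kubectl annotate --overwrite ingress my-ingress \
  networking.gke.io/managed-certificates=other-cert
# 2. Delete the faulty ManagedCertificate.
kubectl delete managedcertificate my-cert
# 3. Give the controller up to 2 minutes to finish cleanup, then re-create
#    the certificate and attach it again.
sleep 120
kubectl apply -f my-cert.yaml
kubectl annotate --overwrite ingress my-ingress \
  networking.gke.io/managed-certificates=other-cert,my-cert
```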

Provide own private key and target for saving a cert

I'm not sure if this project is limited to the GKE LB, or whether it may evolve into more generic use.

It would be useful to provide one's own private key from a Kubernetes Secret and specify a target for the cert.

The sslCertificates API resource does support providing a private key and exposes generated certs: https://cloud.google.com/compute/docs/reference/rest/v1/sslCertificates

The use case is to be able to use a managed certificate without an LB, e.g. provisioning public certs for a Kafka cluster.

Something along the lines of:

spec:
  domains:
    - example.com
  providedKeySecret:
    - secret-with-key
  targetCertSecret:
    - secret-example-com-crt

Maybe it's not the intention of this project and I need to look into a jetstack/cert-manager GKE sslCertificates-based issuer.

Controller is unable to provision certificates due to "Insufficient Permission"

When describing the resource:

Warning  BackendError  13s (x36 over 20m)  managed-certificate-controller  googleapi: Error 403: Insufficient Permission: Request had insufficient authentication scopes., insufficientPermissions

I also added the Compute Load Balancer Admin permission to the GKE service account as described here: #7, but am still getting that error.

I'm not really sure what I'm missing here.
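Since the 403 mentions insufficient authentication scopes rather than IAM permissions, one thing worth checking is the node pool's OAuth scopes, which cap what any IAM role can do from that node. A sketch (pool, cluster and zone names are placeholders):

```shell
# List the OAuth scopes of the node pool running the controller; look for
# https://www.googleapis.com/auth/compute (the compute-rw scope).
gcloud container node-pools describe default-pool \
  --cluster my-cluster --zone us-central1-a \
  --format='value(config.oauthScopes)'
```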

Detailed setup how-to by @wbyoung

I managed to get this working today after reviewing this issue and various other issues on this repository. Here's what I had to do:

A few variables that you'll need to customize and that will be used throughout:

PROJECT_ID="account-id-1234"
ACCOUNT_EMAIL="[email protected]"

Download the CRD and controller manifests and define a few patches to use with the controller via Kustomize (note that the config files all end up in a sub-directory called gke and that we leave that directory at the end of these commands).

mkdir gke; cd gke

curl --remote-name-all \
  https://raw.githubusercontent.com/GoogleCloudPlatform/gke-managed-certs/v0.3.0/deploy/managedcertificates-crd.yaml \
  https://raw.githubusercontent.com/GoogleCloudPlatform/gke-managed-certs/v0.3.0/deploy/managed-certificate-controller.yaml

cat > managed-certificate-controller-secrets.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: managed-certificate-controller
spec:
  template:
    spec:
      containers:
        - name: managed-certificate-controller
          env:
          - name: GOOGLE_APPLICATION_CREDENTIALS
            value: "/var/run/credentials/service-account-key.json"
          volumeMounts:
            - name: google-application-credentials
              mountPath: "/var/run/credentials"
              readOnly: true
      volumes:
        - name: google-application-credentials
          secret:
            secretName: gke-managed-certs-credentials
EOF

cat > kustomization.yml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- managedcertificates-crd.yaml
- managed-certificate-controller.yaml
patches:
- managed-certificate-controller-secrets.yaml
EOF

cd ..

The above patch, managed-certificate-controller-secrets.yaml, mounts a volume to access the secret file and defines an environment variable that points to that file (as @bmhatfield showed is possible here). If you don't know much about Kustomize, you can just edit the controller manifest manually. Here's the full manifest with the patch applied if this is confusing to you.

The next block of commands will take care of the following:

  • Create a new service account that can be used for the controller.
  • Create a custom role that will be assigned to that service account.
  • Assign the required permissions (referred to as compute-rw by @krzykwas here and enumerated by @bmhatfield here) to the custom role.
  • Export the keys for the service account so they can later be added as a Kubernetes secret.
gcloud iam service-accounts create gke-managed-certs \
  --display-name "GKE Managed Certs"

gcloud iam roles create gke_managed_certs_role \
  --project $PROJECT_ID \
  --title "GKE Managed Certs Role" \
  --description "Read & write permissions for GKE Managed Certs" \
  --permissions compute.sslCertificates.create,compute.sslCertificates.delete,compute.sslCertificates.get,compute.sslCertificates.list \
  --stage BETA

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:gke-managed-certs@$PROJECT_ID.iam.gserviceaccount.com \
  --role projects/$PROJECT_ID/roles/gke_managed_certs_role

gcloud iam service-accounts keys create ./service-account-key.json \
  --iam-account gke-managed-certs@$PROJECT_ID.iam.gserviceaccount.com

Create the container and get the kubectl context all set up as normal:

gcloud container clusters create \
  --machine-type=g1-small \
  --num-nodes=2 \
  --disk-size=10GB \
  ssl-test

gcloud container clusters get-credentials ssl-test

Now start sending things off to your cluster via kubectl:

  • Create the secret for the service account.
  • Add the cluster-admin role to the executing user as explained here.
  • Deploy the GKE Managed Certs CRD and controller w/ the patches applied by using kustomize.
  • Deploy an app alongside the Ingress w/ the SSL annotations.
kubectl create secret generic gke-managed-certs-credentials \
  --from-file=./service-account-key.json

kubectl create clusterrolebinding admin-binding \
  --clusterrole=cluster-admin \
  --user=$ACCOUNT_EMAIL

# this creates the CRD & controller w/ our patch
kustomize build ./gke | kubectl apply -f -

# deploy a simple app w/ certs
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:1.0
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 8080
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: ssl-test
spec:
  domains:
    - ssl-test.my-domain.com
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: ssl-test2
spec:
  domains:
    - ssl-test2.my-domain.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ssl-test
  annotations:
    networking.gke.io/managed-certificates: "ssl-test,ssl-test2"
spec:
  backend:
    serviceName: hello-world
    servicePort: 8080
EOF

kubectl get ingress -w

Now wait for your load balancer to be created & assigned an external IP address. At that point, you can update your DNS records to point to that IP & wait for the SSL cert to become active.

If you want to tear this down so you don't get billed:

kubectl delete service hello-world # allows the load balancer to be deleted
gcloud container clusters delete ssl-test

Note that this does not delete the service account/role/keys that were created. Feel free to do that if you wish.

Originally posted by @wbyoung in #9 (comment)

Domains need to have A record, CNAME does not work

Hello!

I don't know if this is a known issue, although I've faced it using gke-managed-certs.
If the domain used for the cert is a CNAME record, then it does not work, even though it resolves to the LB IP address.

Status:
  Certificate Name:    XXXX
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  api.k8s.ansible.london
    Status:  FailedNotVisible
Events:      <none>

Let's Encrypt, as far as I know, allows CNAME records, so I would assume this is a managed-certs issue.

No frontend configured on load balancer

Pretty excited to see this out there, so maybe I jumped the gun a bit, but I can't get it to work with my cluster.

Upgraded cluster master to 1.10.7-gke.2, waited for that to propagate to all pods. Created Custom Resource Definition and Controller (removed the serviceAccountName: test-account line within the controller so it should just use default account).

Created the object:

apiVersion: gke.googleapis.com/v1alpha1
kind: ManagedCertificate
metadata:
  name: api-test-certificate
spec:
  domains:
    - apitest.mydomain.co

Edited my Ingress, deleted and created, so now based on this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-test-load-balancer
    kubernetes.io/ingress.allow-http: "false"      
    gke.googleapis.com/managed-certificates: api-test-certificate
spec:
  rules:
  ...

Result: LB gets created, but within GCP Dashboard I see the message 'This load balancer has no frontend configured.'

Within the K8s engine under the api-ingress details it seems stuck at 'creating ingress'.

I'm probably missing something critical!

Errors you may encounter when upgrading the library

(The purpose of this report is to alert GoogleCloudPlatform/gke-managed-certs to the possible problems when GoogleCloudPlatform/gke-managed-certs try to upgrade the following dependencies)

An error will happen when upgrading library prometheus/client_golang:

github.com/prometheus/client_golang

-Latest Version: v1.7.1 (Latest commit fe7bd95 7 days ago)
-Where did you use it:
https://github.com/GoogleCloudPlatform/gke-managed-certs/search?q=prometheus%2Fclient_golang%2Fprometheus&unscoped_q=prometheus%2Fclient_golang%2Fprometheus
-Detail:

github.com/prometheus/client_golang/go.mod

module github.com/prometheus/client_golang
require (
	github.com/beorn7/perks v1.0.1
	github.com/cespare/xxhash/v2 v2.1.1
	…
)
go 1.11

github.com/prometheus/client_golang/prometheus/desc.go

package prometheus
import (
	"github.com/cespare/xxhash/v2"
	…
)

This problem was introduced in prometheus/client_golang v1.2.0 (commit 9a2ab94, 16 Oct 2019). You currently use version v1.1.0. If you try to upgrade prometheus/client_golang to version v1.2.0 or above, you will get an error: no package exists at "github.com/cespare/xxhash/v2".

I investigated the release information of these libraries (prometheus/client_golang >= v1.2.0) and found the root cause of this issue:

  1. These dependencies all added Go modules in the recent versions.

  2. They all comply with the specification of "Releasing Modules for v2 or higher" available in the Modules documentation. Quoting the specification:

A package that has migrated to Go Modules must include the major version in the import path to reference any v2+ module. For example, if the repo github.com/my/module migrated to Modules at version v3.x.y, then it should declare its module path with the MAJOR version suffix "/v3" (e.g., module github.com/my/module/v3), and downstream projects should use "github.com/my/module/v3/mypkg" to import this repo's package.

  1. This "github.com/my/module/v3/mypkg" is not a physical path, so earlier versions of Go (those without module awareness) and all third-party tooling (like dep, glide, govendor, etc.) don't handle such import paths correctly. See golang/dep#1962, golang/dep#2139.

Note: creating a new branch is not required. If instead you have been previously releasing on master and would prefer to tag v3.0.0 on master, that is a viable option. (However, be aware that introducing an incompatible API change in master can cause issues for non-modules users who issue a go get -u given the go tool is not aware of semver prior to Go 1.11 or when module mode is not enabled in Go 1.11+).
Pre-existing dependency management solutions such as dep currently can have problems consuming a v2+ module created in this way. See for example dep#1962.
https://github.com/golang/go/wiki/Modules#releasing-modules-v2-or-higher

Solution

1. Migrate to Go Modules.

Go Modules are the general trend of the ecosystem; if you want a better package-upgrade experience, migrating to Go Modules is a good choice.

Migrating to Modules will be accompanied by the introduction of virtual paths (as discussed above).

This "github.com/my/module/v3/mypkg" is not the physical path. So Go versions older than 1.9.7 and 1.10.3 plus all third-party dependency management tools (like dep, glide, govendor, etc) don't have minimal module awareness as of now and therefore don't handle import paths correctly.

Downstream projects might then be negatively affected in their builds if they are module-unaware (Go versions older than 1.9.7 and 1.10.3, or using third-party dependency management tools such as dep, glide, govendor…).

2. Maintain v2+ libraries that use Go Modules in vendor directories.

If GoogleCloudPlatform/gke-managed-certs wants to keep using dependency management tools (like dep, glide, govendor, etc.) and still wants to upgrade the dependencies, it can choose this fix strategy: manually download the dependencies into the vendor directory and handle compatibility (materialize the virtual path or delete the virtual part of the path). Avoid fetching the dependencies by virtual import paths. This may add some maintenance overhead compared to using modules.

As the import paths have different meanings between projects adopting module repos and non-module repos, materializing the virtual path is a better way to solve the issue while ensuring compatibility with downstream module users. A textbook example is provided by the repo github.com/moby/moby:
https://github.com/moby/moby/blob/master/VENDORING.md
https://github.com/moby/moby/blob/master/vendor.conf
In its vendor directory, github.com/moby/moby adds the /vN subdirectory to the corresponding dependencies.
This helps more downstream module users work well with your package.

3. Request upstream to do compatibility processing.

prometheus/client_golang has 1039 module-unaware users on GitHub, such as: AndreaGreco/mqtt_sensor_exporter, seekplum/plum_exporter, arl/monitoring…
https://github.com/search?q=prometheus%2Fclient_golang+filename%3Avendor.conf+filename%3Avendor.json+filename%3Aglide.toml+filename%3AGodep.toml+filename%3AGodep.json

Summary

You can make a choice when you meet these dependency-management issues by balancing your own development schedule and mode against the effects on downstream projects.

For this issue, Solution 1 maximizes your benefits with minimal impact on your downstream projects and the ecosystem.

References

Do you plan to upgrade the libraries in the near future?
I hope this issue report can help you ^_^
Thank you very much for your attention.

Best regards,
Kate

FailedNotVisible for domains w/ DNS managed by CloudFlare (proxy)

Some details:

  • I have a domain synd.io.
  • This domain's DNS is managed by CloudFlare.
  • An A record for ce-staging points to my external static IP 34.98.108.89, but dig resolves to the CloudFlare proxy IPs:
kpurdon@syndio: ~ dig ce-staging.synd.io.

; <<>> DiG 9.10.6 <<>> ce-staging.synd.io.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3902
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;ce-staging.synd.io.		IN	A

;; ANSWER SECTION:
ce-staging.synd.io.	215	IN	A	104.25.113.9
ce-staging.synd.io.	215	IN	A	104.25.112.9

;; Query time: 25 msec
;; SERVER: 192.168.86.1#53(192.168.86.1)
;; WHEN: Mon Mar 16 18:43:01 MDT 2020
;; MSG SIZE  rcvd: 79
  • I created a managed certificate
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: ceweb-staging
spec:
  domains:
    - ce-staging.synd.io
  • I created an ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ceweb
  annotations:
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: ceweb
    networking.gke.io/managed-certificates: ceweb-staging
spec:
  backend:
    serviceName: ceweb
    servicePort: 80

Everything works, except that the domain status always results in FailedNotVisible. Is this an undocumented limitation, a misunderstanding by me, or a misconfiguration by me?

Please let me know if I can provide any more details.

Better name for default serviceaccount and crb

The installation instructions tell you to install with the YAML files in the deploy directory. These will create a ServiceAccount called test-account and a ClusterRoleBinding called test-binding. Once this project is fully ready for folks to use, it would be better if the names were more specific, say, gke-managed-certs.

the server could not find the requested resource (get managedcertificates.networking.gke.io aa-example-certificate)

I am trying to automate our Kubernetes deployments and to validate that things work. To that end, I create a managed certificate on GKE using the following YAML. The creation of the cert is fine and it is active.

apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: aa-example-certificate
spec:
  domains:
    - aa.xxx.io

Now I am trying to access it using the following code.

I imported "github.com/GoogleCloudPlatform/gke-managed-certs/e2e/client"

and added the following piece of code:

        var clients *client.Clients
        clients, err := client.New("default")
        if err != nil {
            return err
        }
        mcrt, err := clients.ManagedCertificate.Get("aa-example-certificate")
        if err != nil {
            return err
        }
        fmt.Println(mcrt)

I get the following error:
the server could not find the requested resource (get managedcertificates.networking.gke.io aa-example-certificate)

How do I get the cert info?

Automatically delete certificates when cluster or ingress is shutdown

When shutting down a cluster, the certificates associated with the ingresses are not deleted and tend to accumulate. I figured this out when reaching the quota of 30 certs. No new ones were created, and it took me time to realize it was a quota issue.

Does it make sense to add an option on the Ingress object annotation to delete a cert whenever the Ingress or the whole cluster is deleted?

Or maybe at the level of the GCP load balancer?

Managed certificate is ignored

Hello

I created a fresh GKE cluster with version 1.12.6-gke.10.
Then I followed the how-to: creating the managed certificate, service, ingress, external IP and DNS name all worked fine.
I also verified that the domain name resolves to the IP of the load balancer.

However, after the LB is created, nothing happens: kubectl describe managedcertificate shows 'Events: '. The LB is listening only on port 80.
Is there any way to debug this?

ManagedCertificate is not created

Hello.

I deployed a ManagedCertificate custom object, but the ManagedCertificate is not created.

My Kubernetes version is the following:
v1.12.7-gke.10

The deployed ManagedCertificate and Ingress are the following:

apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: cert
spec:
  domains:
    - test.hoge.com
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "ingress-ip"
    networking.gke.io/managed-certificates: "cert"
spec:
  backend:
    serviceName: test
    servicePort: 80

If you already have a solution, please let me know.
Thank you.

Certificate provisioning stuck on FAILED_NOT_VISIBLE

I got the controller to see the ingress annotations and it issued a few certificates; however, they have been stuck in FAILED_NOT_VISIBLE for a few hours now. The DNS is controlled in the same GCP project by Cloud DNS and is resolvable and reachable publicly, so I'm not sure what the issue might be. Any extra information that might help?

Doesn't detect annotation

Hi !

I'm trying to use gke-managed-certs. I followed the documentation, and when I deploy the controller I see this log:

attempting to acquire leader lease kube-system/managed-certificate-controller...

Is this normal?

Support: What auto installs this?

I'm getting bad performance in my cluster from this... It assumes I use the Google ingress, have port 6443 open, have no cert-manager installed and configured to auto-renew LE certs, and have unlimited Google SSL quota... none of that is true.

Is this auto installed by Google? Is this auto installed by gitlab?

Rebase gke-certificate-controller image to distroless

As part of the effort described in KEP:Rebase images to distroless, we need to rebase gke-certificate-controller image to distroless.

The action items include:

  1. Replace glog with klog in the source code.
  2. Remove shell dependencies in the Dockerfile:
    run.sh pipes stdout and redirects the log to /var/log/managed_certificate_controller.log. After klog is applied, we can use the klog flag "--log-file=/var/log/managed_certificate_controller.log" to replace the shell command, and then no bash script is required.

Wildcard certificate

Hello! Just one question: is support for wildcard certificates an envisaged feature?

Controller Hangs Acquiring Leader Lease

I've deployed the CRD and controller and created a ManagedCertificate, but I see that the controller hangs while trying to acquire the leader lease. Are there docs on how to debug what's going on?

 ❮❮❮ kubectl logs -f managed-certificate-controller-595455848-zll6h
I0515 14:54:04.495562       1 main.go:62] managed-certificate-controller v1.1.0 starting. Latest commit hash: 5be47971d44fad4dd4c7b25fa8374ac65171898c
I0515 14:54:04.495655       1 main.go:65] argv[0]: "/managed-certificate-controller"
I0515 14:54:04.495663       1 main.go:65] argv[1]: "-v=3"
I0515 14:54:04.495669       1 main.go:67] Flags = {APIServerHost: GCEConfigFilePath: KubeConfigFilePath: PrometheusAddress::8910}
I0515 14:54:04.499112       1 config.go:170] Using default TokenSource
I0515 14:54:04.499151       1 config.go:103] TokenSource: &oauth2.reuseTokenSource{new:google.computeSource{account:"", scopes:[]string(nil)}, mu:sync.Mutex{state:0, sema:0x0}, t:(*oauth2.Token)(nil)}, projectID: gadic-310112
W0515 14:54:04.499204       1 client_config.go:614] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0515 14:54:04.502203       1 leaderelection.go:243] attempting to acquire leader lease kube-system/managed-certificate-controller...
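
One way to debug a hang like this (a sketch only; depending on the controller version the lock may be stored as a ConfigMap or a Lease, and the object name is taken from the log line above) is to inspect the leader-election record directly:

```shell
# Show who currently holds the lease; holderIdentity names the owning pod.
kubectl -n kube-system get configmap managed-certificate-controller \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'

# On clusters where the lock is a Lease object instead:
kubectl -n kube-system get lease managed-certificate-controller -o yaml
```

If no holder is recorded and the message never progresses, check RBAC: the controller's service account needs permission to get and update the lock object in kube-system.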

Adding domains to ManagedCertificate doesn't update the certificate

Adding a domain to the domains array in a ManagedCertificate object doesn't update the certificate; instead, an error is logged (visible in kubectl get events):

system 0s Warning   BackendError managedcertificate/iap googleapi: Error 400: The ssl_certificate resource 'projects/project/global/sslCertificates/mcrt-uuid' is already being used by 'projects/project/global/targetHttpsProxies/k8s2-etc', resourceInUseByAnotherResource

My ManagedCertificate object (sans managedFields, etc):

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: iap
  namespace: system
spec:
  domains:
  - tool1.ops.company.internal
  - tool2.ops.company.internal
  - tool3.ops.company.internal
  - tool4.ops.company.internal
  - new-domain-not-working.ops.company.internal
status:
  certificateName: mcrt-uuid
  certificateStatus: Active
  domainStatus:
  - domain: tool1.ops.company.internal
    status: Active
  - domain: tool2.ops.company.internal
    status: Active
  - domain: tool3.ops.company.internal
    status: Active
  - domain: tool4.ops.company.internal
    status: Active
  expireTime: "2021-08-30T17:34:11.000-07:00"

The certificate is indeed in use; however, I expected to be able to add a new domain to it. IIRC this worked a while back, but I don't remember exactly when (unfortunately).

If this is intended, I'd like to request an update to the controller that either rejects the attempt to add a domain to the list or surfaces the error in the ManagedCertificate status object.
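
A possible workaround (a sketch only; resource names are hypothetical, and this assumes the error comes from updating an SslCertificate while it is attached to a target proxy): create a second ManagedCertificate with the full domain list, switch the Ingress to it, and delete the old one once the new certificate is Active.

```shell
# 1. Create a replacement cert that includes the new domain (iap-v2.yaml is hypothetical).
kubectl -n system apply -f iap-v2.yaml

# 2. Point the Ingress at the new cert; both can be listed during the transition.
kubectl -n system annotate ingress my-ingress \
  networking.gke.io/managed-certificates=iap-v2 --overwrite

# 3. Once 'kubectl -n system get managedcertificate iap-v2' reports Active,
#    remove the old certificate.
kubectl -n system delete managedcertificate iap
```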

Using ManagedCertificate with Gateway API

Following the Deploying Gateways post, I've been trying to deploy a Gateway that uses an existing ManagedCertificate.

I have used the ManagedCertificate with an Ingress before, so I assume it is configured correctly.

I then tried to reference it in the Gateway as follows:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: {{ .Values.gateway.name }}
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    allowedRoutes:
      kinds:
      - kind: HTTPRoute
    tls:
      mode: Terminate
      options:
        networking.gke.io/pre-shared-certs: {{ .Values.certificates.managedCertName | quote }}
  addresses:
  - type: NamedAddress
    value: {{ .Values.gateway.staticIPName }}

which results in the Gateway logging the following event/error:

Warning  SYNC    24s (x5 over 8m26s)  sc-gateway-controller  failed to translate Gateway "default/{{managedCertName}}": Error GWCER105: Listener "https" is invalid, err: SslCertificate "global/sslCertificates/{{managedCertName}}" does not exist.                                                                                                                                                                          

Any ideas on how I can fix this?

PS: I also can't remove the Gateway as soon as it's deployed, but that might be a separate issue.

Ingress annotation fails

Steps to reproduce:

  1. Perform steps 1 and 2 in the instructions.
  2. Create a file for the managed certificate custom object configuration.
  3. Create the managed certificate object from the aforementioned file with the command kubectl create -f my-managed-cert-object.yaml
  4. Run the command kubectl annotate ingress [your-ingress-name] gke.googleapis.com/managed-certificates [your-managed-object-name].

Expected result:
Ingress is annotated.

Actual result:
kubectl throws the following error: error: at least one annotation update is required
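
The command in step 4 appears to be missing the = between the annotation key and its value; kubectl annotate requires KEY=VALUE pairs, and when no pair is parsed it reports exactly this error. A corrected invocation (note that the current docs use the networking.gke.io/managed-certificates key rather than the gke.googleapis.com one):

```shell
# '=' joins the annotation key and value; without it kubectl sees no update.
kubectl annotate ingress [your-ingress-name] \
  gke.googleapis.com/managed-certificates=[your-managed-object-name]
```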

"managed-certificate-role" is forbidden

Cannot deploy the controller due to the following error:

$ kubectl apply -f managed-certificate-controller.yaml

serviceaccount "managed-certificate-account" created
clusterrolebinding.rbac.authorization.k8s.io "managed-certificate-binding" created
deployment.apps "managed-certificate-controller" created
Error from server (Forbidden): error when creating "managed-certificate-controller-old.yaml": clusterroles.rbac.authorization.k8s.io "managed-certificate-role" is forbidden: attempt to grant extra privileges: [{[] [gke.googleapis.com] [managedcertificates] [] []} {[] [] [configmaps] [] []} {[] [] [events] [] []}] user=&{[email protected] [system:authenticated] map[user-assertion.cloud.google.com:[APTNk9TTsJ4paIpwW7+/0xgKOISypeb+QUmyw9+4nzABDyxYrW+nnCS+kSKxWi0+dPy65pNZUX2scM0nDAjMd1hgcyrtKFPxasWi1a+DEOD9pslJgXAdTcNMS1d0/vqsZd8jTKBUCmUVy7MZl+vy6TR4eJkykvM/rcZnBjXj70IGbStbCE5GIc5ceg4fiiqhzPAZYUQitMTzTckytgGtNgCVBL3LSwr0jN8dX2wn4jNqTZyItA==]]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]}] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "managed-certificate-role" not found]

If I continue and create a certificate:

$ kubectl describe mcrt/my-certificate

Warning BackendError 7m (x24 over 16m) managed-certificate-controller googleapi: Error 403: Insufficient Permission, insufficientPermissions

Thanks for your help.

CRD group and version have changed

Tried running the controller as-is from deploy/, but it was initially complaining about:

E0206 19:37:48.359527      10 reflector.go:134] github.com/GoogleCloudPlatform/gke-managed-certs/pkg/clientgen/informers/externalversions/factory.go:117: Failed to list *v1beta1.ManagedCertificate: managedcertificates.networking.gke.io is forbidden: User "system:serviceaccount:perimeter:managed-certificate-account" cannot list managedcertificates.networking.gke.io at the cluster scope

So I fixed RBAC, and then

I0206 19:41:18.214788      10 reflector.go:169] Listing and watching *v1beta1.ManagedCertificate from github.com/GoogleCloudPlatform/gke-managed-certs/pkg/clientgen/informers/externalversions/factory.go:117
E0206 19:41:18.217584      10 reflector.go:134] github.com/GoogleCloudPlatform/gke-managed-certs/pkg/clientgen/informers/externalversions/factory.go:117: Failed to list *v1beta1.ManagedCertificate: the server could not find the requested resource (get managedcertificates.networking.gke.io)

I then changed the group to networking.gke.io and version to v1beta1 in the CRD and it started working again. Might be worth double checking the upstream code for reflector.
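
The fix described above corresponds to a CRD stanza along these lines (a sketch; the field layout follows the pre-v1 apiextensions API in use at the time):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: managedcertificates.networking.gke.io
spec:
  group: networking.gke.io      # was gke.googleapis.com in the stale manifest
  version: v1beta1
  scope: Namespaced
  names:
    kind: ManagedCertificate
    plural: managedcertificates
    shortNames:
      - mcrt
```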

Docker tags for older versions are deleted

Hi,

Today I noticed the managed certificate controller in one of our clusters was failing because it couldn't pull v0.3.0. I know it's not the most recent version, but I would expect tagged versions to stick around for more than a few months (it looks like this one is ~4 months old).

Maybe tag retention can be changed so "released" versions are kept longer? Or, if not, document how long we can expect tagged versions to stay around?

More than one mcrt gets created

I have a support ticket on this, Case 19503956. It looks like in some cases more than one mcrt resource gets created. One of them ends up working, but then I have some stale extra resources.


kubectl apply -f demo.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: demo-ingress-with-managed-cert
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: example-certificate
  namespace: demo-ingress-with-managed-cert
spec:
  domains:
    - demo.example
---
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport-service
  namespace: demo-ingress-with-managed-cert
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: demo-ingress-with-managed-cert
  annotations:
    networking.gke.io/managed-certificates: example-certificate
spec:
  rules:
  - host: demo.example
    http:
      paths:
      - backend:
          serviceName: example-nodeport-service
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: demo-ingress-with-managed-cert
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: nginx
          protocol: TCP

Insufficient permission

Hi! When I look at the logs inside the managed-cert pod I see this:

googleapi: Error 403: Insufficient Permission: Request had insufficient authentication scopes.

Is this a bug or just a misconfiguration?
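
As the Prerequisites note, the node running the controller needs the compute-rw OAuth scope; a 403 with "insufficient authentication scopes" usually means the node pool was created without it. Scopes cannot be changed on an existing pool, so one option (cluster, pool, and zone names below are placeholders) is to create a new pool with the scope and run the controller there:

```shell
# Create a node pool whose nodes carry the compute-rw scope in addition to the defaults.
gcloud container node-pools create compute-rw-pool \
  --cluster my-cluster --zone us-central1-a \
  --scopes gke-default,compute-rw
```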

Multiple Certs not working: Only picks up the first managedcert in the list

The following config for my ingress creates a LB with only the first cert. If I swap them around, I get the other one:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: om-static-ip
    networking.gke.io/managed-certificates: om-ssl-google-managed,om-no-www-ssl-google-managed
  name: om-prod-ssl
  namespace: default
spec:
  rules:
  - host: www.temp-om.simply.co.za
    http:
      paths:
      - backend:
          serviceName: om-tenandsix-prod
          servicePort: 8080
  - host: temp-om.simply.co.za
    http:
      paths:
      - backend:
          serviceName: om-tenandsix-prod
          servicePort: 8080

The resulting annotations copied from the ingress on cloud console:

ingress.gcp.kubernetes.io/pre-shared-cert: mcrt-7d7ac878-3a4f-4fe7-b23d-483813bb6ac0
ingress.kubernetes.io/backends: {"k8s-be-30009--4d15a37c4c5becdc":"HEALTHY","k8s-be-31353--4d15a37c4c5becdc":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-om-prod-ssl--4d15a37c4c5becdc
ingress.kubernetes.io/https-forwarding-rule: k8s-fws-default-om-prod-ssl--4d15a37c4c5becdc
ingress.kubernetes.io/https-target-proxy: k8s-tps-default-om-prod-ssl--4d15a37c4c5becdc
ingress.kubernetes.io/ssl-cert: mcrt-7d7ac878-3a4f-4fe7-b23d-483813bb6ac0
ingress.kubernetes.io/target-proxy: k8s-tp-default-om-prod-ssl--4d15a37c4c5becdc
ingress.kubernetes.io/url-map: k8s-um-default-om-prod-ssl--4d15a37c4c5becdc
kubernetes.io/ingress.global-static-ip-name: om-static-ip
networking.gke.io/managed-certificates: om-ssl-google-managed,om-no-www-ssl-google-managed

Adding >15 certs to GKE ingress

I'm using managed certs (1 domain/cert) with a Kubernetes Ingress on GKE, and am running into the limit of not being able to add more than 15 certs to the Ingress.

Is there a workaround to point more than 15 domains at a GKE cluster, or is that not possible using managed certs?

When using a different class of ingress, certificate status is `FailedNotVisible`

Running on Kubernetes 1.14.8-gke.12 with nginx-ingress 1.26.2, managed certificates fail. DNS resolves and DNSSEC is working. If I use the default gce ingress, it actually works.

cert.yaml

---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: www-certificate
spec:
  domains:
    - www.domain.se

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: www-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: www-domain-com # is a regional address for nginx
    networking.gke.io/managed-certificates: www-certificate
spec:
  rules:
    - host: www.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: www
              servicePort: 8080

$ kubectl describe managedcertificates.networking.gke.io www-certificate                                            
Name:         www-certificate
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-12-09T10:25:46Z
  Generation:          3
  Resource Version:    2967605
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/root-nesta-se-cert
  UID:                 434f78d9-1a6e-11ea-816a-42010aa6014e
Spec:
  Domains:
    www.domain.om
Status:
  Certificate Name:    mcrt-55d0485c-dc0c-4796-8ec7-1af1d5aba472
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  www.domain.com
    Status:  FailedNotVisible
Events:      <none>

ManagedCertificate has no status

I used the YAML below to create a managed certificate and ingress:

apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: custom-int-certificate
  namespace: default
spec:
  domains:
    - custom.lr.com

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-custom
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "custom-integrations-api-address"
    networking.gke.io/managed-certificates: custom-int-certificate
spec:
  rules:
  - http:
      paths:
      - path: /pss
        backend:
          serviceName: pss
          servicePort: 80
      - path: /healthz/pss
        backend:
          serviceName: pss
          servicePort: 80

And "custom-int-certificate" and "ingress-custom" were created successfully, but it does not work! When I describe "custom-int-certificate", I found that it has no "status" field and "Events" is none; the result is as below:
Name:         custom-int-certificate
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.gke.io/v1beta1","kind":"ManagedCertificate","metadata":{"annotations":{},"name":"custom-int-certificate","namesp...
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-05-22T06:24:08Z
  Generation:          1
  Resource Version:    1621347
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/custom-int-certificate
  UID:                 3474d1e1-7c5a-11e9-9235-42010a800008
Spec:
  Domains:
    custom.lr.com
Events:  <none>

Is there any way to fix this? My GKE version is 1.12.6-gke.10.

Failing to provision: FAILED_NOT_VISIBLE

Following https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs

I created an Ingress and 2 certs. I had a bunch of problems, and I don't know the exact sequence, so I'll just write it as a friction log

  • Created both certs (I thought, see below)
  • Create Ingress referencing them
  • Added DNS name -> Ingress
  • Ingress is up on HTTP
  • HTTPS failing, wait 20 minutes
  • HTTPS still failing
  • describe ingress shows it is up
  • describe managedcertificate shows last event "Create SslCertificate mcrt-..."
  • No HTTPS TargetProxy was created (seen in UI)
  • Cloud console UI for cert shows: FAILED_NOT_VISIBLE. In fact the k8s resource also shows this. But the domain is up and running.
  • Realize I messed up the certs and only applied 1 of the 2
  • Now 40 minutes later, the first cert is still provisioning with no events or updates
  • Delete both certs and start over - recreate them both
  • Still provisioning 20 minutes later
  • Still no HTTPS TargetProxy created
  • describe ingress shows nothing out of sorts
  • Finally after 20 minutes I get FAILED_NOT_VISIBLE again. Still no HTTPS TargetProxy

The DNS resolves properly. Accessing it over HTTP is hitting the correct backend and redirecting to HTTPS.

I am at a loss at this point.

How to use a cert with TLS 1.0?

We have some old machines that connect to our backend over HTTPS and previously used a wildcard certificate. Now we want to switch to a Google-managed certificate, but this also requires an SSL policy in GCP that forces traffic to use TLS 1.2. How can I get a managed certificate that supports TLS 1.0?

Required 'compute.sslCertificates.get' permission

Hello,

I am getting the following error after deploying the managed certificate.

Running kubectl describe mcrt/my-cert shows the following error at the bottom:

Type     Reason        Age                From                            Message
  ----     ------        ----               ----                            -------
  Warning  BackendError  52s (x20 over 6m)  managed-certificate-controller  googleapi: Error 403: Required 'compute.sslCertificates.get' permission for 'projects/****/global/sslCertificates/mcrt-4b7f61f7-8645-4f21-873c-23130d13adec', forbidden
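
A 403 like this often means the nodes lack the required OAuth scope or the node service account lacks Compute permissions. One way to check the scopes (a sketch; pool, cluster, and zone names are placeholders):

```shell
# List the OAuth scopes the node pool was created with; the controller needs
# https://www.googleapis.com/auth/compute (the compute-rw alias) to manage certs.
gcloud container node-pools describe default-pool \
  --cluster my-cluster --zone us-central1-a \
  --format='value(config.oauthScopes)'
```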

Unable to use TLS 1.3

The default certificate created uses TLS 1.2. Is there any way to use TLS 1.3?

TLS 1.3 is much faster, and browser support is much better than it used to be.

Add a BIG warning saying most people don't need to follow Install instructions

It seems like the install instructions in the README are for a very small percentage of the people trying to run "Kubernetes on GCE". Is that right?

If I'm on GKE, I can easily be misled by the README in this repo and may try to apply the CRD/controller myself, which is a bad idea. Can you please prevent this from happening?

A good way to achieve this could be:

  1. add a big warning telling people not to follow the installation instructions and to follow the docs instead
  2. move the install instructions out of the root README.

Any plans to support DNS Authorization?

Hi - any plans to support DNS Authorization?
Load balancer authorization makes pre-provisioning of these certs for various migration scenarios more challenging.
Thanks.

FORBIDDEN error after certificate creation

I'm creating ingresses with managed certificates as in the example at https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs

I even still have one running on subdomain1.domain.com, and I have been successfully creating ingresses for the other sub-domains, but today I ran into this problem.

kubectl describe managedcertificate -n web-app
Name:         web-app-certificate
Namespace:    web-app
Labels:       <none>
Annotations:  <none>
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2020-01-13T19:39:37Z
  Generation:          2
  Resource Version:    2270
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/web-app/managedcertificates/web-app-certificate
  UID:                 6ea7a4bd-363c-11ea-840c-42010af00146
Spec:
  Domains:
    subdomain2.domain.com
Status:
  Certificate Name:  mcrt-cfb380b2-0b2c-4deb-b264-1e5be4ad259a
  Domain Status:
Events:
  Type     Reason        Age                   From                            Message
  ----     ------        ----                  ----                            -------
  Warning  BackendError  6m9s                  managed-certificate-controller  operation operation-1578944378860-59c0aa2d274a8-547f28d8-6dacdff0 failed: FORBIDDEN
  Warning  BackendError  5m58s                 managed-certificate-controller  operation operation-1578944390237-59c0aa3800bc9-25ad682d-099f8de1 failed: FORBIDDEN
  Warning  BackendError  5m47s                 managed-certificate-controller  operation operation-1578944401176-59c0aa426f7b3-13685221-86c3432c failed: FORBIDDEN
  Warning  BackendError  5m44s                 managed-certificate-controller  operation operation-1578944404387-59c0aa457f52d-456340a9-ecc77fa4 failed: FORBIDDEN
  Warning  BackendError  5m36s                 managed-certificate-controller  operation operation-1578944412291-59c0aa4d092b9-f667224d-1470767b failed: FORBIDDEN
  Warning  BackendError  5m24s                 managed-certificate-controller  operation operation-1578944424029-59c0aa583ad65-b073f0c1-a547e6a6 failed: FORBIDDEN
  Warning  BackendError  5m13s                 managed-certificate-controller  operation operation-1578944435216-59c0aa62e6263-3a24c18d-fe24c347 failed: FORBIDDEN
  Warning  BackendError  5m1s                  managed-certificate-controller  operation operation-1578944446746-59c0aa6de4dbb-bb645422-cdeb522c failed: FORBIDDEN
  Warning  BackendError  4m49s                 managed-certificate-controller  operation operation-1578944458846-59c0aa796f19f-4fd9164f-c53f59d8 failed: FORBIDDEN
  Warning  BackendError  16s (x18 over 4m36s)  managed-certificate-controller  (combined from similar events): operation operation-1578944731319-59c0ab7d48df7-733c60e6-604c77bd failed: FORBIDDEN

Is there any way to get more details beyond FORBIDDEN?

CrashLoopBackoff w/ current build

The current build, docker-pullable://eu.gcr.io/managed-certs-gke/managed-certificate-controller@sha256:e9730fe05cb2827fa3c982c3ed08d3b4e7aeba2619d73dea3213b1aceb9b077a was crashing and I didn't see anything in the container logs. I reverted to a prior revision and had success again.

WWW non-WWW Redirection over HTTPS

Hi!

I'm trying to set up a web server at www.[domain].com. I followed the instructions on this article: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs and got everything working.

There are four ways someone may commonly access my site:

http://[domain].com
http://www.[domain].com
https://[domain].com
https://www.[domain].com

Redirecting http://[domain].com to http://www.[domain].com is easily handled via DNS records. However, right now, running curl https://[domain].com gives an invalid cert error, since my certificate is only configured for www.[domain].com. For many browsers this isn't an issue (e.g., Chrome handled the redirect), but in Firefox, for example, the redirect wasn't handled by my browser. How can I use managed certs to redirect https://[domain].com to https://www.[domain].com? According to Google's documentation: "Managed certificates support a single, non-wildcard domain. Refer to the managed certificates page for information on how to use them." Although I am relatively inexperienced in this field, this seems inconsistent with web best practices, which call for HTTPS and for redirecting [domain].com to www.[domain].com or vice versa. Am I misunderstanding or misusing this service?

Thanks!
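
Since Managed Certificates now support multiple non-wildcard domains per certificate (see Limitations above), one way to avoid the invalid-cert error on the bare domain is to cover both hostnames with a single ManagedCertificate and handle the redirect at the application or load-balancer layer. A sketch with placeholder domains:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: site-certificate
spec:
  domains:
    - example.com
    - www.example.com
```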

Availability in GKE ?

Hi,
Is this only available on Alpha clusters? Any idea when it will be more broadly available on Beta/production clusters?

Error with dependency

Hi, when I try to use the client package with Go, there is an error in apimachinery:

vendor/github.com/GoogleCloudPlatform/gke-managed-certs/pkg/clientgen/clientset/versioned/typed/networking.gke.io/v1beta1/networking.gke.io_client.go:74:32: undefined: serializer.DirectCodecFactory
I'm using the master branch of apimachinery. Which version of apimachinery do you use?

Support more than one domain

spec.domains should support a list of domains since it doesn't support wildcard domains.

Right now it results in an error:

spec.domains in body should have at most 1 items
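
This one-domain limit applied to the early v1beta1 API; with the current networking.gke.io/v1 API, a single ManagedCertificate accepts multiple non-wildcard domains (up to 100, per the Limitations section). For example, with placeholder domains:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: multi-domain-certificate
spec:
  domains:
    - app.example.com
    - api.example.com
```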
