
charts's Introduction

Fairwinds Charts


A repository of Helm charts. Modelled after https://github.com/helm/charts

Testing

All charts are linted and tested using Helm Chart Testing.

Generating docs

Fairwinds charts use helm-docs to automate doc generation. Before pushing your changes, run helm-docs --sort-values-order=file; this will add new values, together with their documentation, to the chart's README. Ideally, document the values via comments inside the values file itself; those comments will end up in the README as well.
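helm-docs picks up comments that use the # -- prefix directly above a value. A minimal sketch (the value name is illustrative, not from any particular chart):

# -- Number of replicas to run for the dashboard
replicaCount: 1

Running helm-docs --sort-values-order=file afterwards adds a matching row to the chart's README.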

Linting

Charts are linted using both the helm lint command and a schema check. This ensures that maintainers, versions, etc. are included.

e2e Testing

Charts are installed into a kind cluster. You can provide a folder called ci with a set of *-values.yaml files to provide overrides for the e2e test.

If you have any prerequisites to a chart install that cannot be performed by helm itself (e.g. manually installing CRDs from a remote location), you can place a shell (not bash) script in the ci folder of your chart. The script must be named exactly: pre-test-script.sh
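For example, a chart with e2e value overrides and a pre-install script might be laid out like this (the chart name is illustrative):

stable/example-chart/
  Chart.yaml
  values.yaml
  ci/
    default-values.yaml
    ingress-values.yaml
    pre-test-script.sh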

Usage

To install a chart from this repo, add it as a Helm repository:

helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm search repo fairwinds-stable
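From there, installing a chart is standard Helm; for example, assuming Helm 3:

helm install rbac-manager fairwinds-stable/rbac-manager --namespace rbac-manager --create-namespace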

Organization

Stable

These charts are considered stable for public consumption and use. See the criteria in the contributing document.

Incubator

These charts are considered alpha or beta and are not intended for public consumption outside of Fairwinds. They are frequently for very specific use-cases and can be broken at any time without warning. There are absolutely no guarantees in this folder.

Scripts

This folder includes scripts for testing the charts and syncing the repo.

Join the Fairwinds Open Source Community

The goal of the Fairwinds Community is to exchange ideas, influence the open source roadmap, and network with fellow Kubernetes users. Chat with us on Slack and join the user group to get involved!

Love Fairwinds Open Source? Share your business email and job title and we'll send you a free Fairwinds t-shirt!

Other Projects from Fairwinds

Enjoying Charts? Check out some of our other projects:

  • Polaris - Audit, enforce, and build policies for Kubernetes resources, including over 20 built-in checks for best practices
  • Goldilocks - Right-size your Kubernetes Deployments by comparing your memory and CPU settings against actual usage
  • Pluto - Detect Kubernetes resources that have been deprecated or will be removed in future versions
  • rbac-manager - Simplify the management of RBAC in your Kubernetes clusters


charts's Issues

Error creating: pods "goldilocks-vpa-install-" is forbidden: error looking up service account default/goldilocks-vpa-install: serviceaccount "goldilocks-vpa-install" not found

I get the following error with helm:
Error creating: pods "goldilocks-vpa-install-" is forbidden: error looking up service account default/goldilocks-vpa-install: serviceaccount "goldilocks-vpa-install" not found

When I looked into the code, I saw the following:


 1 | apiVersion: batch/v1
 2 | kind: Job
 3 | metadata:
 4 |   annotations:
 5 |     helm.sh/hook: 'pre-install,pre-upgrade'
 6 |     helm.sh/hook-delete-policy: 'hook-succeeded,before-hook-creation'
 7 |     helm.sh/hook-weight: '-70'
 8 |     kubectl.kubernetes.io/last-applied-configuration: >
 9 |       {"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{"helm.sh/hook":"pre-install,pre-upgrade","helm.sh/hook-delete-policy":"hook-succeeded,before-hook-creation","helm.sh/hook-weight":"-70"},"labels":{"app.kubernetes.io/component":"vpa-install","app.kubernetes.io/instance":"goldilocks","app.kubernetes.io/managed-by":"Tiller","app.kubernetes.io/name":"goldilocks","helm.sh/chart":"goldilocks-2.3.1"},"name":"goldilocks-vpa-install","namespace":"default"},"spec":{"template":{"metadata":{"labels":{"app.kubernetes.io/component":"vpa-install","app.kubernetes.io/instance":"goldilocks","app.kubernetes.io/managed-by":"Tiller","app.kubernetes.io/name":"goldilocks","helm.sh/chart":"goldilocks-2.3.1"},"name":"goldilocks-vpa-install"},"spec":{"containers":[{"args":["-c","kubectl
10 |       apply -f
11 |       https://raw.githubusercontent.com/kubernetes/autoscaler/e16a0adef6c7d79a23d57f9bbbef26fc9da59378/vertical-pod-autoscaler/deploy/recommender-deployment.yaml\nkubectl
12 |       apply -f
13 |       https://raw.githubusercontent.com/kubernetes/autoscaler/e16a0adef6c7d79a23d57f9bbbef26fc9da59378/vertical-pod-autoscaler/deploy/vpa-beta2-crd.yaml\nkubectl
14 |       apply -f
15 |       https://raw.githubusercontent.com/kubernetes/autoscaler/e16a0adef6c7d79a23d57f9bbbef26fc9da59378/vertical-pod-autoscaler/deploy/vpa-rbac.yaml\n"],"command":["bash"],"image":"quay.io/reactiveops/ci-images:v9-alpine","name":"vpa-install"}],"restartPolicy":"Never","serviceAccountName":"goldilocks-vpa-install"}}}}
16 |   labels:
17 |     app.kubernetes.io/component: vpa-install
18 |     app.kubernetes.io/instance: goldilocks
19 |     app.kubernetes.io/managed-by: Tiller
20 |     app.kubernetes.io/name: goldilocks
21 |     helm.sh/chart: goldilocks-2.3.1
22 |   name: goldilocks-vpa-install
23 |   namespace: default
24 |   resourceVersion: '13458861'
25 |   selfLink: /apis/batch/v1/namespaces/default/jobs/goldilocks-vpa-install
26 |   uid: e8692b6c-64db-41a2-86c8-8f5493210ee7
27 | spec:
28 |   backoffLimit: 6
29 |   completions: 1
30 |   parallelism: 1
31 |   selector:
32 |     matchLabels:
33 |       controller-uid: e8692b6c-64db-41a2-86c8-8f5493210ee7
34 |   template:
35 |     metadata:
36 |       creationTimestamp: null
37 |       labels:
38 |         app.kubernetes.io/component: vpa-install
39 |         app.kubernetes.io/instance: goldilocks
40 |         app.kubernetes.io/managed-by: Tiller
41 |         app.kubernetes.io/name: goldilocks
42 |         controller-uid: e8692b6c-64db-41a2-86c8-8f5493210ee7
43 |         helm.sh/chart: goldilocks-2.3.1
44 |         job-name: goldilocks-vpa-install
45 |       name: goldilocks-vpa-install
46 |     spec:
47 |       containers:
48 |         - args:
49 |             - '-c'
50 |             - >
51 |               kubectl apply -f
52 |               https://raw.githubusercontent.com/kubernetes/autoscaler/e16a0adef6c7d79a23d57f9bbbef26fc9da59378/vertical-pod-autoscaler/deploy/recommender-deployment.yaml
53 |
54 |               kubectl apply -f
55 |               https://raw.githubusercontent.com/kubernetes/autoscaler/e16a0adef6c7d79a23d57f9bbbef26fc9da59378/vertical-pod-autoscaler/deploy/vpa-beta2-crd.yaml
56 |
57 |               kubectl apply -f
58 |               https://raw.githubusercontent.com/kubernetes/autoscaler/e16a0adef6c7d79a23d57f9bbbef26fc9da59378/vertical-pod-autoscaler/deploy/vpa-rbac.yaml
59 |           command:
60 |             - bash
61 |           image: 'quay.io/reactiveops/ci-images:v9-alpine'
62 |           imagePullPolicy: IfNotPresent
63 |           name: vpa-install
64 |           resources: {}
65 |           terminationMessagePath: /dev/termination-log
66 |           terminationMessagePolicy: File
67 |       dnsPolicy: ClusterFirst
68 |       restartPolicy: Never
69 |       schedulerName: default-scheduler
70 |       securityContext: {}
71 |       serviceAccount: goldilocks-vpa-install
72 |       serviceAccountName: goldilocks-vpa-install
73 |       terminationGracePeriodSeconds: 30
74 | status:
75 |   active: 1
76 |   failed: 4
77 |   startTime: '2020-06-05T19:57:26Z'

Lines 52, 55, and 58 contain tabs, which is really strange.

But after fixing those tabs, I get the following error message:

The Job "goldilocks-vpa-install" is invalid:

  • spec.template.metadata.labels[controller-uid]: Invalid value: map[string]string{"app.kubernetes.io/component":"vpa-install", "app.kubernetes.io/instance":"goldilocks", "app.kubernetes.io/managed-by":"Tiller", "app.kubernetes.io/name":"goldilocks", "controller-uid":"e8692b6c-64db-41a2-86c8-8f5493210ee7", "helm.sh/chart":"goldilocks-2.3.1", "job-name":"goldilocks-vpa-install"}: must be '0d8e2040-05af-4e71-a3d4-573382279110'
  • spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"controller-uid":"e8692b6c-64db-41a2-86c8-8f5493210ee7"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: selector not auto-generated

[Insights Agent] Many API version warnings thrown

Describe the bug
Installation of Insights chart results in a series of warnings about deprecated api versions:

W0526 14:43:48.356770   44884 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0526 14:43:51.313009   44884 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0526 14:44:01.902919   44884 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0526 14:44:02.681869   44884 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0526 14:44:37.498849   44884 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0526 14:44:44.142957   44884 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0526 14:44:44.260983   44884 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
W0526 14:44:46.332647   44884 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
W0526 14:44:46.444328   44884 warnings.go:70] rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding

To Reproduce
Steps to reproduce the behavior:

  1. Connect to a cluster of version > 1.17:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.8", GitCommit:"fd5d41537aee486160ad9b5356a9d82363273721", GitTreeState:"clean", BuildDate:"2021-02-17T12:33:08Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
  2. Execute the command:
helm upgrade --install insights-agent fairwinds-stable/insights-agent \
  --version "1.12.*" \
  --create-namespace \
  --namespace insights-agent \
  --set goldilocks.enabled="true" \
  --set kubebench.enabled="true" \
  --set kubehunter.enabled="true" \
  --set kubesec.enabled="true" \
  --set nova.enabled="true" \
  --set pluto.enabled="true" \
  --set polaris.enabled="true" \
  --set rbacreporter.enabled="true" \
  --set trivy.enabled="true" \
  --set insights.organization="<redacted>" \
  --set insights.cluster="<redacted>" \
  --set insights.base64token="<redacted>"  \
  --wait

Expected behavior
I expect no warnings or errors from a successful installation.

CLI Output
See Description above

Environment (please complete the following information):

▶ k version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.8", GitCommit:"fd5d41537aee486160ad9b5356a9d82363273721", GitTreeState:"clean", BuildDate:"2021-02-17T12:33:08Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

▶ helm version
version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}

Additional context
I recommend using Helm's built-in Capabilities object in the charts to select the apiVersion the cluster actually supports, as in the sketch below.
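A minimal sketch of the Capabilities approach in a template (illustrative, not the chart's actual code):

{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1" }}
apiVersion: rbac.authorization.k8s.io/v1
{{- else }}
apiVersion: rbac.authorization.k8s.io/v1beta1
{{- end }}
kind: ClusterRole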

[rbac-manager] Creation of CRDs should be optional

Is your feature request related to a problem? Please describe.
The Helm documentation states that Helm 3 does not support upgrading CRDs.
Helm suggests either using Helm 3's special crds folder or managing the CRDs outside the chart.
In either case, it would be great if the chart supported optional creation of the CRDs.

Describe the solution you'd like
Make the creation of CRDs optional (for backwards compatibility this could be enabled by default).
Also move the CRDs to the Helm 3 crds folder so standard mechanisms like helm install --skip-crds work; a sketch follows below.

Would you accept a PR with these changes?
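A minimal sketch of the proposed toggle, assuming a new installCRDs value (the name is illustrative) wrapping the existing CRD template:

# values.yaml
installCRDs: true  # default, for backwards compatibility

# templates/customresourcedefinition.yaml
{{- if .Values.installCRDs }}
# (existing CRD manifest, unchanged)
{{- end }}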

[tests] Fail immediately if any chart fails the e2e tests

Is your feature request related to a problem? Please describe.
When I change multiple charts and one of them fails, the tests still continue for each of the remaining changed charts.

Describe the solution you'd like
Fail immediately; see the sketch below.

Describe alternatives you've considered

Additional context
Most of our PRs are just one chart, so not a huge deal, but larger PRs are a pain to debug, as you have to scroll through heaps of CircleCI logs
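A minimal sketch of fail-fast behavior in the test script, assuming the changed charts are iterated in a shell loop (the variable name is illustrative); with set -e, the first failing chart aborts the run:

set -e
for chart in ${CHANGED_CHARTS}; do
  ct install --charts "${chart}"
done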

[rbac-manager] Unable to install with Helm 3

Describe the bug
When installing fairwinds-stable/rbac-manager using helm3, validation fails with the following error:
Error: validation: chart.metadata is required

To Reproduce
Steps to reproduce the behavior:

  1. helm install fairwinds-stable/rbac-manager rbac-manager --namespace rbac-manager

Expected behavior
Should install the chart as intended.

CLI Output
Error: validation: chart.metadata is required

Environment (please complete the following information):

  • Helm Version: 3.1.2
  • Kubernetes Version 1.18.0

Additional context
I can't find much information about Helm 3 support in this repository, but I can see that you test for helm 3 support. Am I missing something?

e2e tests all charts every time

When making a branch with a new chart, the e2e tests run on every single chart. This makes the test run unnecessarily long. It should run only for the charts in the changeset; see the sketch below.
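chart-testing can already scope a run to charts changed against a git reference; a sketch, assuming this repo's chart directories:

ct list-changed --chart-dirs stable,incubator --since master
ct install --chart-dirs stable,incubator --since master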

[rbac-manager] Hard-coded runAsUser parameter is incompatible with Openshift 4

Describe the bug
I cannot install rbac-manager using the helm chart. The deployment tries to start pods and fails:

Error creating: pods "rbac-manager-78cbdd9fdb-" is forbidden: unable to validate against any security context constraint: [spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 1200: must be in the ranges: [1000570000, 1000579999]]

To Reproduce
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm install fairwinds-stable/rbac-manager --name rbac-manager --namespace rbac-manager
kubectl describe rs -n rbac-manager

Expected behavior
Pods would start up and run as any user in the acceptable range, or I would be instructed to create a service account and add it to a securityContextConstraint using oc adm policy add-scc-to-user X.

Environment (please complete the following information):

  • Helm Version: v3.1.1
  • Kubernetes Version v1.16.2 (Openshift 4.3.8)

rbac-manager fails when image pulled from private repo

Describe the bug
When installing from a private image repo, I get the following error (seen in 1.5.4, 1.5.5, 1.5.6):
Error: UPGRADE FAILED: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Container

To Reproduce
Any install of rbac-manager using a private image repo.

A possible fix: the indentation of imagePullSecrets has to put it at the pod spec level, in line with the container list, as shown here (https://kubernetes.io/docs/concepts/containers/images/) and in the sketch below.
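For reference, imagePullSecrets belongs on the pod spec, as a sibling of containers, not inside a container (the secret and image names are illustrative):

spec:
  template:
    spec:
      imagePullSecrets:
        - name: my-registry-secret
      containers:
        - name: rbac-manager
          image: registry.example.com/rbac-manager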

capsize has outdated specs

Describe the bug
Installing Capsize results in an error.

To Reproduce
Steps to reproduce the behavior:

  1. run helm install capsize fairwinds-incubator/capsize

Expected behavior
The chart installs successfully

CLI Output

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(CronJob.spec.jobTemplate.spec.template.spec): unknown field "metadata" in io.k8s.api.core.v1.PodSpec, ValidationError(CronJob.spec): unknown field "restartPolicy" in io.k8s.api.batch.v1beta1.CronJobSpec]

Environment (please complete the following information):

  • Helm Version: 3.0.0
  • Kubernetes Version 1.15.7

[goldilocks] Add Route support for Openshift

Is your feature request related to a problem? Please describe.
For this to work on OpenShift, we need Route support along with the existing Ingress support.

Describe the solution you'd like
A route block, just like ingress, that can be enabled for OpenShift; see the sketch at the end of this issue.

Describe alternatives you've considered
N/A

Additional context
N/A
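A minimal sketch of such a template (the helper names and values keys are assumptions, not the chart's actual code):

{{- if .Values.route.enabled }}
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: {{ include "goldilocks.fullname" . }}-dashboard
spec:
  to:
    kind: Service
    name: {{ include "goldilocks.fullname" . }}-dashboard
  port:
    targetPort: http
{{- end }}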

[rbac-manager] update chart to use non-deprecated api version for CRD

Is your feature request related to a problem? Please describe.
The CRD definition apiextensions.k8s.io/v1beta1 is deprecated in Kubernetes 1.16, yet it is still in use by the rbac-manager helm chart.

Describe the solution you'd like
Change apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1 on the first line of the CRD.
https://github.com/FairwindsOps/charts/blob/master/stable/rbac-manager/templates/customresourcedefinition.yaml#L1
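Note that moving to apiextensions.k8s.io/v1 is slightly more than a one-line change: v1 requires a structural schema and a spec.versions list. A sketch of the new header (the details are assumptions based on the rbac-manager CRD):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: rbacdefinitions.rbacmanager.reactiveops.io
spec:
  group: rbacmanager.reactiveops.io
  scope: Cluster
  names:
    kind: RBACDefinition
    plural: rbacdefinitions
  versions:
    - name: v1beta1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true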

[vpa] Support HTTPS proxy in certgen Job

Is your feature request related to a problem? Please describe.

Currently, when admissionController.enabled=true is set, certificates will by default be generated by a pre-install/pre-upgrade hook in admission-controller-certgen.yaml.

The {{ include "vpa.fullname" . }}-certgen Job requires public network access to install packages. In environments where public access is not allowed and an HTTPS proxy must be used, the Job fails:

fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.10/main: network error (check Internet connection and firewall)
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
ERROR: http://dl-cdn.alpinelinux.org/alpine/v3.10/community: network error (check Internet connection and firewall)
(1/1) Installing openssl (1.1.1i-r0)
ERROR: openssl-1.1.1i-r0: network error (check Internet connection and firewall)
1 error; 81 MiB in 39 packages

Describe the solution you'd like

The ability to pass in environment variables and/or additional arguments to the Job's container spec would allow https_proxy=... to be specified to overcome this.
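A minimal sketch of the requested values and the corresponding container env block (the key names are illustrative, not the chart's current API):

admissionController:
  certgen:
    env:
      https_proxy: http://proxy.example.com:3128
      no_proxy: 10.0.0.0/8,.cluster.local

env:
{{- range $name, $value := .Values.admissionController.certgen.env }}
  - name: {{ $name }}
    value: {{ $value | quote }}
{{- end }}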

[rbac-manager] Add ServiceMonitor for prometheus scraping

Is your feature request related to a problem? Please describe.
I would like the helm chart for rbac-manager to include a headless service and prometheus operator servicemonitor to make the full integration experience with rbac-manager and monitoring smoother. Currently we install rbac-manager via helm and then have to apply a service and servicemonitor manifest with the correct labels. As we don't control all of the labels used by the helm chart, any updates to these labels can result in needing to also update the service and servicemonitor.

I am happy to provide a PR with functionality if this is something that would be accepted. I have found that the inclusion of these varies per OSS project (grafana, KIAM, and external-dns to name a few that do) while others choose not to as a ServiceMonitor is not a native k8s construct.

Assuming this is something that would be approved, are the following additions to values.yaml acceptable:

monitoring:
  enabled: <true|false (default)>
  # Defaults to "", i.e. the namespace rbac-manager is installed into.
  # However, some like to keep ServiceMonitors in a different namespace.
  namespace: <"" | namespace>
  interval: 60s
  # Normally path and port are customizable here as well, but it appears the port and
  # path are currently hard-coded to 8080 and /metrics, so I would leave those as they are.

Please let me know. Thanks,
Ryan

incubator/ro-cert-manager race condition with cert-manager chart

When this chart is installed too quickly after versions 0.6+ of the cert-manager chart, the cert-manager validating webhook may not yet be initialized, making cert-manager incapable of creating its ClusterIssuer objects.

The current work-around is:

  • Set the cert-manager.enabled value to false for this chart
  • Install the cert-manager chart separately.
  • Verify that cert-manager is capable of creating ClusterIssuer objects, instead of returning an error that the validating webhook can not satisfy the request.
  • Install this chart.

A potential solution is using a Helm pre-install hook to create a Kubernetes Job that will run a shell loop verifying that ClusterIssuer objects can be created by cert-manager, before this chart attempts to create them; a sketch follows.
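A minimal sketch of that loop (assuming a ClusterIssuer manifest is mounted into the Job; --dry-run=server requires a reasonably recent kubectl):

until kubectl apply --dry-run=server -f /manifests/test-clusterissuer.yaml; do
  echo "cert-manager webhook not ready yet; retrying..."
  sleep 5
done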

[vpa] Allow specifying the image for the certgen container

The image used for certgen is hardcoded into the chart (https://github.com/FairwindsOps/charts/blob/master/stable/vpa/templates/admission-controller-certgen.yaml#L64)

We try to pull all images for our cluster from a centralized repository, so we sync public images into one place. Since we cannot change the image for the certgen used in this chart, we have to add a special rule to allow pulling it from the public repo. Please add a configuration option to the chart to override this image; a sketch follows.
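A minimal sketch of the requested override (the key names are illustrative, not the chart's current API):

admissionController:
  certgen:
    image:
      repository: registry.example.com/mirror/certgen
      tag: v1.0.0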

[letsencrypt-setup] validation failure on spec.acme.solvers.http01.ingress

Describe the bug
In the letsencrypt-setup helm chart, if you set either clusterIssuers.primary.solvers.http.enabled or clusterIssuers.selfsigned.solvers.http.enabled to true, but you do not name a specific ingress class in the corresponding clusterIssuers.*.solvers.http.ingressClass parameter, helm will produce an error stating

Error: ClusterIssuer.certmanager.k8s.io "letsencrypt-selfsigned" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"certmanager.k8s.io/v1alpha1", "kind":"ClusterIssuer", "metadata":map[string]interface {}{"creationTimestamp":"2019-11-26T15:06:49Z", "generation":3, "labels":map[string]interface {}{"chart":"letsencrypt-setup", "heritage":"Tiller", "release":"letsencrypt-setup"}, "name":"letsencrypt-selfsigned", "resourceVersion":"677139", "uid":"5eef9e30-105e-11ea-b735-0ad2a8e3f013"}, "spec":map[string]interface {}{"acme":map[string]interface {}{"email":"<email-name>@fairwinds.com", "privateKeySecretRef":map[string]interface {}{"name":"letsencrypt-setup-selfsigned-private-key"}, "server":"https://acme-staging-v02.api.letsencrypt.org/directory", "solvers":[]interface {}{map[string]interface {}{"http01":map[string]interface {}{"ingress":interface {}(nil)}}}}}, "status":map[string]interface {}{"acme":map[string]interface {}{"lastRegisteredEmail":"<email-name>@fairwinds.com", "uri":"https://acme-staging-v02.api.letsencrypt.org/acme/acct/123456789"}, "conditions":[]interface {}{map[string]interface {}{"lastTransitionTime":"2019-11-26T15:06:56Z", "message":"The ACME account was registered with the ACME server", "reason":"ACMEAccountRegistered", "status":"True", "type":"Ready"}}}}: validation failure list:
spec.acme.solvers.http01.ingress in body must be of type object: "null"

Expected behavior
If clusterIssuers.*.solvers.http.ingressClass isn't set to a specific ingress class, the cluster issuer should be available to http01 challenges on any ingress class (see the sketch below).

Environment (please complete the following information):

  • Helm Version: 2.14.3
  • Kubernetes Version 1.14.8
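The root cause appears to be that the rendered solver contains ingress: null, while the API requires an object. Rendering an empty object when no ingressClass is set would make the issuer accept any ingress class; a sketch of the desired rendered output:

solvers:
  - http01:
      ingress: {}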

[Chart Automation] Move to e2e testing that is setup by Rok8s CircleCI Orb.

The rok8s-scripts orb offers a very streamlined and maintained setup of a Kind cluster. In theory, we should be able to port the testing here over to that setup and still use the chart-test repo.

This would eliminate about 3/4 of the bash that we have hiding in this repo for setting up e2e tests, and make it easier to run e2e tests against multiple versions of Kube.

e2e tests not using `ci/*-values.yaml`

It would seem that the e2e tests are not being run properly. Looking at the fluentd chart, there only seems to be one test run even though there are 3 test values files.

Polaris: Support for custom priority class name

Is your feature request related to a problem? Please describe.
We should have a way to specify the priority for the pod by means of priorityClassName for the polaris-dashboard Kubernetes deployment.

Describe the solution you'd like
A property in values.yaml like priorityClassName that can take the name of an existing priority class that users have created; see the sketch below.

Describe alternatives you've considered

Additional context
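A minimal sketch of the addition (the template path is an assumption):

# values.yaml
priorityClassName: ""

# templates/dashboard-deployment.yaml, in the pod spec
{{- with .Values.priorityClassName }}
priorityClassName: {{ . }}
{{- end }}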

Fix CODEOWNERS

The sub-folders for code ownership are not working as intended.

[goldilocks] Make securityContext for pods configurable

Is your feature request related to a problem? Please describe.
It is always good to run as a non-root/1000 user, but in some cases (OpenShift) the UID for certain applications in a namespace is restricted (e.g. user 11111-2222 can run containers in namespace goldilocks-namespace). In these scenarios the runAsUser needs to be configurable for security purposes.

Describe the solution you'd like
If securityContext is not defined, use the default; otherwise use what is specified. See the sketch at the end of this issue.

Describe alternatives you've considered
N/A

Additional context
N/A
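A minimal sketch of the requested passthrough (the value and indentation are illustrative):

# values.yaml: ships a sane default
securityContext:
  runAsNonRoot: true

# template, in the container spec: use whatever is specified
{{- with .Values.securityContext }}
securityContext:
  {{- toYaml . | nindent 12 }}
{{- end }}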

[vpa] Component specific podAnnotations do not work

Describe the bug
Though the values file has component-specific podAnnotations specified, the templates only use the top-level .Values.podAnnotations, which is incorrect; a sketch of a fix follows at the end of this issue.

Expected behavior
The individual podAnnotations should be usable on the pods

Environment (please complete the following information):

  • Helm Version: [3.2.4]
  • Kubernetes Version [1.18.16]
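A minimal sketch of the fix in a component template, merging the global and component-specific annotations (the recommender path is one example; indentation is illustrative):

metadata:
  annotations:
    {{- with .Values.podAnnotations }}
    {{- toYaml . | nindent 8 }}
    {{- end }}
    {{- with .Values.recommender.podAnnotations }}
    {{- toYaml . | nindent 8 }}
    {{- end }}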

[insights-agent] [goldilocks] Support windows nodes

Is your feature request related to a problem? Please describe.
The VPA install process relies on ci-images, which struggles on Windows nodes. We've seen reports of the following error:

 Warning Failed         16m                  kubelet, akswpool100000h Failed to pull image "quay.io/reactiveops/ci-images:v9-alpine": rpc error: code = Unknown desc = failed to register layer: re-exec error: exit status 1: output: ProcessBaseLayer \\?

Describe the solution you'd like
The ci-images image is probably more than we need for the VPA installer. We could try and find or build a more pared-down image that is windows-friendly.

Describe alternatives you've considered

  • Not supporting Windows 🤷‍♂
  • Fixing the mysterious issue with ci-images
