
helm-charts's Introduction

Paralus


Paralus is a free, open source tool that enables controlled, audited access to Kubernetes infrastructure for your users, user groups, and services. It ships as a GUI, API, and CLI. We are a CNCF Sandbox project.

Paralus can be easily integrated with your pre-existing RBAC configuration and your SSO providers, or Identity Providers (IdPs), that support OIDC (OpenID Connect). Through just-in-time service account creation and fine-grained user credential management, Paralus gives teams an adaptable system for guaranteeing secure access to resources when necessary, along with the ability to rapidly identify and respond to threats through dynamic permission revocation and real-time audit logs.


Features

  • Creation of custom roles, users, and groups.
  • Dynamic and immediate changing and revoking of permissions.
  • Ability to control access via pre-configured roles across clusters, namespaces, projects, and more.
  • Seamless integration with Identity Providers (IdPs) allowing the use of external authentication engines for users and group definitions, such as GitHub, Google, Azure AD, Okta, and others.
  • Automatic logging of all user actions performed for audit and compliance purposes.
  • Interact with Paralus through a modern web GUI (default), a CLI tool called pctl, or the Paralus API.


Getting Started

Installing and setting up Paralus takes less time than it takes to brew a (good) cup of coffee! You'll find the instructions in the Paralus documentation.

🤗 Community & Support

  • Check out the Paralus website for the complete documentation and helpful links.
  • Join our Slack workspace to get help and to discuss features.
  • Tweet @paralus_ on Twitter.
  • Create GitHub Issues to report bugs or request features.
  • Join our Paralus Community Meeting, where we share the latest project news, run demos, answer questions, and triage issues. Add it to your calendar by importing the ICS file.
    • 🗓️ 2nd and 4th Tuesday
    • ⏰ 20:30 IST | 10:00 EST | 07:00 PST
    • 🔗 Zoom
    • 🗒️ Meeting minutes

Participation in the Paralus project is governed by the CNCF Code of Conduct.

Contributing

We 💖 our contributors! Have a look at our contributor guidelines to get started.

If you’re looking to add a new feature or functionality, create a new Issue.

You're also very welcome to look at the existing issues. If there's something there that you'd like to help improve, leave a quick comment and we'll go from there!

Authors

This project is maintained & supported by Rafay. Meet the maintainers of Paralus.

helm-charts's People

Contributors

akshay196, akshay196-rafay, elenalape, joibel, mabhi, mcfearsome, meain, nirav-rafay, niravparikh05, nlamirault, plejik, rustiever, vivekhiwarkar


helm-charts's Issues

Warning "coalesce.go:199: warning" at a time of installing ztka chart

The following warnings are shown while installing or upgrading the ztka chart.

coalesce.go:199: warning: cannot overwrite table with non table for extraInitContainers (map[])
coalesce.go:199: warning: cannot overwrite table with non table for extraContainers (map[])
coalesce.go:199: warning: cannot overwrite table with non table for extraInitContainers (map[])
coalesce.go:199: warning: cannot overwrite table with non table for extraContainers (map[])
coalesce.go:199: warning: cannot overwrite table with non table for extraInitContainers (map[])
coalesce.go:199: warning: cannot overwrite table with non table for extraContainers (map[])

Investigate how to get rid of these warnings.

Unable to deploy helm chart due to connection_uri being set to `true`

Expected vs actual behavior

The values.yaml has connection_uri set to a boolean value, which breaks the helm chart upgrade. Apparently this is because the actual value is taken from a Secret, but the secrets.yaml template still expects to pull in a value that can be base64-encoded, and a boolean cannot be.

The problematic area is here:

  kratos:
    development: false
    config:
      version: v0.10.1
      courier:
        smtp:
          # As per the Kratos configuration it should be a string, but the Kratos
          # helm chart is using it only as a flag to set
          # COURIER_SMTP_CONNECTION_URI variable. Actual value is
          # taken from kratos Secret.
          connection_uri: true

Which results in this error message in flux:

group":"helm.toolkit.fluxcd.io","reconciler kind":"HelmRelease","name":"paralus","namespace":"paralus","error":"Helm upgrade failed: template: ztka/charts/kratos/templates/statefulset-mail.yaml:44:12: executing \"ztka/charts/kratos/templates/statefulset-mail.yaml\" at <include \"kratos.annotations.checksum\" .>: error calling include: template: ztka/charts/kratos/templates/_helpers.tpl:184:28: executing \"kratos.annotations.checksum\" at <include (print $.Template.BasePath \"/secrets.yaml\") .>: error calling include: template: ztka/charts/kratos/templates/secrets.yaml:21:76: executing \"ztka/charts/kratos/templates/secrets.yaml\" at <b64enc>: wrong type for value; expected string; got bool"}

It needs to be a string or else the b64 call will fail.

Steps to reproduce the bug

  1. Attempt to run the 0.2.1 helm chart

Are you using the latest version of the project?

I was unable to install the latest version due to this error.

You can check your version by running helm ls|grep '^<deployment-name>' or using pctl, pctl version, and provide the output.

  • 0.2.0 chart

What is your environment setup? Please tell us your cloud provider, operating system, and include the output of kubectl version --output=yaml and helm version. Any other information that you have, eg. logs and custom values, is highly appreciated!

  • k8s on EKS

(optional) If you have ideas on why the bug happens or how it can be solved, please provide it here

Workaround is to actually set the value. Since I was using a secret name as well, I also had to override the smtpConnectionURI secret.
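A minimal values override that avoids the b64enc failure, mirroring the nesting of the snippet above (the SMTP URI and the exact key path are placeholders for your own setup):

  kratos:
    config:
      courier:
        smtp:
          # Must be a string so secrets.yaml can base64-encode it.
          connection_uri: "smtps://username:[email protected]:465"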

  • [x] I've described the bug, included steps to reproduce it, and included my environment setup with all customizations.
  • [ ] I'm using the latest version of the project. (No, because of this issue.)

Support different annotations on an Ingress basis

Briefly describe the feature

Currently, Paralus can be exposed using Ingress resources, specifically:

  • ingress-ztka.yaml, which serves the TLS/SSL traffic
  • ingress-console.yaml, which serves the Dashboard

It would be beneficial to be able to specify annotations per object, rather than having a single global set.

What problem does this feature solve? Please link any relevant documentation or Issues

I'm using the HAProxy Tech Ingress Controller, and the first Ingress object requires a special annotation to offload the TLS/SSL traffic to the pod, whereas the Dashboard one doesn't need this.

However, the Helm Chart doesn't allow specifying specific annotations for each object.

(optional) What is your current workaround?

Manually patching the object after the Helm apply; this doesn't scale with a GitOps approach, since reconciliation reverts the patch and the annotations have to be reapplied.
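A possible values layout for the requested per-Ingress annotations could look like the sketch below; these keys are hypothetical and do not exist in the chart today, and the HAProxy annotation is only an example of what a user might set on one Ingress:

  ingress:
    console:
      annotations: {}
    ztka:
      annotations:
        haproxy.org/ssl-passthrough: "true"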

Postgresql password not being properly output in the DSN

Expected vs actual behavior

I expect the DSN in secrets to include the configured password.

<no value> is output where the password should be.

Steps to reproduce the bug

  1. disable the embedded postgresql install .Values.deploy.postgresql.enable=false
  2. configure the required postgresql variables for building the DSN
  3. run helm template . and inspect the paralus-db secret.

Are you using the latest version of the project?

  chart:
    spec:
      chart: ztka
      version: "~> 0.1.0"
      sourceRef:
        kind: HelmRepository
        name: paralus

Notes

I will open a PR fixing this issue, but it also contains other changes to allow explicitly setting the postgresql DSN.

  • I've described the bug, included steps to reproduce it, and included my environment setup with all customizations.
  • I'm using the latest version of the project.

ztka ingress missing ssl cert reference

Expected vs actual behavior

The ztka ingress yaml doesn't offer the TLS option the way the general console ingress does. This results in issues with clients talking to the relay server:

{"level":"info","ts":"2023-02-28T20:05:52.860Z","caller":"tunnel/client.go:416","msg":"Relay Agent.Client.paralus-core-relay-agent::dial failed network: tcp  addr: 6dfa6800-ac76-4f6b-9635-ac99a7e5b492.core-connector.paralus.iherbpreprod.net:443  err: x509: certificate has expired or is not yet valid: current time 2023-02-28T20:05:52Z is after 2020-09-18T12:00:00Z  "}
{"level":"info","ts":"2023-02-28T20:05:52.860Z","caller":"tunnel/client.go:424","msg":"Relay Agent.Client.paralus-core-relay-agent::action dial out network: tcp  addr: 6dfa6800-ac76-4f6b-9635-ac99a7e5b492.core-connector.paralus.iherbpreprod.net:443  "}
{"level":"info","ts":"2023-02-28T20:05:52.860Z","caller":"tunnel/client.go:460","msg":"Relay Agent.Client.paralus-core-relay-agent::action backoff sleep: 10.224562499s  address: 6dfa6800-ac76-4f6b-9635-ac99a7e5b492.core-connector.paralus.iherbpreprod.net:443  "}

Steps to reproduce the bug

  1. Enable ingress tls
  2. Bootstrap a cluster

Are you using the latest version of the project?

You can check your version by running helm ls|grep '^<deployment-name>' or using pctl, pctl version, and provide the output.

Version 0.2.0

What is your environment setup? Please tell us your cloud provider, operating system, and include the output of kubectl version --output=yaml and helm version. Any other information that you have, eg. logs and custom values, is highly appreciated!

(optional) If you have ideas on why the bug happens or how it can be solved, please provide it here

I had to update the ztka ingress manually.
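For reference, a sketch of the tls section the ztka ingress template would need, mirroring what the console ingress already supports (the wildcard host and secret name below are placeholders):

  spec:
    tls:
      - hosts:
          - "*.core-connector.paralus.example.com"
        secretName: paralus-core-connector-tls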

  • [x] I've described the bug, included steps to reproduce it, and included my environment setup with all customizations.
  • [x] I'm using the latest version of the project.

Deploying Paralus to Autopilot GKE failed - policy violation

Expected vs actual behavior

  • Paralus should install on Autopilot GKE using the Helm chart.

Steps to reproduce the bug

  1. Install Paralus ZTKA helm chart on Autopilot GKE cluster

Are you using the latest version of the project?

You can check your version by running helm ls|grep '^<deployment-name>' or using pctl, pctl version, and provide the output.

  • Helm chart ZTKA v0.1.0

What is your environment setup? Please tell us your cloud provider, operating system, and include the output of kubectl version --output=yaml and helm version. Any other information that you have, eg. logs and custom values, is highly appreciated!

Error log:

Error: release paralus failed, and has been uninstalled due to atomic being set: admission webhook "gkepolicy.common-webhooks.networking.gke.io" denied the request: GKE Policy Controller rejected the request because it violates one or more policies: {"[denied by autogke-no-write-mode-hostpath]":["hostPath volume data in container filebeat is accessed in write mode; disallowed in Autopilot. Requested by user: '[email protected]', groups: 'system:authenticated'.","hostPath volume varlibdockercontainers used in container filebeat uses path /var/lib/docker/containers which is not allowed in Autopilot. Allowed path prefixes for hostPath volumes are: [/var/log/]. Requested by user: '[email protected]', groups: 'system:authenticated'.","hostPath volume varrundockersock used in container filebeat uses path /var/run/docker.sock which is not allowed in Autopilot. Allowed path prefixes for hostPath volumes are: [/var/log/]. Requested by user: '[email protected]', groups: 'system:authenticated'."]}
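For reference, these are the filebeat hostPath volumes from the error above that Autopilot rejects; only read-only hostPath volumes under the /var/log/ prefix are allowed:

  volumes:
    - name: varlibdockercontainers
      hostPath:
        path: /var/lib/docker/containers   # outside /var/log/ -- rejected
    - name: varrundockersock
      hostPath:
        path: /var/run/docker.sock         # outside /var/log/ -- rejected
    # the "data" hostPath volume is additionally mounted read-write, which is also rejected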

(optional) If you have ideas on why the bug happens or how it can be solved, please provide it here

  • Check out the policy autogke-no-write-mode-hostpath.
  • And this: hostPath volume data in container filebeat is accessed in write mode.
  • I've described the bug, included steps to reproduce it, and included my environment setup with all customizations.
  • I'm using the latest version of the project.

Add an option in the values file for configuring different HTTP(S) ports

Briefly describe the feature

I am using Paralus within an infrastructure where there is already an ingress-nginx running on ports 80 and 443. As I currently cannot route the backend wildcard URLs (*.core-connector.example.com, *.user.example.com) with SSL passthrough over my ingress-nginx (see kubernetes/ingress-nginx#9473), I have to run Contour alongside my current ingress.
As ports 80 and 443 are already reserved by my ingress-nginx and I don't want to use a second IP for my cluster, I want to put Contour on two different ports (e.g. 8080 and 4433).
It would be nice to have an option for that in the helm chart, as there are multiple places that need to be configured:

  • Contour-Service
  • Relay-Agent configuration (address the *.core-connector via desired port)
  • Prompt configuration (address the *.user-backend via desired port)
  • Kubeconfig
  • Maybe other service configs (?)

TL;DR:
I need to change the ports Paralus is using for Contour and have the backend services configured accordingly by the helm-chart.
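A hypothetical values shape that would cover the places listed above; the contour keys follow the upstream contour subchart layout (an assumption), while the relay and prompt keys are purely illustrative and do not exist today:

  contour:
    envoy:
      service:
        ports:
          http: 8080
          https: 4433
  relay:
    port: 4433    # used when templating the *.core-connector addresses
  prompt:
    port: 4433    # used when templating the *.user addresses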

What problem does this feature solve? Please link any relevant documentation or Issues

  • Paralus can be used together with an ingress other than Contour
  • There will be more config-options, see #55

(optional) What is your current workaround?

  • A lot of hacky configs and custom images in my own wrapper chart around this chart

relay pod is restarting multiple times when installing the chart

Expected vs actual behavior

Actual:
The relay-server pod restarts multiple times when installing the helm chart.

$ kubectl get pod
NAME                                        READY   STATUS             RESTARTS        AGE
dashboard-858c966799-tqlxw                  1/1     Running            0               6m57s
myrelease-contour-contour-86768d474-6kj44   1/1     Running            0               6m57s
myrelease-contour-envoy-gmwh8               2/2     Running            0               6m57s
myrelease-fluent-bit-v7vs7                  1/1     Running            1 (3m50s ago)   6m57s
myrelease-kratos-6dd4b64b79-8h6vv           1/2     Running            0               6m57s
myrelease-kratos-courier-0                  2/2     Running            0               6m57s
myrelease-postgresql-0                      1/1     Running            0               6m57s
paralus-7694dccf6-v5dkv                     0/2     Init:2/3           0               6m57s
prompt-6d9cf67478-8r65k                     2/2     Running            0               6m57s
relay-server-5bcdfb68bc-d74k4               1/2     CrashLoopBackOff   4 (50s ago)     6m57s

Error messages from the relay-server container:

{"level":"error","ts":"2023-03-30T13:41:52.690Z","caller":"relay/relay.go:652","msg":"Relay Server::failed to register relay with peer-service-bootstrap service, will retry error: context deadline exceeded  ","stacktrace":"github.com/paralus/relay/pkg/relay.relayServerBootStrap\n\t/build/pkg/relay/relay.go:652\ngithub.com/paralus/relay/pkg/relay.RunRelayServer\n\t/build/pkg/relay/relay.go:794"}
{"level":"error","ts":"2023-03-30T13:42:22.691Z","caller":"relay/relay.go:378","msg":"Relay Server::failed to register peering relay error: context deadline exceeded  ","stacktrace":"github.com/paralus/relay/pkg/relay.registerRelayPeerService\n\t/build/pkg/relay/relay.go:378\ngithub.com/paralus/relay/pkg/relay.relayServerBootStrap\n\t/build/pkg/relay/relay.go:650\ngithub.com/paralus/relay/pkg/relay.RunRelayServer\n\t/build/pkg/relay/relay.go:794"}
{"level":"error","ts":"2023-03-30T13:42:22.692Z","caller":"relay/relay.go:652","msg":"Relay Server::failed to register relay with peer-service-bootstrap service, will retry error: context deadline exceeded  ","stacktrace":"github.com/paralus/relay/pkg/relay.relayServerBootStrap\n\t/build/pkg/relay/relay.go:652\ngithub.com/paralus/relay/pkg/relay.RunRelayServer\n\t/build/pkg/relay/relay.go:794"}

Expected:
The pod should not restart; it should wait for the services it depends on to come up.

Steps to reproduce the bug

Install the Paralus helm chart as per the documentation, in any environment.

Are you using the latest version of the project?

What is your environment setup? Please tell us your cloud provider, operating system, and include the output of kubectl version --output=yaml and helm version. Any other information that you have, eg. logs and custom values, is highly appreciated!

All environments.

(optional) If you have ideas on why the bug happens or how it can be solved, please provide it here

The relay pod keeps hitting the paralus endpoint and fails, because the paralus pod takes extra time to come up.
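One way to achieve the expected behaviour would be an init container on the relay-server deployment that blocks until the paralus service answers, for example (a sketch only; the service name and port are assumptions based on the peering URI seen in the relay logs):

  initContainers:
    - name: wait-for-paralus
      image: busybox:1.36
      command:
        - sh
        - -c
        - until nc -z paralus 10001; do echo waiting for paralus; sleep 5; done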

  • I've described the bug, included steps to reproduce it, and included my environment setup with all customizations.
  • I'm using the latest version of the project.

More config options in helm-charts

Briefly describe the feature

  • Don't automatically add sslmode=disable when constructing pg urls (ref)
  • Provide option to specify port along with other values (ref)

What problem does this feature solve? Please link any relevant documentation or Issues

  • More complete config options

(optional) What is your current workaround?

  • Use .Values.deploy.postgresql.dsn instead.
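For example, a full DSN in values means the chart no longer has to assume the port or sslmode (host, credentials, and database name below are placeholders):

  deploy:
    postgresql:
      enable: false
      dsn: "postgres://paralus:[email protected]:5432/paralusdb?sslmode=require"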

Error while trying to reset the default admin user password

Expected vs actual behavior

Trying to reset the password, but getting the error below:
{"code":2,"message":"unable to generate recovery url"}

Steps to reproduce the bug

  1. Fresh install using quick start guide
  2. Click reset password for the admin user

Are you using the latest version of the project?

You can check your version by running helm ls|grep '^<deployment-name>' or using pctl, pctl version, and provide the output.

❯ helm ls -A
NAME     	NAMESPACE	REVISION	UPDATED                              	STATUS  	CHART     	APP VERSION
myrelease	paralus  	1       	2024-07-22 13:06:01.588546 +0200 CEST	deployed	ztka-0.2.9	v0.2.8

What is your environment setup? Please tell us your cloud provider, operating system, and include the output of kubectl version --output=yaml and helm version. Any other information that you have, eg. logs and custom values, is highly appreciated!

(optional) If you have ideas on why the bug happens or how it can be solved, please provide it here

  • [x] I've described the bug, included steps to reproduce it, and included my environment setup with all customizations.
  • [x] I'm using the latest version of the project.

Refactor per branding

  • Change the repo name.
  • Change all references to Rafay.
  • Change the console URLs, deployment names, etc.
  • Update the notes section.

Contour CRDs are installed even when the contour dependency is disabled

This issue is referring to the changes made in PR #14.

The contour subchart addition comes with its CRDs, which are currently placed in the crds/ directory of this chart. The reason we kept the CRDs in our own chart is to solve the "unable to recognize HTTPProxy kind" error (the same error as mentioned in this issue).

These CRDs are installed regardless of whether a user has enabled the contour dependency. The --skip-crds option to the helm install command skips installing CRDs, but currently there is no option to manage CRDs based on the enabled/disabled state of dependencies.

Potential solutions:

  1. Separate chart for CRDs as suggested here.
  2. Apply the --skip-crds flag when installing the chart with the contour dependency turned off.

Not able to view audit logs in the Paralus dashboard / audit_logs table is empty

Describe the issue you're facing

Paralus chart version: 0.2.4

audit-logs enabled with database storage:

  auditLogs:
    storage: "database"
  deploy:
    postgresql:
      enable: true
  kratos:
    kratos:
      development: true
  fluent-bit:
    existingConfigMap: "fluentbit-config"

On the Dashboard, the KUBECTL and SYSTEM logs show as "no data available".

Logs are not being populated into the audit_logs table:

  admindb=> select * from audit_logs;
   tag | time | data
  -----+------+------
  (0 rows)

$ kubectl logs relay-server-768959db4d-qnpcl -c relay-tail -n paralus | grep -i audit
{"level":"info","ts":"2023-05-09T16:33:45.568Z","caller":"tail/run.go:86","msg":"relay server setup values","podName":"relay-server-768959db4d-qnpcl","podNamespace":"paralus","relayPeeringURI":"https://paralus:10001","auditPath":"/audit-logs"}

prompt logs

$ kubectl logs prompt-6768544c99-srnfl -c prompt-tail -n paralus | grep -i audit
{"timestamp":"2023-05-10T08:19:55.035747756Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["Organization Admins","All Local Users"]},"client":{"type":"BROWSER","ip":"10.188.213.208","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get pods","meta":{"cluster_name":"ocp-lz-sbx"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"default"}
{"timestamp":"2023-05-10T08:19:59.158503903Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["Organization Admins","All Local Users"]},"client":{"type":"BROWSER","ip":"10.188.213.208","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get ns","meta":{"cluster_name":"ocp-lz-sbx"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"default"}
{"timestamp":"2023-05-10T08:57:45.046801579Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["group-test"]},"client":{"type":"BROWSER","ip":"10.188.214.132","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get pods","meta":{"cluster_name":"ocp-lz-sbx"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"default"}
{"timestamp":"2023-05-10T08:58:04.28194656Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["group-test"]},"client":{"type":"BROWSER","ip":"10.188.214.132","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get ns","meta":{"cluster_name":"ocp-lz-sbx"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"default"}
{"timestamp":"2023-05-10T08:58:14.751577479Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["group-test"]},"client":{"type":"BROWSER","ip":"10.188.214.132","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get svc","meta":{"cluster_name":"ocp-lz-sbx"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"default"}
{"timestamp":"2023-05-11T12:23:36.150538305Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["OCP_Admins_AWS"]},"client":{"type":"BROWSER","ip":"10.188.213.208","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get ns","meta":{"cluster_name":"caas-np01-slvr1-121"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"eks-test"}
{"timestamp":"2023-05-16T13:43:53.15731678Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["OCP_Admins_AWS","OCP_Admins"]},"client":{"type":"BROWSER","ip":"10.188.214.132","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get nodes","meta":{"cluster_name":"caas-np01-slvr1-121"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"eks-test"}
{"timestamp":"2023-05-16T13:50:42.277072361Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["OCP_Admins_AWS","OCP_Admins"]},"client":{"type":"BROWSER","ip":"10.188.213.208","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get pods","meta":{"cluster_name":"caas-np01-slvr1-121"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"eks-test"}
{"timestamp":"2023-05-16T13:50:48.707048795Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["OCP_Admins_AWS","OCP_Admins"]},"client":{"type":"BROWSER","ip":"10.188.213.208","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get ns","meta":{"cluster_name":"caas-np01-slvr1-121"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"eks-test"}
{"timestamp":"2023-05-16T13:57:12.42904909Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["Organization Admins","All Local Users"]},"client":{"type":"BROWSER","ip":"10.188.214.132","user_agent":"Mozilla/5.0
(Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get ns","meta":{"cluster_name":"ocp-lz-sbx"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"default"}
{"timestamp":"2023-05-22T04:12:22.604917635Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["Organization Admins","All Local Users"]},"client":{"type":"BROWSER","ip":"10.188.213.208","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get pods","meta":{"cluster_name":"ocp-lz-sbx"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"default"}
{"timestamp":"2023-05-22T04:12:25.958451944Z","version":"1.0","category":"AUDIT","origin":"cluster","actor":{"type":"USER","account":{"username":"[email protected]"},"groups":["Organization Admins","All Local Users"]},"client":{"type":"BROWSER","ip":"10.188.213.208","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36","host":"console.paralus.slvr-corp-corpdemo1.awsdns.internal.das"},"detail":{"message":"kubectl get ns","meta":{"cluster_name":"ocp-lz-sbx"}},"type":"kubectl.command.detail","portal":"ADMIN","project":"default"}

Specifying default `ingress.className`

The chart sometimes fails to set up properly because the ingress className is not given. Should we set the ingress className to "nginx" by default? Was it left out intentionally?
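If a default is added, a template-side fallback keeps it overridable; a sketch, assuming the values key matches the ingress.className mentioned above:

  spec:
    ingressClassName: {{ .Values.ingress.className | default "nginx" }}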

Move kratos-automigrate under kratos init-containers

Describe the issue you're facing

  • Upgrading the kratos helm chart, which defaults to the corresponding app version, causes an issue with the health probe if the migration is run with a different / lower version.

Currently, kratos-automigrate is part of the paralus init containers and the kratos binary is packaged with it (if there is a mismatch between the kratos app version there and the one the helm chart uses, the health probe fails to bring the kratos pod up). Explore the possibility of moving it to a kratos init container to avoid this dependency.
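A sketch of what the move could look like, so the migration always runs with the same image version as the kratos container itself (the image tag and the secret providing the DSN are assumptions for illustration):

  initContainers:
    - name: kratos-automigrate
      image: oryd/kratos:v0.10.1
      command: ["kratos", "migrate", "sql", "-e", "--yes"]
      env:
        - name: DSN
          valueFrom:
            secretKeyRef:
              name: kratos    # hypothetical secret holding the kratos DSN
              key: dsn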

Paralus Helm chart isn't fully compatible with ARM

Expected vs actual behavior

[EXPECTED]
Paralus deploys successfully on ARM.
[ACTUAL BEHAVIOUR]
Components fail to deploy because their images do not support the ARM architecture.

Steps to reproduce the bug

  1. Deploy Paralus as described in the documentation on any ARM Kubernetes cluster

Are you using the latest version of the project?

yes

What is your environment setup? Please tell us your cloud provider, operating system, and include the output of kubectl version --output=yaml and helm version. Any other information that you have, eg. logs and custom values, is highly appreciated!

EKS on AWS

(optional) If you have ideas on why the bug happens or how it can be solved, please provide it here

The third-party services, for example contour, use old images; in fact, not only contour but everything else uses very old images. Newer images of these services include the ARM architecture. Updating the image tags of all these services will make Paralus fully compatible with ARM.

How to add existing IdP groups in Okta to Paralus

Active Directory integration is done with Okta in our Organisation.

The AD groups, along with the users in them, are synced to Okta. We manage multiple Kubernetes clusters and manage RBAC for different users with these AD groups, and we want to register all clusters to Paralus with the same RBAC.

How can I map an existing AD group to Paralus and make sure the users in that AD group have the same set of permissions in Paralus?

Consider not relying on Helm hooks, by providing an alternative

Describe the issue you're facing

We currently use Helm hooks in a few places:

  • kratos restart after upgrade
  • paralus analytics job

This poses a challenge for deployments in environments where Helm hooks are not supported. One example is the Amazon EKS Add-on mechanism, but there are others as well.
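For the kratos restart, one hook-free alternative is the standard checksum-annotation pattern: the pod template carries a hash of its configuration, so any config change rolls the pods on a plain helm upgrade. A sketch, assuming a configmap.yaml template exists alongside the deployment:

  spec:
    template:
      metadata:
        annotations:
          checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}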

Add helm chart test

Briefly describe the feature

Add a helm chart test that validates the installation:

  • all services are up
  • anything else?
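A minimal helm test hook could assert that the main service answers; a sketch only, where the service name and port are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: {{ include "ztka.fullname" . }}-test-connection
    annotations:
      "helm.sh/hook": test
  spec:
    restartPolicy: Never
    containers:
      - name: probe
        image: busybox:1.36
        command: ["wget", "-qO-", "http://dashboard:80"]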

What problem does this feature solve? Please link any relevant documentation or Issues

Verify helm chart after installation.

(optional) What is your current workaround?

Check all pods manually.

Add support for existing K8s secret for database credentials

Briefly describe the feature

  • Reference an existing Kubernetes secret to read the credentials for the database, instead of providing them inline in the values file.
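A hypothetical values shape for this request, where existingSecret is the proposed (not yet existing) option:

  deploy:
    postgresql:
      enable: false
      # Proposed option -- not in the chart today:
      existingSecret: paralus-db-credentials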

What problem does this feature solve? Please link any relevant documentation or Issues

  • It allows deploying the Paralus Helm chart via GitOps (e.g. ArgoCD) by keeping the secrets in an external source and retrieving them via the External Secrets Operator or a similar mechanism.

(optional) What is your current workaround?

  • N/A

Helm Chart: tolerations are not applied to all the components

Expected vs actual behavior

When assigning tolerations, they are applied only to a small number of components and are not honoured for the following ones:

  • fluent-bit
  • kratos-courier
  • kratos
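A possible interim workaround is to pass tolerations to those subcharts through their own values; the key paths below are assumptions about the respective subcharts, not options of this chart:

  fluent-bit:
    tolerations:
      - key: dedicated
        operator: Equal
        value: paralus
        effect: NoSchedule
  kratos:
    deployment:
      tolerations:
        - key: dedicated
          operator: Equal
          value: paralus
          effect: NoSchedule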

Steps to reproduce the bug

  1. Install the Helm Chart with some tolerations by using the tolerations key
  2. Check the components' Kubernetes objects, such as DaemonSets, Deployments, or StatefulSets

Are you using the latest version of the project?

Helm Version ztka-0.2.4

What is your environment setup? Please tell us your cloud provider, operating system, and include the output of kubectl version --output=yaml and helm version. Any other information that you have, eg. logs and custom values, is highly appreciated!

N.R.

(optional) If you have ideas on why the bug happens or how it can be solved, please provide it here

  • I've described the bug, included steps to reproduce it, and included my environment setup with all customizations.

  • I'm using the latest version of the project.

Replace fixed resource names with names unique to the release

Describe the issue you're facing

Hard-coding the name: of a resource is usually considered bad practice. Names should be unique to a release. A fixed name can cause naming conflicts if more than one instance of Paralus is deployed to the same namespace.

Create a helper function to generate a unique name, similar to this, and use it in place of the hard-coded name:

metadata:
  name: {{ include "ztka.fullname" . }}-dashboard
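The helper could follow the usual release-plus-chart naming pattern; a typical definition is sketched below rather than copied from the chart:

  {{- define "ztka.fullname" -}}
  {{- if .Values.fullnameOverride -}}
  {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
  {{- else -}}
  {{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
  {{- end -}}
  {{- end -}}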
