
argoflow's Introduction

Argoflow


⚠️ Argoflow has been superseded by deployKF ⚠️

deployKF makes it easy to build reliable ML Platforms on Kubernetes and supports more than just Kubeflow!



Original README

This repository contains Kustomize manifests that point to the upstream manifests of each Kubeflow component and provides an easy way for people to change their deployment according to their needs. ArgoCD application manifests for each component are used to deploy Kubeflow. The intended usage is for people to fork this repository, make their desired kustomizations, run a script to change the ArgoCD application specs to point to their fork of this repository, and finally apply a master ArgoCD application that deploys all other applications.

To run the script below, yq version 4 must be installed

Overview of the steps:

  • fork this repo
  • modify the kustomizations for your purpose
  • run ./setup_repo.sh <your_repo_fork_url>
  • commit and push your changes
  • install ArgoCD
  • run kubectl apply -f kubeflow.yaml

Folder setup

Root files

Prerequisites

  • kubectl (latest)
  • kustomize 4.0.5
  • docker (if using kind)
  • yq 4.x

Quick Start using kind

Install kind

On linux:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.10.0/kind-linux-amd64
chmod +x ./kind
mv ./kind /<some-dir-in-your-PATH>/kind

On Mac:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.10.0/kind-darwin-amd64
chmod +x ./kind
mv ./kind /<some-dir-in-your-PATH>/kind

On Windows:

curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.10.0/kind-windows-amd64
Move-Item .\kind-windows-amd64.exe c:\some-dir-in-your-PATH\kind.exe

Deploy kind cluster

Note - This will overwrite any existing ~/.kube/config file. Please back up your current file if it already exists.

kind create cluster --config kind/kind-cluster.yaml

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
kubectl patch deployment metrics-server -n kube-system -p '{"spec":{"template":{"spec":{"containers":[{"name":"metrics-server","args":["--cert-dir=/tmp", "--secure-port=4443", "--kubelet-insecure-tls","--kubelet-preferred-address-types=InternalIP"]}]}}}}'

Deploy MetalLB

Edit the IP range in configmap.yaml so that it is within the range of your docker network. To get your docker network range, run the following command:

docker network inspect -f '{{.IPAM.Config}}' kind
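
For example, if the command prints a subnet such as 172.18.0.0/16 (a common default for the kind Docker network, though yours may differ), a MetalLB range like 172.18.255.200-172.18.255.250 would fall within it.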

After updating the metallb configmap, deploy it by running:

kustomize build metallb/ | kubectl apply -f -

Deploy Argo CD

Deploy Argo CD with the following command:

kustomize build argocd/ | kubectl apply -f -

Expose Argo CD with a LoadBalancer to access the UI by executing:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

Get the IP of the Argo CD endpoint:

kubectl get svc argocd-server -n argocd

Log in with the username admin and the output of the following command as the password:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Deploy Kubeflow

To deploy Kubeflow, execute the following command:

kubectl apply -f kubeflow.yaml

Note - This deploys all components of Kubeflow 1.3, so it might take a while for everything to start. It is also unknown at this time what hardware specifications are needed, so your mileage may vary. Finally, this deployment uses the manifests in this repository directly. For instructions on how to customize the deployment and have Argo CD use those manifests, see the next section.

Get the IP of the Kubeflow gateway with the following command:

kubectl get svc istio-ingressgateway -n istio-system

Log in to Kubeflow with the email address [email protected] and the password 12341234

Remove kind cluster

Run: kind delete cluster

Installing ArgoCD

For this installation the HA version of ArgoCD is used. Due to Pod Tolerations, 3 nodes are required for this installation. If you do not wish to use an HA installation of ArgoCD, edit this kustomization.yaml and remove /ha from the URI.

  1. Next, to install ArgoCD execute the following command:

    kustomize build argocd/ | kubectl apply -f -
  2. Install the ArgoCD CLI tool from here

  3. Access the ArgoCD UI by exposing it through a LoadBalancer, Ingress or by port-forwarding using kubectl port-forward svc/argocd-server -n argocd 8080:443

  4. Log in to the ArgoCD CLI. First get the default password for the admin user: kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

    Next, login with the following command: argocd login <ARGOCD_SERVER> # e.g. localhost:8080 or argocd.example.com

    Finally, update the account password with: argocd account update-password

  5. You can now log in to the ArgoCD UI with your new password. This UI will be handy for keeping track of the created resources while deploying Kubeflow.

Note - Argo CD needs to be able to access your repository to deploy applications. If the fork of this repository that you are planning to use with Argo CD is private, you will need to add credentials so it can access the repository. Please see the instructions provided by Argo CD here.

Installing Kubeflow

The purpose of this repository is to make it easy for people to customize their Kubeflow deployment and have it managed through a GitOps tool like ArgoCD. First, fork this repository and clone your fork locally. Next, apply any customizations you require in the kustomize folders of the Kubeflow applications. What follows is a set of recommended changes that we encourage everybody to make.

Credentials

The default username, password and namespace of this deployment are: user, 12341234 and kubeflow-user respectively. To change these, edit the user and profile-name (the namespace for this user) in params.env.
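
For illustration, the two keys mentioned above might look like this in params.env (values are placeholders; keep the rest of the file as provided):

user=<your-username-or-email>
profile-name=<namespace-for-this-user>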

Next, in configmap-path.yaml under staticPasswords, change the email, the hash and the username for your account.

staticPasswords:
- email: user
  hash: $2y$12$4K/VkmDd1q1Orb3xAt82zu8gk7Ad6ReFR4LCP9UeYE90NLiN9Df72
  username: user

The hash is the bcrypt hash of your password. You can generate this using this website, or with the command below:

python3 -c 'from passlib.hash import bcrypt; import getpass; print(bcrypt.using(rounds=12, ident="2y").hash(getpass.getpass()))'

To add new static users to Dex, you can add entries to configmap-path.yaml and set a password as described above. If you have already deployed Kubeflow, commit these changes to your fork so Argo CD detects them. You will also need to kill the Dex pod or restart the Dex deployment. This can be done in the Argo CD UI, or by running the following command:

kubectl rollout restart deployment dex -n auth

Ingress and Certificate

By default the Istio Ingress Gateway is set up to use a LoadBalancer and to redirect HTTP traffic to HTTPS. Manifests for MetalLB are provided to make it easier for users to use a LoadBalancer Service. Edit the configmap.yaml and set a range of IP addresses MetalLB can use under data.config.address-pools.addresses. This must be in the same subnet as your cluster nodes.
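
As a rough sketch, and assuming the default MetalLB ConfigMap name and namespace, the edited configmap.yaml would look something like this (the address range is a placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # replace with a free range in your nodes' subnet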

If you do not wish to use a LoadBalancer, change the spec.type in gateway-service.yaml to NodePort.
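
A minimal sketch of that change, with every other field in gateway-service.yaml left as shipped:

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort   # was LoadBalancer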

To provide HTTPS out of the box, the kubeflow-self-signing-issuer used by internal Kubeflow applications is set up to provide a certificate for the Istio Ingress Gateway.

To use a different certificate for the Ingress Gateway, change the spec.issuerRef.name to the cert-manager ClusterIssuer you would like to use in ingress-certificate.yaml and set the spec.commonName and spec.dnsNames[0] to your Kubeflow domain.
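
A sketch of the fields to touch in ingress-certificate.yaml (the issuer name and domain are placeholders; the remaining fields stay as shipped):

apiVersion: cert-manager.io/v1
kind: Certificate
spec:
  commonName: kubeflow.example.com   # your Kubeflow domain
  dnsNames:
  - kubeflow.example.com
  issuerRef:
    kind: ClusterIssuer
    name: <your-cluster-issuer>      # replaces kubeflow-self-signing-issuer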

If you would like to use LetsEncrypt, a ClusterIssuer template is provided in letsencrypt-cluster-issuer.yaml. Edit this file according to your requirements and uncomment the line in the kustomization.yaml file so it is included in the deployment.

Customizing the Jupyter Web App

To customize the list of images presented in the Jupyter Web App, and other related settings such as allowing custom images, edit the spawner_ui_config.yaml file.

Change ArgoCD application specs and commit

To simplify the process of telling ArgoCD to use your fork of this repo, a script is provided that updates the spec.source.repoURL of all the ArgoCD application specs. Simply run:

./setup_repo.sh <your_repo_fork_url>

If you need to target a specific branch or release of your fork, you can add a second argument to the script to specify it.

./setup_repo.sh <your_repo_fork_url> <your_branch_or_release>

To change which Kubeflow or third-party components are included in the deployment, edit the root kustomization.yaml and comment or uncomment the components you want to exclude or include.
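
As an illustration, the root kustomization.yaml references one Argo CD Application manifest per component; a trimmed, hypothetical excerpt of that resources list might look like this:

resources:
- argocd-applications/kubeflow-roles-namespaces.yaml
- argocd-applications/istio.yaml
- argocd-applications/pipelines.yaml
# - argocd-applications/mpi-operator.yaml   # commented out, so the MPI operator is not deployed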

Next, commit your changes and push them to your repository.

Deploying Kubeflow

Once you've committed and pushed your changes to your repository, you can choose either to deploy components individually or to deploy them all at once. For example, to deploy a single component you can run:

kubectl apply -f argocd-applications/kubeflow-roles-namespaces.yaml

To deploy everything specified in the root kustomization.yaml, execute:

kubectl apply -f kubeflow.yaml

After this, you should start seeing applications being deployed in the ArgoCD UI, along with the resources each application creates.

Updating the deployment

By default, all the ArgoCD application specs included here are set up to automatically sync with the specified repoURL. If you would like to change something about your deployment, simply make the change, commit it and push it to your fork of this repo. ArgoCD will automatically detect the changes and update the necessary resources in your cluster.
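
For context, the automatic syncing comes from the syncPolicy in each Argo CD Application spec; a generic sketch (field values are illustrative, not copied from this repository) looks like:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-component
  namespace: argocd
spec:
  source:
    repoURL: https://github.com/<your-user>/argoflow.git   # set by setup_repo.sh
    targetRevision: HEAD
    path: distribution/kubeflow/pipelines                  # hypothetical path
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated: {}   # Argo CD keeps the cluster in sync with the repo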

Bonus: Extending the Volumes Web App with a File Browser

A large problem for many people is how to easily upload or download data to and from the PVCs mounted as workspace volumes for Notebook Servers. To make this easier, a simple PVCViewer Controller was created (a slightly modified version of the tensorboard-controller). This feature was not ready in time for 1.3, and thus I am only documenting it here as an experimental feature, as I believe many people would like to have this functionality. The images are grabbed from my personal Docker Hub profile, but I can provide instructions for people that would like to build the images themselves. It is also important to note that the PVC Viewer will work with ReadWriteOnce PVCs, even when they are mounted to an active Notebook Server.

Here is an example of the PVC Viewer in action:

PVCViewer in action

To use the PVCViewer Controller, it must be deployed along with an updated version of the Volumes Web App. To do so, deploy experimental-pvcviewer-controller.yaml and experimental-volumes-web-app.yaml instead of the regular Volumes Web App. If you are deploying Kubeflow with the kubeflow.yaml file, you can edit the root kustomization.yaml and comment out the regular Volumes Web App and uncomment the PVCViewer Controller and Experimental Volumes Web App.
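
A hypothetical excerpt of that edit in the root kustomization.yaml (file names match the argocd-applications entries; the exact list in your fork may differ):

# - argocd-applications/volumes-web-app.yaml
- argocd-applications/experimental-pvcviewer-controller.yaml
- argocd-applications/experimental-volumes-web-app.yaml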

Troubleshooting

I can't get letsencrypt to work. The cert-manager logs show 404 errors.

The letsencrypt HTTP-01 challenge is incompatible with using OIDC (Link). If your DNS server allows programmatic access, use the DNS-01 challenge solver instead.
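
As a sketch, a cert-manager ClusterIssuer using the DNS-01 solver with Cloudflare (one of several supported providers) could look roughly like this; the email, secret name and DNS zone are placeholders:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token   # Secret holding your Cloudflare API token
            key: api-token
      selector:
        dnsZones:
        - example.com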

I am having problems getting the deployment to run on a cluster deployed with kubeadm and/or kubespray.

The kube-apiserver needs additional arguments if you are running a Kubernetes version below the recommended version 1.20: --service-account-issuer=kubernetes.default.svc and --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key.

If you are using kubespray, add the following snippet to your group_vars:

kube_kubeadm_apiserver_extra_args: 
  service-account-issuer: kubernetes.default.svc
  service-account-signing-key-file: /etc/kubernetes/ssl/sa.key

I have unbound PVCs with rook-ceph.

Note that the rook deployment shipped with ArgoFlow requires an HA setup with at least 3 nodes.

Make sure that there is a clean partition or drive available for rook to use.

Change the deviceFilter in cluster-patch.yaml to match the drives you want to use. For NVMe drives, change the filter to ^nvme[0-9]. In case you have previously deployed rook on any of the disks, format them, remove the folder /var/lib/rook on all nodes, and reboot. Alternatively, follow the rook-ceph disaster recovery guide to adopt an existing rook-ceph cluster.
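
For illustration, deviceFilter sits under the storage section of the CephCluster spec; a minimal sketch of that part of cluster-patch.yaml (field placement per the upstream rook-ceph documentation, not copied from this repository) might be:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: ^nvme[0-9]   # only drives matching this pattern are used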

argoflow's People

Contributors

benjamintanweihao, davidspek, gecube, haoxins, laserk3000, renovate-bot, renovate[bot], thesuperzapper


argoflow's Issues

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: .github/renovate.json5
Error type: Invalid JSON5 (parsing failed)
Message: JSON5.parse error: JSON5: invalid character '{' at 68:3

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Repository problems

These problems occurred while renovating this repository.

  • WARN: Error updating branch: update failure

Pending Approval

These branches will be created by Renovate only once you click their checkbox below.

  • chore(deps): update actions/checkout action to v3
  • chore(deps): update dependency kubernetes-csi/external-snapshotter to v6
  • chore(deps): update helm release external-dns to v6
  • chore(deps): update helm release gpu-operator to v22
  • chore(deps): update helm release keycloak to v12
  • chore(deps): update helm release kube-prometheus-stack to v41
  • chore(deps): update helm release oauth2-proxy to v6
  • chore(deps): update helm release sealed-secrets to v2
  • chore(deps): update kubeflow-pipelines (major) (gcr.io/ml-pipeline/mysql, gcr.io/ml-pipeline/workflow-controller, gcr.io/tfx-oss-public/ml_metadata_store_server)
  • chore(deps): update redis docker tag to v7
  • 🔐 Create all pending approval PRs at once 🔐

Rate-Limited

These updates are currently rate-limited. Click on a checkbox below to force their creation now.

  • chore(deps): update knative (gcr.io/knative-releases/knative.dev/eventing/cmd/controller, gcr.io/knative-releases/knative.dev/eventing/cmd/mtping, gcr.io/knative-releases/knative.dev/eventing/cmd/webhook, gcr.io/knative-releases/knative.dev/net-istio/cmd/controller, gcr.io/knative-releases/knative.dev/net-istio/cmd/webhook, gcr.io/knative-releases/knative.dev/serving/cmd/activator, gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler, gcr.io/knative-releases/knative.dev/serving/cmd/controller, gcr.io/knative-releases/knative.dev/serving/cmd/queue, gcr.io/knative-releases/knative.dev/serving/cmd/webhook)
  • chore(deps): update dependency horizontalpodautoscaler to autoscaling/v2
  • chore(deps): update dependency metallb/metallb to v0.10.3
  • chore(deps): update dependency poddisruptionbudget to policy/v1
  • chore(deps): update helm release gpu-operator to v1.8.2
  • chore(deps): update metallb/controller docker tag to v0.10.3
  • chore(deps): update metallb/speaker docker tag to v0.10.3
  • chore(deps): update alpine docker tag to v3.17
  • chore(deps): update dependency kubeflow/katib to v0.14.0
  • chore(deps): update dependency kubeflow/kubeflow to v1.6.1
  • chore(deps): update dependency kubeflow/pipelines to v1.8.16
  • chore(deps): update dependency metallb/metallb to v0.13.7
  • chore(deps): update docker.io/kubeflowkatib/katib-controller docker tag to v0.14.0
  • chore(deps): update docker.io/kubeflowkatib/katib-db-manager docker tag to v0.14.0
  • chore(deps): update helm release gpu-operator to v1.11.1
  • chore(deps): update helm release loki-stack to v2.8.7
  • chore(deps): update helm release oauth2-proxy to v4.2.2
  • chore(deps): update istio/operator docker tag to v1.16.0
  • chore(deps): update kubeflow-pipelines to v1.8.5 (minor) (gcr.io/ml-pipeline/api-server, gcr.io/ml-pipeline/cache-deployer, gcr.io/ml-pipeline/cache-server, gcr.io/ml-pipeline/frontend, gcr.io/ml-pipeline/metadata-envoy, gcr.io/ml-pipeline/metadata-writer, gcr.io/ml-pipeline/persistenceagent, gcr.io/ml-pipeline/scheduledworkflow, gcr.io/ml-pipeline/viewer-crd-controller, gcr.io/ml-pipeline/visualization-server)
  • chore(deps): update metallb/controller docker tag to v0.12.1
  • chore(deps): update metallb/speaker docker tag to v0.12.1
  • chore(deps): update public.ecr.aws/j1r0q0g6/notebooks/access-management docker tag to v1.5.0
  • chore(deps): update public.ecr.aws/j1r0q0g6/notebooks/admission-webhook docker tag to v1.5.0
  • chore(deps): update public.ecr.aws/j1r0q0g6/notebooks/central-dashboard docker tag to v1.5.0
  • chore(deps): update public.ecr.aws/j1r0q0g6/notebooks/jupyter-web-app docker tag to v1.5.0
  • chore(deps): update public.ecr.aws/j1r0q0g6/notebooks/notebook-controller docker tag to v1.5.0
  • chore(deps): update public.ecr.aws/j1r0q0g6/notebooks/profile-controller docker tag to v1.5.0
  • chore(deps): update public.ecr.aws/j1r0q0g6/notebooks/tensorboards-web-app docker tag to v1.5.0
  • chore(deps): update public.ecr.aws/j1r0q0g6/notebooks/volumes-web-app docker tag to v1.5.0
  • chore(deps): update rook-ceph to v1.10.6 (minor) (rook/ceph, rook/rook)
  • 🔐 Create all rate-limited PRs at once 🔐

Errored

These updates encountered an error and will be retried. Click on a checkbox below to force a retry now.


⚠ Dependency Lookup Warnings ⚠

  • Renovate failed to look up the following dependencies: gcr.io/kfserving/kfserving-controller.

Files affected: distribution/kubeflow/kfserving/kustomization.yaml


Open

These updates have all been created already. Click a checkbox below to force a retry/rebase of any.

Detected dependencies

argocd
distribution/argocd-applications/argocd-private-repo.yaml
distribution/argocd-applications/argocd.yaml
distribution/argocd-applications/central-dashboard.yaml
distribution/argocd-applications/cert-manager-dns-01.yaml
distribution/argocd-applications/cert-manager-self-signing.yaml
distribution/argocd-applications/certificates-imported.yaml
distribution/argocd-applications/certificates.yaml
distribution/argocd-applications/cloudflare-secrets.yaml
distribution/argocd-applications/experimental-pvcviewer-controller.yaml
distribution/argocd-applications/experimental-volumes-web-app.yaml
distribution/argocd-applications/external-dns.yaml
  • external-dns 5.1.1
distribution/argocd-applications/istio-operator.yaml
distribution/argocd-applications/istio-resources.yaml
distribution/argocd-applications/istio.yaml
distribution/argocd-applications/jupyter-web-app.yaml
distribution/argocd-applications/katib.yaml
distribution/argocd-applications/kfserving.yaml
distribution/argocd-applications/kiali.yaml
  • kiali-operator 1.35.0
distribution/argocd-applications/knative.yaml
distribution/argocd-applications/kube-prometheus-stack.yaml
  • kube-prometheus-stack 16.10.0
distribution/argocd-applications/kubecost-resources.yaml
distribution/argocd-applications/kubecost.yaml
  • cost-analyzer 1.81.0
distribution/argocd-applications/kubeflow-namespace.yaml
distribution/argocd-applications/kubeflow-profiles.yaml
distribution/argocd-applications/kubeflow-roles.yaml
distribution/argocd-applications/loki-stack.yaml
  • loki-stack 2.4.1
distribution/argocd-applications/metallb.yaml
distribution/argocd-applications/mlflow.yaml
distribution/argocd-applications/monitoring-resources.yaml
distribution/argocd-applications/mpi-operator.yaml
distribution/argocd-applications/mxnet-operator.yaml
distribution/argocd-applications/nginx.yaml
distribution/argocd-applications/notebook-controller.yaml
distribution/argocd-applications/nvidia-gpu-operator.yaml
  • gpu-operator v1.8.1
distribution/argocd-applications/oidc-auth-external.yaml
distribution/argocd-applications/oidc-auth-on-cluster-dex.yaml
distribution/argocd-applications/oidc-auth-on-cluster-keycloak.yaml
distribution/argocd-applications/pipelines.yaml
distribution/argocd-applications/pod-defaults.yaml
distribution/argocd-applications/profile-controller_access-management.yaml
distribution/argocd-applications/pytorch-operator.yaml
distribution/argocd-applications/rook-ceph.yaml
distribution/argocd-applications/sealed-secrets.yaml
  • sealed-secrets 1.16.1
distribution/argocd-applications/tensorboard-controller.yaml
distribution/argocd-applications/tensorboards-web-app.yaml
distribution/argocd-applications/tensorflow-operator.yaml
distribution/argocd-applications/volumes-web-app.yaml
distribution/argocd-applications/xgboost-operator.yaml
distribution/oidc-auth/base/oauth2-proxy.yaml
  • oauth2-proxy 4.0.5
distribution/oidc-auth/overlays/dex/dex.yaml
  • dex 0.4.0
distribution/oidc-auth/overlays/keycloak/keycloak.yaml
  • keycloak 3.1.1
github-actions
.github/workflows/release.yaml
  • actions/checkout v2
  • cycjimmy/semantic-release-action v2
kubernetes
distribution/argocd/base/patches/add-custom-kustomize.yaml
  • alpine 3.8
  • Deployment apps/v1
distribution/istio-resources/kubeflow-cluster-roles.yaml
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
distribution/knative/eventing-core-v0_22_0.yaml
  • gcr.io/knative-releases/knative.dev/eventing/cmd/controller sha256:b02cfc6d0858de1ae6d5d5acbe1ac2ed1c5411f2adcec417c2b113b3b3274e4a
  • gcr.io/knative-releases/knative.dev/eventing/cmd/mtping sha256:ac30b62fa390b01c24a9bb891c4b8aa7a8c6c747a5182592d81af58d65eaa65c
  • gcr.io/knative-releases/knative.dev/eventing/cmd/webhook sha256:5f037fe6755fb85fb0a155f9892c8519a058dbf395d2d04b2c6769ffd2d68950
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • RoleBinding rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • Deployment apps/v1
  • Deployment apps/v1
  • HorizontalPodAutoscaler autoscaling/v2beta2
  • PodDisruptionBudget policy/v1beta1
  • Deployment apps/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • Role rbac.authorization.k8s.io/v1
  • ValidatingWebhookConfiguration admissionregistration.k8s.io/v1
  • MutatingWebhookConfiguration admissionregistration.k8s.io/v1
  • ValidatingWebhookConfiguration admissionregistration.k8s.io/v1
  • MutatingWebhookConfiguration admissionregistration.k8s.io/v1
distribution/knative/net-istio-v0_22_0.yaml
  • gcr.io/knative-releases/knative.dev/net-istio/cmd/controller sha256:17ee40a68cda50772375dcc4230efa99e7f8666a050ad2ffcd0338ff31c1bfaa
  • gcr.io/knative-releases/knative.dev/net-istio/cmd/webhook sha256:1da4b47f1778005b3cf07d384cac27c8c688628b9e7e631f15dd6ac3456c3039
  • ClusterRole rbac.authorization.k8s.io/v1
  • MutatingWebhookConfiguration admissionregistration.k8s.io/v1
  • ValidatingWebhookConfiguration admissionregistration.k8s.io/v1
  • Deployment apps/v1
  • Deployment apps/v1
distribution/knative/serving-core-v0_22_0.yaml
  • gcr.io/knative-releases/knative.dev/serving/cmd/queue sha256:6cd0c234bfbf88ac75df5243c2f9213dcc9def610414c506d418f9388187b771
  • gcr.io/knative-releases/knative.dev/serving/cmd/activator sha256:91e67a579378fa39d7c941e379db183464c3add3d53b4617f65d9cbc2f0c770a
  • gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler sha256:761dc36210e69ebef3a64ce72ad9f54f8172e4aed6b97e8a706e3128956ec54d
  • gcr.io/knative-releases/knative.dev/serving/cmd/controller sha256:d772809059033e437d6e98248a334ded37b6f430c2ca23377875cc2459a3b73e
  • gcr.io/knative-releases/knative.dev/serving/cmd/webhook sha256:268bd1383b56ba7b9acf391c681f7a63780c22dcd4555c2f4a7b61ec6da81cf4
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRole rbac.authorization.k8s.io/v1
  • ClusterRoleBinding rbac.authorization.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • CustomResourceDefinition apiextensions.k8s.io/v1
  • HorizontalPodAutoscaler autoscaling/v2beta2
  • PodDisruptionBudget policy/v1beta1
  • Deployment apps/v1
  • Deployment apps/v1
  • Deployment apps/v1
  • HorizontalPodAutoscaler autoscaling/v2beta2
  • PodDisruptionBudget policy/v1beta1
  • Deployment apps/v1
  • ValidatingWebhookConfiguration admissionregistration.k8s.io/v1
  • MutatingWebhookConfiguration admissionregistration.k8s.io/v1
  • ValidatingWebhookConfiguration admissionregistration.k8s.io/v1
distribution/kubeflow/notebooks/central-dashboard/enable-registration-flow.yaml
  • Deployment apps/v1
distribution/mlflow/deployment.yaml
  • Deployment apps/v1
distribution/nginx/deployment_patch.yaml
  • Deployment apps/v1
distribution/rook-ceph/cluster-patch.yaml
  • ceph/ceph v16.2.4
distribution/rook-ceph/dashboard-ingress.yaml
  • Ingress extensions/v1beta1
distribution/rook-ceph/monitoring/dashboard-set-grafana-uri.yaml
distribution/rook-ceph/rgw-external-ingress.yaml
  • Ingress extensions/v1beta1
distribution/rook-ceph/set-default-storage.yaml
  • StorageClass storage.k8s.io/v1
kustomize
distribution/argocd/base/kustomization.yaml
  • ghcr.io/dexidp/dex v2.27.0
  • quay.io/argoproj/argocd v2.0.3
  • haproxy 2.0.20-alpine
  • redis 6.2.1-alpine
distribution/argocd/overlays/private-repo/kustomization.yaml
  • ghcr.io/dexidp/dex v2.27.0
  • quay.io/argoproj/argocd v2.0.3
  • haproxy 2.0.20-alpine
  • redis 6.2.1-alpine
distribution/cert-manager/base/kustomization.yaml
  • quay.io/jetstack/cert-manager-controller v1.4.0
  • quay.io/jetstack/cert-manager-cainjector v1.4.0
  • quay.io/jetstack/cert-manager-webhook v1.4.0
distribution/knative/kustomization.yaml
  • gcr.io/knative-releases/knative.dev/serving/cmd/activator sha256:91e67a579378fa39d7c941e379db183464c3add3d53b4617f65d9cbc2f0c770a
  • gcr.io/knative-releases/knative.dev/serving/cmd/autoscaler sha256:761dc36210e69ebef3a64ce72ad9f54f8172e4aed6b97e8a706e3128956ec54d
  • gcr.io/knative-releases/knative.dev/serving/cmd/webhook sha256:268bd1383b56ba7b9acf391c681f7a63780c22dcd4555c2f4a7b61ec6da81cf4
  • gcr.io/knative-releases/knative.dev/serving/cmd/controller sha256:d772809059033e437d6e98248a334ded37b6f430c2ca23377875cc2459a3b73e
  • gcr.io/knative-releases/knative.dev/net-istio/cmd/controller sha256:17ee40a68cda50772375dcc4230efa99e7f8666a050ad2ffcd0338ff31c1bfaa
  • gcr.io/knative-releases/knative.dev/net-istio/cmd/webhook sha256:1da4b47f1778005b3cf07d384cac27c8c688628b9e7e631f15dd6ac3456c3039
  • gcr.io/knative-releases/knative.dev/eventing/cmd/controller sha256:b02cfc6d0858de1ae6d5d5acbe1ac2ed1c5411f2adcec417c2b113b3b3274e4a
  • gcr.io/knative-releases/knative.dev/eventing/cmd/webhook sha256:5f037fe6755fb85fb0a155f9892c8519a058dbf395d2d04b2c6769ffd2d68950
  • gcr.io/knative-releases/knative.dev/eventing/cmd/mtping sha256:ac30b62fa390b01c24a9bb891c4b8aa7a8c6c747a5182592d81af58d65eaa65c
distribution/kubeflow/katib/kustomization.yaml
  • docker.io/kubeflowkatib/katib-controller v0.11.1
  • docker.io/kubeflowkatib/katib-db-manager v0.11.1
  • docker.io/kubeflowkatib/katib-new-ui v0.11.1
  • mysql 8
distribution/kubeflow/kfserving/kustomization.yaml
  • gcr.io/kfserving/kfserving-controller v0.5.1
distribution/kubeflow/notebooks/central-dashboard/kustomization.yaml
  • public.ecr.aws/j1r0q0g6/notebooks/central-dashboard v1.3.0
distribution/kubeflow/notebooks/experimental-pvcviewer-controller/kustomization.yaml
  • davidspek/kubeflow-pvcviewer-controller 0.7
distribution/kubeflow/notebooks/experimental-volumes-web-app/kustomization.yaml
  • davidspek/volumes-web-app 0.5.4
distribution/kubeflow/notebooks/jupyter-web-app/kustomization.yaml
  • public.ecr.aws/j1r0q0g6/notebooks/jupyter-web-app v1.3.0
distribution/kubeflow/notebooks/notebook-controller/kustomization.yaml
  • public.ecr.aws/j1r0q0g6/notebooks/notebook-controller v1.3.0
distribution/kubeflow/notebooks/pod-defaults/kustomization.yaml
  • public.ecr.aws/j1r0q0g6/notebooks/admission-webhook v1.3.0
distribution/kubeflow/notebooks/profile-controller_access-management/kustomization.yaml
  • public.ecr.aws/j1r0q0g6/notebooks/access-management v1.3.0
  • public.ecr.aws/j1r0q0g6/notebooks/profile-controller v1.3.0
distribution/kubeflow/notebooks/tensorboard-controller/kustomization.yaml
distribution/kubeflow/notebooks/tensorboards-web-app/kustomization.yaml
  • public.ecr.aws/j1r0q0g6/notebooks/tensorboards-web-app v1.3.0
distribution/kubeflow/notebooks/volumes-web-app/kustomization.yaml
  • public.ecr.aws/j1r0q0g6/notebooks/volumes-web-app v1.3.0
distribution/kubeflow/operators/mpi/kustomization.yaml
distribution/kubeflow/operators/mxnet/kustomization.yaml
  • kubeflow/mxnet-operator v1.1.0
distribution/kubeflow/operators/pytorch/kustomization.yaml
distribution/kubeflow/operators/tensorflow/kustomization.yaml
distribution/kubeflow/operators/xgboost/kustomization.yaml
  • kubeflow/xgboost-operator v0.2.0
distribution/kubeflow/pipelines/kustomization.yaml
  • gcr.io/ml-pipeline/cache-deployer 1.6.0
  • gcr.io/ml-pipeline/cache-server 1.6.0
  • gcr.io/ml-pipeline/metadata-envoy 1.6.0
  • gcr.io/tfx-oss-public/ml_metadata_store_server 0.30.0
  • gcr.io/ml-pipeline/metadata-writer 1.6.0
  • gcr.io/ml-pipeline/api-server 1.6.0
  • gcr.io/ml-pipeline/persistenceagent 1.6.0
  • gcr.io/ml-pipeline/scheduledworkflow 1.6.0
  • gcr.io/ml-pipeline/frontend 1.6.0
  • gcr.io/ml-pipeline/viewer-crd-controller 1.6.0
  • gcr.io/ml-pipeline/visualization-server 1.6.0
  • gcr.io/ml-pipeline/mysql 5.7
  • gcr.io/ml-pipeline/workflow-controller v2.12.9-license-compliance
distribution/metallb/kustomization.yaml
  • metallb/controller v0.10.2
  • metallb/speaker v0.10.2
distribution/nginx/kustomization.yaml
distribution/rook-ceph/kustomization.yaml
  • rook/ceph v1.6.5
regex
distribution/argocd/base/kustomization.yaml
  • argoproj/argo-cd v2.0.3@8d2b13d733e1dff7d1ad2c110ed31be4804406e2
distribution/kubeflow/katib/kustomization.yaml
  • kubeflow/katib 0.11.1@036296a2e8e36e44077396fedd687953baf5dbc4
distribution/kubeflow/notebooks/central-dashboard/kustomization.yaml
  • kubeflow/kubeflow v1.3.0@0e91a2b9cd0c3b6687692b1f1f09ac6070cc6c3e
distribution/kubeflow/notebooks/jupyter-web-app/kustomization.yaml
  • kubeflow/kubeflow v1.3.0@0e91a2b9cd0c3b6687692b1f1f09ac6070cc6c3e
distribution/kubeflow/notebooks/notebook-controller/kustomization.yaml
  • kubeflow/kubeflow v1.3.0@0e91a2b9cd0c3b6687692b1f1f09ac6070cc6c3e
distribution/kubeflow/notebooks/pod-defaults/kustomization.yaml
  • kubeflow/kubeflow v1.3.0@0e91a2b9cd0c3b6687692b1f1f09ac6070cc6c3e
distribution/kubeflow/notebooks/profile-controller_access-management/kustomization.yaml
  • kubeflow/kubeflow v1.3.0@0e91a2b9cd0c3b6687692b1f1f09ac6070cc6c3e
distribution/kubeflow/notebooks/tensorboard-controller/kustomization.yaml
  • kubeflow/kubeflow v1.3.0@0e91a2b9cd0c3b6687692b1f1f09ac6070cc6c3e
distribution/kubeflow/notebooks/tensorboards-web-app/kustomization.yaml
  • kubeflow/kubeflow v1.3.0@0e91a2b9cd0c3b6687692b1f1f09ac6070cc6c3e
distribution/kubeflow/notebooks/volumes-web-app/kustomization.yaml
  • kubeflow/kubeflow v1.3.0@0e91a2b9cd0c3b6687692b1f1f09ac6070cc6c3e
distribution/kubeflow/operators/mxnet/kustomization.yaml
  • kubeflow/mxnet-operator v1.1.0@905d519e1bdc7d2d95131a5fa65fa0de83932fc9
distribution/kubeflow/operators/pytorch/kustomization.yaml
  • kubeflow/pytorch-operator v0.7.0@2aae331f8b31e95c3a187ec07a93d8d11fc7bb78
distribution/kubeflow/operators/tensorflow/kustomization.yaml
  • kubeflow/tf-operator v1.1.0@f564bce4ac856d347ef1e3f8b131d10740d54972
distribution/kubeflow/operators/xgboost/kustomization.yaml
  • kubeflow/xgboost-operator v0.2.0@579a656311423762f21873d6ecbbc87fcdff628f
distribution/kubeflow/pipelines/kustomization.yaml
  • kubeflow/pipelines 1.6.0@1c66f93f5149a8d5ed7f33895d3ebc01e662d837
distribution/metallb/kustomization.yaml
  • metallb/metallb v0.10.2@99469a412510da616538825c7a5ecb1ff0dbc59d
distribution/cert-manager/base/kustomization.yaml
  • jetstack/cert-manager v1.4.0
distribution/rook-ceph/kustomization.yaml
  • rook/rook v1.6.5
  • rook/rook v1.6.5
  • rook/rook v1.6.5
  • kubernetes-csi/external-snapshotter v4.1.1
  • kubernetes-csi/external-snapshotter v4.1.1
  • kubernetes-csi/external-snapshotter v4.1.1
  • kubernetes-csi/external-snapshotter v4.1.1
  • kubernetes-csi/external-snapshotter v4.1.1
  • rook/rook v1.6.5
  • rook/rook v1.6.5
  • rook/rook v1.6.5
  • rook/rook v1.6.5
  • rook/rook v1.6.5
  • rook/rook v1.6.5
  • rook/rook v1.6.5
  • rook/rook v1.6.5
distribution/rook-ceph/cluster-patch.yaml
  • ceph/ceph v16.2.4
distribution/argocd-applications/istio-operator.yaml
  • istio/operator 1.10.1
distribution/istio/istio-spec.yaml
  • istio/operator 1.10.1

  • Check this box to trigger a request for Renovate to run again on this repository

"RBAC: access denied" when accessing the dashboard

I managed to install, but when accessing the dashboard through "https://kubeflow.test-kubeflow.com:31186"
I got "RBAC: access denied".

Authorizationpolicy and gateway specs

my-prod ~]$ kubectl -nistio-system get svc istio-ingressgateway -o yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: istio
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.10.1
    release: istio
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          f:app: {}
          f:install.operator.istio.io/owning-resource: {}
          f:install.operator.istio.io/owning-resource-namespace: {}
            f:targetPort: {}
          k:{"port":443,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":15021,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          f:app: {}
          f:istio: {}
    manager: istio-operator
    operation: Apply
    time: "2021-07-06T13:41:43Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:type: {}
    manager: kubectl-edit
    operation: Update
    time: "2021-07-06T13:46:06Z"
  name: istio-ingressgateway
  namespace: istio-system
  resourceVersion: "7238"
  uid: 5b57ebf1-9065-42a5-bdfb-eb6501b8d21d
spec:
  clusterIP: 172.30.214.224
  clusterIPs:
  - 172.30.214.224
  externalTrafficPolicy: Cluster
  ports:
  - name: status-port
    nodePort: 30293
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 31935
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 31186
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
my-prod ~]$
my-prod ~]$
my-prod ~]$
my-prod ~]$ kubectl -nauth get gateway auth-gateway -o yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  labels:
    app.kubernetes.io/instance: oidc-auth
  managedFields:
  - apiVersion: networking.istio.io/v1alpha3
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
      f:spec:
        .: {}
        f:selector:
          .: {}
          f:istio: {}
        f:servers: {}
    manager: argocd-application-controller
    operation: Update
  name: auth-gateway
  namespace: auth
  resourceVersion: "4662"
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - auth.test-kubeflow.com
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - auth.test-kubeflow.com
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: auth-ingressgateway-certs
      mode: SIMPLE
[my-prod ~]$
[my-prod ~]$
[my-prod ~]$ kubectl -nauth get authorizationpolicy auth-allow-access -o yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  annotations:
  generation: 1
  labels:
    app.kubernetes.io/instance: oidc-auth
  managedFields:
  - apiVersion: security.istio.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
      f:spec:
        .: {}
        f:action: {}
        f:rules: {}
    manager: argocd-application-controller
    operation: Update
    time: "2021-07-05T04:41:14Z"
  name: auth-allow-access
  namespace: auth
  resourceVersion: "3692"
  selfLink: /apis/security.istio.io/v1beta1/namespaces/auth/authorizationpolicies/auth-allow-access
  uid: b2f27739-8a78-4143-b69d-a2382bd23110
spec:
  action: ALLOW
  rules:
  - {}
[my-prod ~]$
[my-prod ~]$
[my-prod ~]$
[my-prod ~]$ kubectl -nistio-system get authorizationpolicy auth-allow-in-cluster-redirect -o yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  annotations:
  generation: 1
  labels:
    app.kubernetes.io/instance: oidc-auth
  managedFields:
  - apiVersion: security.istio.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
      f:spec:
        .: {}
        f:action: {}
        f:rules: {}
        f:selector:
          .: {}
          f:matchLabels:
            .: {}
            f:app: {}
            f:istio: {}
    manager: argocd-application-controller
    operation: Update
    time: "2021-07-05T04:41:14Z"
  name: auth-allow-in-cluster-redirect
  namespace: istio-system
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        hosts:
        - auth.test-kubeflow.com
        - kubeflow.test-kubeflow.com
        - serving.test-kubeflow.com
        - '*.serving.test-kubeflow.com'
  selector:
    matchLabels:
      app: istio-ingressgateway
      istio: ingressgateway
[my-prod ~]$
[my-prod ~]$
[my-prod ~]$
[my-prod ~]$ kubectl -nistio-system get authorizationpolicy istio-ingressgateway -o yaml
      apiVersion: security.istio.io/v1beta1
      kind: AuthorizationPolicy
      metadata:
        generation: 1
        labels:
          app.kubernetes.io/instance: istio-resources
        managedFields:
        - apiVersion: security.istio.io/v1beta1
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:annotations:
                .: {}
                f:kubectl.kubernetes.io/last-applied-configuration: {}
              f:labels:
                .: {}
                f:app.kubernetes.io/instance: {}
            f:spec:
              .: {}
              f:action: {}
              f:provider:
                .: {}
                f:name: {}
              f:rules: {}
              f:selector:
                .: {}
                f:matchLabels:
                  .: {}
                  f:app: {}
                  f:istio: {}
          manager: argocd-application-controller
          operation: Update
          time: "2021-07-05T04:41:15Z"
        name: istio-ingressgateway
        namespace: istio-system
      spec:
        action: CUSTOM
        provider:
          name: oauth2-proxy
        rules:
        - to:
          - operation:
              hosts:
              - kubeflow.test-kubeflow.com
              - serving.test-kubeflow.com
        selector:
          matchLabels:
            app: istio-ingressgateway
            istio: ingressgateway

Thanks.

Issue with standalone cert-manager install

Getting an error while trying the standalone install command mentioned in kubeflow/kubeflow#5803 (comment)

kustomize build github.com/argoflow/argoflow/cert-manager | kubectl apply -f -
Error

-2021/04/27 02:39:27 evalsymlink failure on '/tmp/kustomize-548914861/releases/download/v1.3.1/cert-manager.yaml' : lstat /tmp/kustomize-548914861/releases: no such file or directoryError: accumulating resources: accumulating resources from 'https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml': evalsymlink failure on '/tmp/kustomize-752007942/cert-manager/https:/github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml' : lstat /tmp/kustomize-752007942/cert-manager/https:: no such file or directoryerror: no objects passed to apply

Thanks in advance for your help!

rook-ceph deployments fails for non-HA clusters

I encountered the problem of rook-ceph failing to deploy on non-HA clusters. As the reason is not immediately apparent, I suggest adding a single node rook-ceph config for testing environments.

authn-filter not syncing

When I try to get the Envoy filter to apply, I get the following error:

error validating data: ValidationError(EnvoyFilter.spec.configPatches[0]): unknown field "listener" in io.istio.networking.v1alpha3.EnvoyFilter.spec.configPatches

The following part, starting at line 33, does not apply:

listener:
  filterChain:
    filter:
      name: envoy.http_connection_manager
      subFilter:
        name: ''

I'm not sure why this is, and what I did wrong to get this error.

Running on Kubernetes RKE 1.20.6 on Ubuntu 20.10
ArgoCD v2.0.3+8d2b13d

Question: Migrate to Managed MySQL (RDS)

I'm looking to migrate to a managed MySQL database and use RDS. The documentation I found assumes you are about to deploy from scratch and helps you migrate to AWS or Azure, but I'm wondering if anybody has migrated to or pointed at a managed DB, even if that means losing data.

Thanks in advance.

Exception Deploying Stack using ArgoCD v1.8.7 and Kustomize v4.0.5 custom-tool

Hello everyone,

I have encountered the following error for the Dex Istio Application resource after trying to deploy using ArgoCD version 1.2.

rpc error: code = Unknown desc = Manifest generation error (cached): 
`kustomize build /tmp/https:__github.com_argoflow_argoflow/kubeflow/common/dex-istio --enable_alpha_plugins` failed exit status 1: 
Error: accumulating resources: accumulateFile "accumulating resources from 'github.com/kubeflow/manifests/common/dex/overlays/istio': evalsymlink failure on '/tmp/https:__github.com_argoflow_argoflow/kubeflow/common/dex-istio/github.com/kubeflow/manifests/common/dex/overlays/istio' : 
lstat /tmp/https:__github.com_argoflow_argoflow/kubeflow/common/dex-istio/github.com: no such file or directory", accumulateDirector: "recursed accumulation of path '/tmp/kustomize-166861452/repo': accumulating resources: accumulateFile \"accumulating resources from '../../base': 
evalsymlink failure on '/tmp/base' : lstat /tmp/base: no such file or directory\", loader.New \"error loading ../../base with git: url lacks host: ../../base, dir: evalsymlink failure on '/tmp/base' : lstat /tmp/base: no such file or directory, get: invalid source string: ../../base\""

Kustomize 4.0.5 is not present on my current Argo server; could this be the cause of the error?
All the other components have started successfully, so my hunch is that Kustomize 4.0.5 is definitely a dependency for building this stack.

Any ideas?
Thanks!

Accessing Central Dashboard

/kind question

Hello everyone,

I've gone through the steps to fork the repo and apply the components I need as outlined in the readme. However, I'm having some issues configuring the dashboard. I want to set up an ingress gateway that forwards requests from the external load balancer to the service that hosts the dashboard. I'm just unclear what destination host to use to achieve this.

Please let me know if I can clarify my question further.

Thanks in advance!

incorrect installation steps overview in readme

The readme currently states:

Overview of the steps:

  • fork this repo
  • modify the kustomizations for your purpose
  • run ./setup_repo.sh <your_repo_fork_url>
  • commit and push your changes
  • install ArgoCD
  • run kubectl apply -f kubeflow.yaml

./setup_repo.sh <your_repo_fork_url> is incorrect. Referencing the setup script as well as the argoflow-aws readme, the correct steps should be something like:

Overview of the steps:

  • fork this repo
  • modify the kustomizations for your purpose
  • create a setup.conf file (example) in the root of the repo
  • run ./setup_repo.sh setup.conf
  • commit and push your changes
  • install ArgoCD
  • run kubectl apply -f kubeflow.yaml

Experimental Volume Web App on Kubeflow 1.4

Hi @davidspek, mind if I ask whether your experimental volumes web app and PVC viewer controller work with Kubeflow 1.4?

I tried it and always got a 404 error saying:

10.233.109.223 - - [06/Jan/2022:07:22:23 +0000] "GET /api/namespaces/kubeflow-user-example-com/pvcs HTTP/1.1" 404 145 "https://10.66.146.35:31779/volumes/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0"
2022-01-06 07:22:24,741 | kubeflow.kubeflow.crud_backend.authn | INFO | Handling request for user: [email protected]
2022-01-06 07:22:24,741 | kubeflow.kubeflow.crud_backend.csrf | INFO | Skipping CSRF check for safe method: GET
2022-01-06 07:22:24,768 | kubeflow.kubeflow.crud_backend.errors.handlers | ERROR | An error occured talking to k8s while working on http://10.66.146.35:31779/api/namespaces/kubeflow-user-example-com/pvcs: (404)
Reason: Not Found
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Type': 'text/plain; charset=utf-8', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '6d69d16e-7b69-4b1c-92fc-aa51a5195563', 'X-Kubernetes-Pf-Prioritylevel-Uid': '91be2f99-9e75-404e-b2cb-8d51d8442856', 'Date': 'Thu, 06 Jan 2022 07:22:24 GMT', 'Content-Length': '19'})
HTTP response body: 404 page not found
