
skooner's Introduction

Skooner - Kubernetes Dashboard

We are changing our name from k8dash to Skooner! Please bear with us as we update our documentation and codebase to reflect this change. If you previously installed k8dash, you will need to uninstall it from your cluster and install Skooner instead. In most cases this can be done by deleting the old deployment and service, as shown below.
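
A sketch, assuming k8dash was installed into kube-system (the namespace used by the provided manifests); adjust the namespace if you installed it elsewhere:

kubectl delete deployment,service k8dash --namespace=kube-system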

Skooner is the easiest way to manage your Kubernetes cluster. Skooner is now a sandbox project of the Cloud Native Computing Foundation!

  • Full cluster management: Namespaces, Nodes, Pods, Replica Sets, Deployments, Storage, RBAC and more
  • Blazing fast and Always Live: no need to refresh pages to see the latest cluster status
  • Quickly visualize cluster health at a glance: Real time charts help quickly track down poorly performing resources
  • Easy CRUD and scaling: plus inline API docs to easily understand what each field does
  • 100% responsive (runs on your phone/tablet)
  • Simple OpenID integration: no special proxies required
  • Simple installation: use the provided yaml resources to have Skooner up and running in under 1 minute (no, seriously)
  • See Skooner in action: Skooner - Kubernetes Dashboard

Table of Contents

Prerequisites

(Back to Table of Contents)

Getting Started

Deploy Skooner with something like the following...

NOTE: never trust a file downloaded from the internet. Make sure to review the contents of kubernetes-skooner.yaml before running the script below.

kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner.yaml

To access Skooner, you must make it publicly visible. If you have an ingress server set up, you can accomplish this by adding a route like the following:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: skooner
  namespace: kube-system
spec:
  rules:
    - host: skooner.example.com
      http:
        paths:
          - path: /
            backend:
              service:
                name: skooner
                port:
                  number: 80
            pathType: ImplementationSpecific

Note: the networking.k8s.io/v1 Ingress API is required on Kubernetes v1.22+; the extensions/v1beta1 Ingress API is deprecated as of v1.14 and removed in v1.22.

(Back to Table of Contents)

kubectl proxy

Unfortunately, kubectl proxy cannot be used to access Skooner. According to this comment, it seems that kubectl proxy strips the Authorization header when it proxies requests.

this is working as expected. "proxying" through the apiserver will not get you standard proxy behavior (preserving Authorization headers end-to-end), because the API is not being used as a standard proxy

(Back to Table of Contents)

Logging in

There are multiple options for logging into the dashboard: Service Account Token, OIDC, and NodePort.

Service Account Token

The first (and easiest) option is to create a dedicated service account. In the command line:

# Create the service account in the current namespace (we assume default)
kubectl create serviceaccount skooner-sa

# Give that service account root on the cluster
kubectl create clusterrolebinding skooner-sa --clusterrole=cluster-admin --serviceaccount=default:skooner-sa

# For Kubernetes v1.21 or lower
# Find the secret that was created to hold the token for the SA
kubectl get secrets

# Show the contents of the secret to extract the token
kubectl describe secret skooner-sa-token-xxxxx

# For Kubernetes v1.22 or higher
kubectl create token skooner-sa

Copy the token value from the secret, and enter it into the login screen to access the dashboard.
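
On v1.21 or lower, if you prefer a one-liner, something like the following should print the decoded token directly (a sketch; it assumes the token secret is still referenced from the service account, as it is on older clusters):

kubectl get secret $(kubectl get serviceaccount skooner-sa -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode

On v1.22 or higher, kubectl create token prints the token itself, so no decoding is needed.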

OIDC

Skooner makes using OpenID Connect for authentication easy. Assuming your cluster is configured to use OIDC, all you need to do is create a secret containing your credentials and apply kubernetes-skooner-oidc.yaml.

To learn more about configuring a cluster for OIDC, check out these great links.

You can deploy Skooner with OIDC support using something like the following script...

NOTE: never trust a file downloaded from the internet. Make sure to review the contents of kubernetes-skooner-oidc.yaml before running the script below.

OIDC_URL=<put your endpoint url here... something like https://accounts.google.com>
OIDC_ID=<put your id here... something like blah-blah-blah.apps.googleusercontent.com>
OIDC_SECRET=<put your oidc secret here>

kubectl create secret -n kube-system generic skooner \
--from-literal=url=$OIDC_URL \
--from-literal=id=$OIDC_ID \
--from-literal=secret=$OIDC_SECRET

kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner-oidc.yaml

Additionally, you can provide other OIDC options via these environment variables:

  • OIDC_SCOPES: The default value is openid email, but additional scopes can be added using something like OIDC_SCOPES="openid email groups"
  • OIDC_METADATA: Skooner uses the excellent node-openid-client module. OIDC_METADATA takes a JSON string and passes it to the Client constructor. Docs here. For example, OIDC_METADATA='{"token_endpoint_auth_method":"client_secret_post"}'
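
These are ordinary environment variables on the Skooner deployment. As a sketch (assuming the deployment is named skooner in kube-system), extra scopes could be added to a running instance with:

kubectl set env deployment/skooner -n kube-system OIDC_SCOPES="openid email groups"

Note that changing an environment variable triggers a rolling restart of the pods.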

NodePort

If you do not have an ingress server set up, you can utilize a NodePort service as configured in kubernetes-skooner-nodeport.yaml. This is ideal when creating a single-node master, or if you want to get up and running as fast as possible.

This will map Skooner port 4654 to a randomly selected port on the running node. The assigned port can be found using:

$ kubectl get svc --namespace=kube-system

NAME      TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
skooner   NodePort   10.107.107.62   <none>        4654:32565/TCP   1m
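
In the example output above, Skooner is then reachable on any node's IP at the assigned port, e.g. http://<node-ip>:32565. On minikube, minikube service skooner --namespace=kube-system --url should print this URL for you.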

Metrics

Skooner relies heavily on metrics-server to display real time cluster metrics. It is strongly recommended to have metrics-server installed to get the best experience from Skooner.
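
If you don't already have it, a typical installation (per the metrics-server project; as above, review any manifest before applying it) looks something like:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify that metrics are flowing (this may take a minute or two after install)
kubectl top nodes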

(Back to Table of Contents)

Development

You will need:

  • A running Kubernetes cluster
    • Installing and running minikube is an easy way to get this.
    • Once minikube is installed, you can run it with the command minikube start --driver=docker
  • Once the cluster is up and running, create some login credentials as described above
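
Putting the above together, a minimal local setup might look like this (a sketch; it reuses the service account commands from the Logging in section):

minikube start --driver=docker

kubectl create serviceaccount skooner-sa
kubectl create clusterrolebinding skooner-sa --clusterrole=cluster-admin --serviceaccount=default:skooner-sa

# v1.22+; for older clusters, see the Service Account Token section above
kubectl create token skooner-sa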

(Back to Table of Contents)

Skooner Architecture

Server

To run the server, run npm i from the /server directory to install dependencies and then npm start to run the server. The server is a simple express.js server that is primarily responsible for proxying requests to the Kubernetes api server.

During development, the server will use whatever is configured in ~/.kube/config to connect to the desired cluster. If you are using minikube, for example, you can run kubectl config use-context minikube to get ~/.kube/config set up correctly.
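
For example, assuming a context named minikube:

# Make minikube the active context so the dev server picks it up from ~/.kube/config
kubectl config use-context minikube

cd server
npm i       # install dependencies
npm start   # start the express server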

Client

The client is a React application (using TypeScript) with minimal other dependencies.

To run the client, open a new terminal tab, navigate to the /client directory, and run npm i followed by npm start. This will open a browser window to your local Skooner dashboard. If everything compiles correctly, the site will load and an error message will pop up: Unhandled Rejection (Error): Api request error: Forbidden.... The error message has an 'X' in the top right-hand corner to close it. After you close it, you should see the UI where you can enter your token.
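
In other words, something like:

cd client
npm i       # install dependencies
npm start   # compile and open the dashboard in a browser window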

(Back to Table of Contents)

Troubleshooting

Recommendations for Keycloak configuration:

  1. Set OIDC_URL to the Keycloak OpenID endpoint configuration page:
  • OIDC_URL=https://{keycloak_domain}/realms/foo/.well-known/openid-configuration
  • Also set $OIDC_ID locally with OIDC_ID={client_id}
  • You can get $OIDC_SECRET from Keycloak
    • (You need to set the Client authentication toggle to on; for older versions of Keycloak, switch the access type to confidential.)
  2. When creating the secret, use the correct variable names and the namespace Skooner is deployed in (kube-system by default):
kubectl create secret generic skooner \
--from-literal=url=$OIDC_URL \
--from-literal=id=$OIDC_ID \
--from-literal=secret=$OIDC_SECRET \
--namespace=kube-system
  3. Following that, redeploy the Skooner server with kubectl apply -f https://raw.githubusercontent.com/skooner-k8s/skooner/master/kubernetes-skooner-oidc.yaml

  4. Make sure Skooner is running by checking kubectl rollout status deploy/skooner --namespace=kube-system. If it is not, report the error along with the output of kubectl describe pod skooner --namespace=kube-system.

  5. [Optional] Create an ingress for Skooner; you can take provision/keycloak/skooner-ingress.yaml as an example.

  6. Visit Skooner and check whether login succeeds.

  7. [Troubleshooting] If the API call returns 403 with a message containing an error like: User \"system:anonymous\" cannot list resource \"selfsubjectrulesreviews\" in API group \"authorization.k8s.io\" at the cluster scope

    • This means you'll need a cluster role binding. You can take provision/keycloak/skooner-oidc-patch.yaml as an example; see also the sketch after this list.
    • @elieassi suggests creating a service account separately, which seems more secure, but I haven't tested it. See this issue for more details.
  8. If it still fails, please report both the client and server errors. Client error: check the browser console and send a screenshot. Server error: check the logs with kubectl logs deploy/skooner --namespace=kube-system. Note that RequestError: connect ECONNREFUSED may indicate a configuration issue rather than a Skooner bug.
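
As a sketch of the cluster role binding mentioned in step 7 (hypothetical names; it assumes your API server is configured with an OIDC username prefix of oidc: via --oidc-username-prefix, and it grants cluster-admin, which you may want to narrow):

kubectl create clusterrolebinding skooner-oidc-admin \
--clusterrole=cluster-admin \
--user=oidc:you@example.com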

License

Apache License 2.0


(Back to Table of Contents)


skooner's Issues

istio crd issues

Found that once the Istio mesh has been installed, CRDs keep reporting in the dashboard as being an issue, when in reality they have completed.


No logs in dashboard

Hi. I can't seem to get any logs on the pods. As a matter of fact, there are no logs for anything. I am running k8dash as described in the README and all went well with no issues returned. My only problem is that logs are missing, or the feature just seems to be dead for now. My developers rely heavily on logs to trace issues in a pod. Please help.
Kind Regards
Darrell

read-only account

Hi,

I'm trying to log in to k8dash with a read-only account, with no success.

Steps to reproduce:

1. Create a ServiceAccount

apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8dash-cluster-reader
  namespace: default

2. Create a ClusterRole

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-reader
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: [
    "get",
    "list",
    "proxy",
    "redirect",
    "watch"
  ]
- nonResourceURLs: ["*"]
  verbs: ["get"]

3. Create a ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8dash-cluster-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- kind: ServiceAccount
  name: k8dash-cluster-reader
  namespace: default

If I change the ClusterRole verbs to verbs: ["*"] I'm able to log in, and if I then restore the read-only ClusterRole definition above while I'm logged in, everything works as expected, so I think the problem could be the login check...

Any suggestions?

Thanks in advance,
Joan

Auth issues - call to /tokenreviews fails

Environment:
AKS (K8s version 1.12.6)

With ingress (Nginx):
The login page loads (GET) but any POST fails because the endpoint returns 404.
Error message: Error occured attempting to login.
Instead of contacting the API, the request is routed back to the web app.

Request URL: https://something.com/apis/authentication.k8s.io/v1/tokenreviews
Request Method: POST
Status Code: 404

Logs:

OIDC_URL:  None
[HPM] Proxy created: /  ->  https://something.hcp.westeurope.azmk8s.io:443
Server started
GET /
GET /static/css/2.7b1d7de3.chunk.css
GET /static/js/2.ab8f1278.chunk.js
GET /static/css/main.a9446ed5.chunk.css
GET /static/js/main.c1206f38.chunk.js
GET /static/css/2.7b1d7de3.chunk.css.map
GET /static/css/main.a9446ed5.chunk.css.map
GET /static/js/2.ab8f1278.chunk.js.map
GET /oidc
GET /static/js/main.c1206f38.chunk.js.map
GET /favicon.ico
GET /manifest.json
GET /
POST /apis/authentication.k8s.io/v1/tokenreviews
GET /

The same thing happens when port-forwarded.

Request URL: http://localhost:4654/apis/authentication.k8s.io/v1/tokenreviews
Request Method: POST
Status Code: 404 Not Found

Helm chart?

Hi.

Do you plan to create a helm chart for it? Will be easier to test.

Thx.

the UI does not show data after login

The dashboard UI does not show any data after login. Sometimes it just hangs after pressing the login button with the secret token.
We have installed the NodePort version of k8dash (kubernetes-k8dash-nodeport.yaml).
Please check and assist at the earliest.

OS : Ubuntu 16.04.5 LTS

root@ubuntu03: kubectl get svc k8dash -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
k8dash NodePort 10.111.223.113 4654:31330/TCP 12h
root@ubuntu03:

Browser URL:
http://192.168.0.17:31330/#!
(192.168.0.17 is a worker node)

root@ubuntu03: kubectl get nodes --all-namespaces
NAME STATUS ROLES AGE VERSION
ubuntu01 Ready 5d2h v1.16.2
ubuntu02 Ready 12d v1.16.2
ubuntu03 Ready master 12d v1.16.2
root@ubuntu03:

root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-dc6cb64cb-x97cv 1/1 Running 4 12d
calico-node-9zzkq 0/1 Running 1 5d2h
calico-node-csx7b 1/1 Running 4 12d
calico-node-p2lvb 1/1 Running 1 12d
coredns-5644d7b6d9-7fpl8 1/1 Running 4 12d
coredns-5644d7b6d9-l69xb 1/1 Running 4 12d
etcd-ubuntu03 1/1 Running 4 12d
k8dash-58c77f9d45-dg84m 1/1 Running 0 113s
kube-apiserver-ubuntu03 1/1 Running 4 12d
kube-controller-manager-ubuntu03 1/1 Running 4 12d
kube-proxy-46986 1/1 Running 1 5d2h
kube-proxy-7cg6x 1/1 Running 1 12d
kube-proxy-gzg2d 1/1 Running 4 12d
kube-scheduler-ubuntu03 1/1 Running 4 12d
kubernetes-dashboard-7c9d8bcbbc-wrk99 1/1 Running 3 10d
metrics-server-8fc66cfdf-mkfhv 1/1 Running 0 21s
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+#

k8dash pod logs

root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+# kubectl logs k8dash-58c77f9d45-dg84m -n kube-system
OIDC_URL: None
API URL: https://10.96.0.1:443
[HPM] Proxy created: / -> https://10.96.0.1:443
[HPM] Subscribed to http-proxy events: [ 'error', 'close' ]
Server started
(node:6) [DEP0123] DeprecationWarning: Setting the TLS ServerName to an IP address is not permitted by RFC 6066. This will be ignored in a future version.
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
Error getting cluster info Error: connect ETIMEDOUT 10.96.0.1:443
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1054:14) {
errno: 'ETIMEDOUT',
code: 'ETIMEDOUT',
syscall: 'connect',
address: '10.96.0.1',
port: 443
}
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
GET / 200
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+#

K8s version:
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+# kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
root@ubuntu03:/new-softwares/metrics-server/deploy/1.8+#

Browser Screenshot


Cannot View Workloads Tab as Non-admin

Hi there,

I really dig k8dash, thank you! I want to allow our developers to use it, as they prefer a UI over the command line. Our developer role has read-only access to everything in their namespace. I had to add a cluster role for them to list all namespaces so they can select the namespace they have access to in order to see anything. Once the namespace is selected, it only shows pods; there's no WORKLOADS tab displayed. What clusterroles/roles are necessary to see the WORKLOADS tab?


Thanks!

Support passing bearer token in header

Without logging in via the UI, if I pass a Bearer token with a request, it redirects to the token login screen. It would be nice if it could recognize that I already have a token and use it, without needing to go through the browser login flow.

Browser user/password?

On running the install as per the README, I get prompted for a basic auth user & password.

This prevents me from entering the auth token.

edit: forgot to mention I was trying to access it via kubectl port-forward service/k8dash 8080:80

Installation on AKS

This is probably a question more than an issue.

I tried installing k8dash on an Azure AKS cluster at version 1.12.6 with RBAC enabled.
Following the manual, I am unable to get the auth token (option 1) to work.

In the logs I see

OIDC_URL:  None
[HPM] Proxy created: /  ->  https://noise-dns-eea9be65.hcp.eastus.azmk8s.io:443
[HPM] Subscribed to http-proxy events:  [ 'error', 'close' ]
Server started
Version Info:  {
    "major": "1",
    "minor": "12",
    "gitVersion": "v1.12.6",
    "gitCommit": "ab91afd7062d4240e95e51ac00a18bd58fddd365",
    "gitTreeState": "clean",
    "buildDate": "2019-02-26T12:49:28Z",
    "goVersion": "go1.10.8",
    "compiler": "gc",
    "platform": "linux/amd64"
}
Available APIs:  [
    "admission.certmanager.k8s.io/v1beta1",
    "admissionregistration.k8s.io/v1beta1",
    "apiextensions.k8s.io/v1beta1",
    "apiregistration.k8s.io/v1",
    "apps/v1",
    "authentication.k8s.io/v1",
    "authorization.k8s.io/v1",
    "autoscaling/v1",
    "batch/v1",
    "certificates.k8s.io/v1beta1",
    "certmanager.k8s.io/v1alpha1",
    "coordination.k8s.io/v1beta1",
    "events.k8s.io/v1beta1",
    "extensions/v1beta1",
    "metrics.k8s.io/v1beta1",
    "networking.k8s.io/v1",
    "policy/v1beta1",
    "rbac.authorization.k8s.io/v1",
    "scheduling.k8s.io/v1beta1",
    "storage.k8s.io/v1"
]
Auth Response:  {
    "kind": "SelfSubjectAccessReview",
    "apiVersion": "authorization.k8s.io/v1",
    "metadata": {
        "creationTimestamp": null
    },
    "spec": {
        "resourceAttributes": {}
    },
    "status": {
        "allowed": false,
        "reason": "no RBAC policy matched"
    }
}
GET / 304
GET /static/css/main.74e8a81c.chunk.css 304
GET /static/css/2.7b1d7de3.chunk.css 304
GET /static/js/2.429c3e96.chunk.js 304
GET /static/js/main.41a4b71b.chunk.js 304
GET /oidc 304
GET /manifest.json 304
GET /favicon.ico 200
GET / 200
GET / 200
GET / 200
GET / 200
[HPM] POST /apis/authorization.k8s.io/v1/selfsubjectaccessreviews -> https://noise-dns-eea9be65.hcp.eastus.azmk8s.io:443
POST /apis/authorization.k8s.io/v1/selfsubjectaccessreviews 401

The "no RBAC policy matched" seems ominous. Is there any way you could help me get this awesome dashboard up and running?

Invalid credentials with a token when using kubectl proxy

Hello.
How to reproduce:
run kubectl proxy
go to 'http://localhost:8001/api/v1/namespaces/kube-system/services/http:k8dash:/proxy/' in a browser and try to log in
k8dash's logs:

[HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://10.96.0.1:443
POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 403

apiserver's logs

Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.253445    6002 handler.go:153] kube-aggregator: POST "/api/v1/namespaces/kube-system/services/http:k8dash:/proxy/apis/authorization.k8s.io/v1/selfsubjectrulesreviews" satisfied by nonGoRestful
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.254161    6002 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-system/services/http:k8dash:/proxy/apis/authorization.k8s.io/v1/selfsubjectrulesreviews" satisfied by prefix /api/
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.254495    6002 handler.go:143] kube-apiserver: POST "/api/v1/namespaces/kube-system/services/http:k8dash:/proxy/apis/authorization.k8s.io/v1/selfsubjectrulesreviews" satisfied by gorestful with webservice /api/v1
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.276633    6002 rbac.go:118] RBAC DENY: user "system:anonymous" groups ["system:unauthenticated"] cannot "create" resource "selfsubjectrulesreviews.authorization.k8s.io" cluster-wide
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.276984    6002 authorization.go:73] Forbidden: "/apis/authorization.k8s.io/v1/selfsubjectrulesreviews", Reason: ""
Jun 19 00:14:14 kube-apiserver[6002]: I0619 00:14:14.277770    6002 wrap.go:47] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews: (1.328014ms) 403 [Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/74.0.3729.169 Chrome/74.0.3729.169 Safari/537.36 95.217.56.49:56004]

Running at a non-root URL

We run all cluster admin tools behind an ingress controller (traefik) and mount tools under their own subpaths such as /grafana and /graylog. Currently k8dash cannot run under a subpath because it uses absolute references to /js etc.

Options to resolve:

  • k8dash could use only relative URLs (might be difficult due to React routing)
  • Make URL base path configurable via environment/config variable (this is how Graylog does it)
  • Always run k8dash under /k8dash and just make the root page a redirect there (quick & dirty hack)

Unfortunately, my React mojo was not strong enough to put together a PR...

Feature Request: Show ready/not ready, node type via node icons

First great job on the dashboard -- seems a lot more lightweight and less buggy than the standard dashboard. We have a couple of minor UI requests:

On the nodes page it would be nice if unready nodes showed up by default on top, and maybe with a red icon, instead of the text 'READY' column. Even without the icon change, it would be good to float unready nodes to the top by default (we run on bare metal, so unready nodes are a big deal for us). We have alerts, obviously, but it still seems like a logical change to the UI.

It would also be great if on that same page it was a bit more obvious which nodes were masters. You can look at the labels, but an icon change (or replacing the READY column with a MASTER column) would be pretty nice.

Thanks again for all your great work!

Logs are repeated

I noticed that when we open the pod logs page, it sometimes shows the same logs multiple times. Do you know about this issue?

Feature Request: Authenticating with the service account

Great application.

A feature request:
If the pod is run with a service account, the frontend should not prompt for an API token.
I tried starting it with the following spec:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: {{ .Release.Namespace }}-{{ .Chart.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Namespace }}-{{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Namespace }}-{{ .Chart.Name }}
    spec:
      serviceAccountName: {{ .Release.Namespace }}-{{ .Chart.Name }}
      containers:
      - name: k8dash
        imagePullPolicy: Always
        image: {{ .Values.image.repository}}:{{ .Values.image.tag }}
[...]

Where serviceAccountName refers to a service account with cluster-admin permissions, but the login still prompts for the token. I can't easily use OIDC on EKS and I don't want to expose the token to users. It would really be nice to have this.

Namespace doesn't load

Hi

I feel really bad because I'm always asking stuff, so I tried to find out where to fix it myself, but I realised that I'm not qualified enough. 😓

When I click one of the namespaces, the namespace doesn't get loaded.


I get the following in the k8dash logs:

[HPM] GET /api/v1/namespaces/[object%20Object] -> https://10.96.0.1:443
GET /api/v1/namespaces/[object%20Object] 404
[HPM] GET /api/v1/namespaces/[object%20Object]/events -> https://10.96.0.1:443
GET /api/v1/namespaces/[object%20Object]/events 200
[HPM] GET /api/v1/namespaces/[object%20Object]/events?watch=1&resourceVersion=597953 -> https://10.96.0.1:443
[HPM] Upgrading to WebSocket

Feature Request: Apply Manifests in Git Repository

This is a feature request to make K8Dash significantly more powerful than the official Kubernetes dashboard.

  1. K8Dash should accept the environment variables required for reading a Git repository.

For example:

MANIFEST_GIT_URL=https://github.com/herbrandson/kubernetes-manifests.git
MANIFEST_GIT_USERNAME=herbrandson
MANIFEST_GIT_PASSWORD=asdf1234
  2. K8Dash should have a UI and a log window with a simple button: "Sync"

  3. K8Dash should pull all files in the Git repository using a library like https://github.com/isomorphic-git/isomorphic-git

  4. K8Dash should apply all JSON and YAML files in that Git repository using the Kubernetes Server-Side Apply API kubernetes/enhancements#555

  5. K8Dash should then write logs to the window in the UI.

Problem using OIDC authentication

Hi, when I try to use my OIDC provider (Keycloak) with k8dash, it doesn't work.
In the pod logs I have:

[HPM] POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews -> https://10.96.0.1:443
POST /apis/authorization.k8s.io/v1/selfsubjectrulesreviews 403
GET /favicon.ico 200
GET /static/js/2.db22b280.chunk.js.map 304
GET /static/js/main.34226f17.chunk.js.map 304
GET /static/css/main.0d6d7525.chunk.css.map 304
GET /static/css/2.b522e268.chunk.css.map 304
(node:8) UnhandledPromiseRejectionWarning: ReferenceError: next is not defined
    at getOidc (/usr/src/app/index.js:79:9)
    at processTicksAndRejections (internal/process/task_queues.js:89:5)
(node:8) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 5)

and in the browser network tab, for the path /apis/authorization.k8s.io/v1/selfsubjectrulesreviews, I have the response:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "selfsubjectrulesreviews.authorization.k8s.io is forbidden: User \"system:anonymous\" cannot create resource \"selfsubjectrulesreviews\" in API group \"authorization.k8s.io\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "group": "authorization.k8s.io",
    "kind": "selfsubjectrulesreviews"
  },
  "code": 403
}

I don't understand why k8dash uses the system:anonymous account.

I use k8s version 1.15.4.

Feature Request: make color configurable

Would it be possible to make the main color of the web interface configurable somewhere in the deployment?

Use case: when running multiple instances of k8dash in test and production environments, I would like to make the differences between the environments visually very clear.

Feature Request: Redeploy pods that are created by deployments/replicasets

Hi

Just small nice-to-have feature request.

I often want to redeploy pods created by deployments when I update a ConfigMap or Secret.
I do the following:

kubectl patch deployment/nginx --type='json' -p='[{"op": "replace", "path": "/spec/template/metadata/annotations", "value": {"serial": "'$(date '+%Y%m%d%H%M%S')'"} }]' --record

The command is just an example. If we could get a button that inserts/updates something similar on deployments, that would be nice.

Create and Deploy Missing

I have not tried the official Kubernetes dashboard, but it looks like you can deploy and create Pods from the dashboard. Is this feature missing here? If so, any plans to add it?

Thanks,
Raghu

https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#deploying-containerized-applications

Dashboard lets you create and deploy a containerized application as a Deployment and optional Service with a simple wizard. You can either manually specify application details, or upload a YAML or JSON file containing application configuration

helm chart not installing

A plain helm install of the initial k8dash chart is failing:

$ helm install k8dash-0.0.1.tgz 
Error: validation failed: error validating "": error validating data: ValidationError(Service.spec.ports[0]): unknown field "targetport" in io.k8s.api.core.v1.ServicePort

Helm is the latest stable version at this time (v2.14.0).

$ helm version
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}

anonymous user

Hi,

I'm just wondering if we can have an anonymous user log in to the dashboard without using a token.
At the moment we use Keycloak outside the cluster to authenticate the read-only user, but then after authentication we need to use the token again, which makes it ugly.
Is it possible to have an anonymous mode?

Node CPU Use in the Dashboard not working

Hi,
I have used k8dash to create a dashboard. Everything seems to work fine, except that Node CPU use and Node RAM use under the Nodes tab are not working. Any ideas what could be missing?
Please see the relevant information below. Thanks,
Raghu

ubuntu2:$ kubectl get pods -A -o wide | grep metrics-server
kube-system metrics-server-67db467b7b-89b6g 1/1 Running 0 5h12m 192.200.10.29 rack13-cluster-oam-2
ubuntu2:
$ kubectl get pods -A -o wide | grep k8dash
kube-system k8dash-8684c6bfbd-d9zj8 1/1 Running 0 11m 192.200.3.54 rack13-cluster-oam-1


Create fixed tags to prevent breaking external users

We are using the master tag, and with the latest update from 4 days ago, this broke our k8dash instance and users were caught in an infinite loop of login/logout.

We have opened the following PR: #33, which we compiled and are using as a local image for now. We would like to move back to the public images while pinning to a fixed tag version (e.g. v1.0.1).

Add ability to sort unready nodes to the top of the node list

Refined issue from #59. On the node list, it would be good to have unready nodes (not ready or unknown) float to the top of the display.

For most users, this is probably the most important thing to see in a node list. In particular, we run on bare metal, so dead nodes are a big deal. Even with alerting, this seems like a sensible default.

Left panel autohide based on clusterrolebinding

Sorry for the description...

As an admin, k8dash is great! But for users (once you can connect without the cluster-admin role, #19) it would be great to be able to hide some of the left pane icons.

For example: roles, nodes...

Maybe a sort of low-rights profile where we can define what can be seen.

YAML formatting not working in Firefox

The YAML formatting seems to be broken in Firefox, it works for me in Chrome.

How to reproduce: go to some resource and click the EDIT button; everything is displayed on one line.

It seems to me that this is a browser issue (Firefox ignoring newlines)

support ISTIO CRD

Hi,

The dashboard looks great, but I'm just wondering if there is a plan to add CRD support to the dashboard; in particular I'm looking for Istio support. At the moment that's a big gap for this dashboard.

Can not see any log of pod online

ไผไธšๅพฎไฟกๆˆชๅ›พ_157262008622

Logs of the k8dash pod look like:
[HPM] GET /api/v1/namespaces/production/pods/ipquery-v1-867579b494-5nrjs/log?container=ipquery-k8s&previous=false&tailLines=1000&follow=true -> https://10.96.0.1:443
GET /api/v1/namespaces/production/pods/ipquery-v1-867579b494-5nrjs/log?container=ipquery-k8s&previous=false&tailLines=1000&follow=true 403

Maybe it cannot connect to https://10.96.0.1:443.

OIDC groups not supported

It seems that the OIDC scope information is hardcoded. We are using OIDC groups to authenticate against k8s. If the OIDC scope could be manually defined, I think groups could work.

Keycloak support

Hi,

I'm using Keycloak as an OIDC provider; has anyone succeeded with k8dash?

I still get "invalid credentials" in k8dash, but Keycloak is working fine (I use it for Grafana and the legacy Kubernetes dashboard).

I just set up a basic OpenID Connect client.

Sorry for not being more detailed, but in case anyone has had this issue....

I also had a look at the secret base64 encoding, but it doesn't seem to be that.

Thanks

Auth error when the token is too large

Here we are having some auth issues when the auth token is too large because of the groups list in the JWT. It is common in our network for a user to be a member of a lot of LDAP groups.


In these requests, the last successful one was the oidc request that returned the token in the response body.

In the next ones (all selfsubjectrulesreviews requests), it sends the same token returned by the previous oidc request as an Authorization HTTP header, and then every request fails because of this big header.

Does this authorization token need to be sent in the header, or could it be sent in another way?

Make token/password field fillable

As the field for the token is a password-type field, it should be fillable using the attribute autocomplete="password"; this allows managers like 1Password or LastPass to keep hold of these keys and is much more secure than passing them around the traditional way.

Invalid credentials with oidc auth with dex

Hi,

I get an invalid credentials error like the one below when authenticating with Dex as an OIDC provider.

An error occured during the request { OpenIdConnectError: invalid_client (Invalid client credentials.)
    at Client.requestErrorHandler (/usr/src/app/node_modules/openid-client/lib/helpers/error_handler.js:16:11)
    at processTicksAndRejections (internal/process/next_tick.js:81:5)
  error: 'invalid_client',
  error_description: 'Invalid client credentials.' } POST /oidc
POST /oidc 500

If I turn off OIDC auth, k8dash asks for a token, and it works if I enter a valid token.
Dex is authenticating with github.com and it works fine with kubectl.
Here are the kubectl settings:

user:
    auth-provider:
      config:
        client-id: kubernetes
        client-secret: ZXhhbXBsZS1hcHAtc2VjcmV0
        extra-scopes: offline_access openid profile email groups
        id-token: REDACTED
        idp-certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMrakNDQWVLZ0F3SUJBZ0lKQU1lRXJhSHYzNXJWTUEwR0NTcUdTSWIzRFFFQkN3VUFNQkl4RURBT0JnTlYKQkFNTUIydDFZbVV0WTJFd0hoY05NVGt3TXpNeE1Ua3dPVEE0V2hjTk1Ua3dOREV3TVRrd09UQTRXakFTTVJBdwpEZ1lEVlFRRERBZHJkV0psTFdOaE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBCjBkb2NjV3Zpb29xbDRVa05oejFCZ01KV25JU0w5TUExRm1ySEZ4U2hUYysrL1V0VURxMVVlU0xCRXpXTjNZZmcKQm5TQVNBQUNmS0lCRTBDRWJWdzhSTUtodXJReExGT0hQUDBodWtVRGkxNmVnaXBHSjI0WWdWcnJ4cUpVYWxsYQo2cUpaTkdsUHQ3SmxWdWtrSHRlY0hONjVneG0wQjBzMWtwV1VRNFh2L0E2ZldOaHVhV3VqYlRjRWx0SEFtQlJnCmtmMHpRYnV2ZCtMRnl3V0V2VDdBai9ua1FVZko1L21DOTQyUmlYVDNXdUtyc1g1a3F3ellrVU9xN2hOM1B1aVQKU1NYRm9JNUxqQWd5eDVqVEhubDdmb3JWSnhObDYvdEc2eFg4S3BxMmpST3FZSzlUWFdhSFlDVktQeTlMUTFuegpBNG9jTXQyRkFzREY4a2ZMUjBhK2l3SURBUUFCbzFNd1VUQWRCZ05WSFE0RUZnUVVMK1gzejRKWkhDZkg4Ry80Ckl0ZDhUdUZ5ZEV3d0h3WURWUjBqQkJnd0ZvQVVMK1gzejRKWkhDZkg4Ry80SXRkOFR1RnlkRXd3RHdZRFZSMFQKQVFIL0JBVXdBd0VCL3pBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQU1rTFB0dkZoZlZxM0VibUJFU3dER09ZdwpVYjFYS0VKb1JEVGV5dlozamZSWGhTVDlmdmM0bC9GMWVOd1ZKZnhXb0piUjdCU0JmbURiNzR5anBOcGVYS2xZClZVWnE1Mmx1dnlwNDlFNHJOQ1JHTDNzL0NjUnFnV0tqVmxKZWZGakg2TU8zYTZnM0NFZElGNXJSZi8zRXFGSDYKZm9tUkZ0MEw5NzZodmpGRXFyMlVYR01yTk1LMUN6YXJreDhaUXNkekwySGFhMzV6ei9aUG1PdFA1a2dzYUlMegpoSC9CQ215N242Q2pDVmx3UXZFRmFUOXVRRDZWa216eVNmQ29oaGo4WFYwanBMa2doeG12cGJRdzFDWmwvcDJSCkRwSTh3aCtNVkhGczMvZzNKa0lqUkU0SVJtV2ROWE5hWTBwMVVZUEVIMys3bDlDOXZTQ2Q3OXgvSTZtOVB3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        idp-issuer-url: https://dex.example.com:32000
        refresh-token: ChlibzZjeDJyNnMzNWMzZjVoeWpuZm5oem8zEhltaWt3YmRxc3Eyem1qeHAyNmk2ZWlqYnd0
      name: oidc

And these are the k8s yaml manifests:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: k8dash
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: k8dash
  template:
    metadata:
      labels:
        k8s-app: k8dash
    spec:
      hostAliases:
      - hostnames:
        - dex.example.com
        ip: 10.0.2.100
      containers:
      - name: k8dash
        image: herbrandson/k8dash:dev
        command:
        - sh
        - -c
        - |
          npm config set cafile /ca/dex-ca.pem
          /sbin/tini -- node .
        ports:
        - containerPort: 4654
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 4654
          initialDelaySeconds: 30
          timeoutSeconds: 30
        env:
        - name: OIDC_URL
          valueFrom:
            secretKeyRef:
              name: k8dash
              key: url
        - name: OIDC_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: k8dash
              key: id
        - name: OIDC_SECRET
          valueFrom:
            secretKeyRef:
              name: k8dash
              key: secret
        - name: NODE_EXTRA_CA_CERTS
          value: /ca/dex-ca.pem
        - name: OIDC_SCOPES
          value: "openid email groups"
        volumeMounts:
        - name: cafile
          mountPath: /ca
      volumes:
      - name: cafile
        configMap:
          name: k8dash

---
kind: Service
apiVersion: v1
metadata:
  name: k8dash
  namespace: kube-system
spec:
  ports:
    - port: 80
      targetPort: 4654
  selector:
    k8s-app: k8dash

---
apiVersion: v1
data:
  dex-ca.pem: |
    -----BEGIN CERTIFICATE-----
    MIIC+jCCAeKgAwIBAgIJAMeEraHv35rVMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
    BAMMB2t1YmUtY2EwHhcNMTkwMzMxMTkwOTA4WhcNMTkwNDEwMTkwOTA4WjASMRAw
    DgYDVQQDDAdrdWJlLWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
    0doccWviooql4UkNhz1BgMJWnISL9MA1FmrHFxShTc++/UtUDq1UeSLBEzWN3Yfg
    BnSASAACfKIBE0CEbVw8RMKhurQxLFOHPP0hukUDi16egipGJ24YgVrrxqJUalla
    6qJZNGlPt7JlVukkHtecHN65gxm0B0s1kpWUQ4Xv/A6fWNhuaWujbTcEltHAmBRg
    kf0zQbuvd+LFywWEvT7Aj/nkQUfJ5/mC942RiXT3WuKrsX5kqwzYkUOq7hN3PuiT
    SSXFoI5LjAgyx5jTHnl7forVJxNl6/tG6xX8Kpq2jROqYK9TXWaHYCVKPy9LQ1nz
    A4ocMt2FAsDF8kfLR0a+iwIDAQABo1MwUTAdBgNVHQ4EFgQUL+X3z4JZHCfH8G/4
    Itd8TuFydEwwHwYDVR0jBBgwFoAUL+X3z4JZHCfH8G/4Itd8TuFydEwwDwYDVR0T
    AQH/BAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAMkLPtvFhfVq3EbmBESwDGOYw
    Ub1XKEJoRDTeyvZ3jfRXhST9fvc4l/F1eNwVJfxWoJbR7BSBfmDb74yjpNpeXKlY
    VUZq52luvyp49E4rNCRGL3s/CcRqgWKjVlJefFjH6MO3a6g3CEdIF5rRf/3EqFH6
    fomRFt0L976hvjFEqr2UXGMrNMK1Czarkx8ZQsdzL2Haa35zz/ZPmOtP5kgsaILz
    hH/BCmy7n6CjCVlwQvEFaT9uQD6VkmzySfCohhj8XV0jpLkghxmvpbQw1CZl/p2R
    DpI8wh+MVHFs3/g3JkIjRE4IRmWdNXNaY0p1UYPEH3+7l9C9vSCd79x/I6m9Pw==
    -----END CERTIFICATE-----
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: k8dash
  namespace: kube-system
---
apiVersion: v1
data:
  id: a3ViZXJuZXRlcw==
  secret: ZXhhbXBsZS1hcHAtc2VjcmV0
  url: aHR0cHM6Ly9kZXguZXhhbXBsZS5jb206MzIwMDA=
kind: Secret
metadata:
  creationTimestamp: null
  name: k8dash
  namespace: kube-system

Do you have any idea?

Feature Request: Add support for Replication Controllers in Workloads view

Hi, great job on the dashboard, it's much lighter and faster than the standard k8s dashboard :)

But I am wondering if it's possible to add support for viewing and managing the Replication Controller resource type. Currently, when I click an Owned By link and it's replicationcontroller/default/..., I get PAGE NOT FOUND.

Looking forward to an answer.
Keep up the great job :)

Feature Request: Key-Value Secret Editor


Openshift Console has a baller key-value secret editor UI.

It'd be nice if this feature could be adapted for k8dash as well.

To preview this feature in your kubernetes cluster: deploy the Openshift Console

apiVersion: apps/v1
kind: Deployment
metadata:
  name: origin-console
  namespace: kube-system
  labels:
    app: origin-console
spec:
  replicas: 1
  selector:
    matchLabels:
      app: origin-console
  template:
    metadata:
      labels:
        app: origin-console
    spec:
      containers:
        - name: origin-console-container
          image: quay.io/openshift/origin-console:4.6.0
          env:
            - name: BRIDGE_USER_AUTH
              value: disabled
            - name: BRIDGE_K8S_MODE
              value: off-cluster
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT
              value: https://kubernetes.default
            - name: BRIDGE_K8S_MODE_OFF_CLUSTER_SKIP_VERIFY_TLS
              value: "true"
            - name: BRIDGE_K8S_AUTH
              value: bearer-token
            - name: BRIDGE_K8S_AUTH_BEARER_TOKEN
              valueFrom:
                secretKeyRef:
                  name: admin-sa-token-abc123 # change this to your cluster token secret
                  key: token

---

kind: Service
apiVersion: v1
metadata:
  name: origin-console-svc
  namespace: kube-system
spec:
  selector:
    app: origin-console
  ports:
  - name: http
    port: 80
    targetPort: 9000

Ensure k8dash is scheduled on Linux nodes

In a mixed-OS cluster (e.g. Windows and Linux), we should ensure that k8dash is only scheduled on the Linux nodes, since the container image is Linux-only. One way to enforce this is sketched below.
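
A sketch, using the standard kubernetes.io/os node label (very old clusters use beta.kubernetes.io/os instead): add a nodeSelector to the k8dash Deployment's pod template:

kind: Deployment
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux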
