
hawtio-online's Introduction

Hawtio Online

A Hawtio console that eases the discovery and management of hawtio-enabled applications deployed on OpenShift and Kubernetes.

Hawtio Online overview

Hawtio-enabled application examples

A hawtio-enabled application is an application composed of one or more containers that have a port named jolokia and expose the Jolokia API.
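
For illustration, such a container declares a named jolokia port in its pod spec. A minimal sketch, assuming the default Jolokia port 8778 (the actual port number and the rest of the container spec are application-specific):

      ports:
        - containerPort: 8778
          name: jolokia
          protocol: TCP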

See the separate examples project to learn how to set up a hawtio-enabled application for Hawtio Online.

Preparation

Prior to deployment, depending on the cluster type, you need to generate either proxying or serving certificates.

  • Proxying: Used to secure the communication between Hawtio Online and the Jolokia agents. A client certificate is generated and mounted into the Hawtio Online pod with a secret, to be used for TLS client authentication.
  • Serving: Used to secure the communication between the client and Hawtio Online.

OpenShift

Proxying certificates

For OpenShift, a client certificate must be generated using the service signing certificate authority private key.

Run the following script to generate and set up a client certificate for Hawtio Online:

./scripts/generate-proxying.sh

or, if you have Yarn installed, the following does the same thing:

yarn gen:proxying

Serving certificates

For OpenShift, a serving certificate is automatically generated for your Hawtio Online deployment using the service signing certificate feature.
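
The service serving certificate feature works by annotating the Service, so no manual step is needed here. Purely for illustration, such an annotation looks roughly like this (the names below are illustrative, not taken from the actual manifests under deploy/):

      apiVersion: v1
      kind: Service
      metadata:
        name: hawtio-online
        annotations:
          # OpenShift generates a TLS key/cert pair into the named secret
          service.beta.openshift.io/serving-cert-secret-name: hawtio-online-tls-serving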

Kubernetes

Proxying certificates

For Kubernetes, proxying certificates are disabled by default, so you don't need to go through these steps.

Warning

This means that client certificate authentication between Hawtio Online and the Jolokia agents is not available by default for Kubernetes, and the Jolokia agents need to disable client certificate authentication so that Hawtio Online can connect to them. You can still use TLS for securing the communication between them.

It is possible to use a proxying client certificate for Hawtio Online on Kubernetes; it requires you to generate or provide a custom CA for the certificate and then mount/configure it into the Jolokia agent for its client certificate authentication.

Serving certificates

For Kubernetes, a serving certificate must be generated manually. Run the following script to generate and set up a certificate for Hawtio Online:

./scripts/generate-serving.sh [-k tls.key] [-c tls.crt] [SECRET_NAME] [CN]

or:

yarn gen:serving [-k tls.key] [-c tls.crt] [SECRET_NAME] [CN]

You can provide an existing TLS key and certificate by passing parameters -k tls.key and -c tls.crt respectively. Otherwise, a self-signed tls.key and tls.crt will be generated automatically in the working directory and used for creating the serving certificate secret.

You can optionally pass SECRET_NAME and CN to customise the secret name and Common Name used in the TLS certificate. The default secret name is hawtio-online-tls-serving and CN is hawtio-online.hawtio.svc.
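
For example, to reuse an existing key pair and customise both the secret name and the CN (the file names and values below are placeholders):

./scripts/generate-serving.sh -k my-tls.key -c my-tls.crt my-hawtio-tls-serving hawtio-online.myns.svc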

Manual steps

Instead of running the scripts you can choose to perform everything manually.

For manual steps, see Generating Certificates Manually.

Deployment

Now you can run the following instructions to deploy the Hawtio Online console on your OpenShift/Kubernetes cluster.

There are two deployment modes you can choose from: cluster and namespace.

  • Cluster: The Hawtio Online console can discover and connect to hawtio-enabled (1) applications deployed across multiple namespaces / projects.
    OpenShift: Uses an OAuth client that requires the cluster-admin role to be created. By default, this requires the generation of a client certificate, signed with the service signing certificate authority, prior to the deployment. See the Preparation - OpenShift section for more information.
  • Namespace: Restricts the Hawtio Online console to a single namespace / project, and as such acts as a single-tenant deployment.
    OpenShift: Uses a service account as the OAuth client, which only requires the admin role in a project to be created. By default, this requires the generation of a client certificate, signed with the service signing certificate authority, prior to the deployment. See the Preparation - OpenShift section for more information.

(1) Containers with a port named jolokia that expose the Jolokia API.

OpenShift

You may want to read how to get started with the CLI for more information about the oc client tool.

To deploy the Hawtio Online console on OpenShift, follow the steps below.

Cluster mode

If you have Yarn installed:

yarn deploy:openshift:cluster

otherwise (two commands):

oc apply -k deploy/openshift/cluster/
./deploy/openshift/cluster/oauthclient.sh

Namespace mode

If you have Yarn installed:

yarn deploy:openshift:namespace

otherwise:

oc apply -k deploy/openshift/namespace/

You can obtain the status of your deployment by running:

$ oc status
In project hawtio on server https://192.168.64.12:8443

https://hawtio-online-hawtio.192.168.64.12.nip.io (reencrypt) (svc/hawtio-online)
  deployment/hawtio-online deploys hawtio/online:latest
    deployment #1 deployed 2 minutes ago - 1 pod

Open the route URL displayed above from your Web browser to access the Hawtio Online console.

Kubernetes

You may want to read how to get started with the CLI for more information about the kubectl client tool.

To deploy the Hawtio Online console on Kubernetes, follow the steps below.

Cluster mode

If you have Yarn installed:

yarn deploy:k8s:cluster

otherwise:

kubectl apply -k deploy/k8s/cluster/

Namespace mode

If you have Yarn installed:

yarn deploy:k8s:namespace

otherwise:

kubectl apply -k deploy/k8s/namespace/

Authentication

Hawtio Online currently supports two authentication modes, oauth and form, which are configured through the HAWTIO_ONLINE_AUTH environment variable on the Deployment.

  • oauth: Authenticates requests through the OpenShift OAuth server. Available only on OpenShift.
  • form: Authenticates requests with bearer tokens through the Hawtio login form.
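
For example, to enable form authentication you would set the variable on the Hawtio Online Deployment. A minimal sketch of the relevant fragment (the container name hawtio-online is an assumption):

      spec:
        template:
          spec:
            containers:
              - name: hawtio-online
                env:
                  - name: HAWTIO_ONLINE_AUTH
                    value: form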

Creating user for Form authentication

With the Form authentication mode, any user with a bearer token can be authenticated. See Authenticating for different ways to provide users with bearer tokens.

As an example, you can create a ServiceAccount to serve as a user for logging in to the Hawtio console, as sketched below. See Creating a Hawtio user for Form authentication for more details.
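
A minimal sketch of such a ServiceAccount (the name and namespace are placeholders):

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: hawtio-user
        namespace: hawtio

A bearer token for this account can then be obtained, for example with kubectl create token hawtio-user -n hawtio on recent Kubernetes versions, and entered into the Hawtio login form.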

RBAC

See RBAC.

Development

Tools

You must have the following tools installed:

  • Node.js (version 18 or higher)
  • Yarn (version 3.6.0 or higher)

Build

yarn install

Install

In order to authenticate and obtain OAuth access tokens so that the Hawtio console is authorized to watch for hawtio-enabled (1) applications deployed in your cluster, you have to create an OAuth client that matches localhost development URLs.

Cluster mode

oc create -f oauthclient.yml

See OAuth Clients for more information.
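
The repository provides the actual manifest (oauthclient.yml). Purely as an illustration of the shape of such a resource, and not its actual contents, an OAuth client permitting a local development URL might look roughly like this (the name and redirect URI are assumptions):

      apiVersion: oauth.openshift.io/v1
      kind: OAuthClient
      metadata:
        name: hawtio-online-dev
      grantMethod: auto
      redirectURIs:
        - http://localhost:2772/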

Namespace mode

oc create -f serviceaccount.yml

See Service Accounts as OAuth Clients for more information.

Run

Cluster mode

yarn start --master=`oc whoami --show-server` --mode=cluster

Namespace mode

yarn start --master=`oc whoami --show-server` --mode=namespace --namespace=`oc project -q`

You can access the console at http://localhost:2772/.

Disable Jolokia authentication for deployments (dev only)

In order for a local hawtio-online to detect the hawtio-enabled applications, each application container needs to be configured with the following environment variables:

AB_JOLOKIA_AUTH_OPENSHIFT=false
AB_JOLOKIA_PASSWORD_RANDOM=false
AB_JOLOKIA_OPTS=useSslClientAuthentication=false,protocol=https

The following script lets you apply the above environment variables to all the deployments with a label provider=fabric8 in a batch:

./scripts/disable-jolokia-auth.sh
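
The script is essentially a batch environment update. A rough, illustrative equivalent for Deployments (the actual script may target different resource kinds) would be:

oc set env deployment -l provider=fabric8 AB_JOLOKIA_AUTH_OPENSHIFT=false AB_JOLOKIA_PASSWORD_RANDOM=false AB_JOLOKIA_OPTS=useSslClientAuthentication=false,protocol=https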

hawtio-online's People

Contributors

abkieling, astefanutti, cunningt, dependabot[bot], djcoleman, johnpoth, mmelko, mmuzikar, phantomjinx, prahaladhchandrahasan, scuilion, tadayosi, valdar, vinzent

hawtio-online's Issues

Namespace selector feature

I'd like to be able to configure a namespace selector, so that only pods from namespaces with matching labels are displayed.

Our OpenShift cluster spans multiple network zones and network traffic between the zones is disallowed (this includes SDN traffic). Although I might have rights for pods in DEV and PROD, the Hawtio Online in DEV only ever has access to DEV pods, and the PROD Hawtio Online will never have access to DEV. Having pods listed that are not accessible is a usability issue and also causes some hiccups in the browser. For example, Chrome waits a long time showing "Waiting on available socket ..." when opening the console of a pod ("Connect"), probably until some of the blocked connections time out.

I'm trying to create a PR for this. I managed to locate where the selector needs to be injected: https://github.com/vinzent/hawtio-online/commit/0085f7130785b9b7a11a5a5c2fcb80f46cb83510 . Now I would appreciate some pointers on where I could put the configuration. I tried to use _.get(window, "hawtconfig.namespaceSelector", {}) (https://github.com/vinzent/hawtio-online/commit/26241b13f3b175b14ea077b02b5b5aa6931498f5) - but hawtconfig isn't available in that context.

Relevant PRs for this issue:

Support different Jolokia ports & paths

If I'm not mistaken, hawtio-online currently hardcodes the Jolokia port to 8778 and assumes it's available at /jolokia. However, for instance, the Artemis broker exposes the Jolokia endpoint on the following port and at the path /console/jolokia by default:

      ports:
        - containerPort: 8161
          name: console-jolokia
          protocol: TCP

The main focus here is to recognise Artemis brokers on OpenShift, but for flexibility it would also be good if hawtio-online could pick up arbitrary port numbers and Jolokia paths, provided they are specified in the application deployment spec.

Login should not be prompted when getting 401 in the Integration view

Sometimes an idle Integration view gets 401 unauthorized errors in the background and thus shows the login prompt like the following. However, in the context of hawtio-online, entering a username/password in the prompt doesn't resolve the error. We should rather just close the view and reconnect to the pod from the Online view again. It would therefore be better if the prompt simply stated that an unauthorized error occurred in the background and closed the view with a single OK button, asking the user to reconnect to the pod.

[Screenshot from 2020-11-18 14-01-21]

Broken Links in Readme

The following are broken links:

  • get started with the CLI
  • service-signing-certificate
  • OAuth Clients

Support user-provided SSO/identity providers

This might be a duplicate of #31, but the idea is to enable integration with SSO/identity providers that the user has brought to a k8s cluster for their own apps. I think the reason it's currently hard to support vanilla k8s (#23) for hawtio-online is its tight integration with OpenShift's OAuth mechanism, and thus its being part of the OpenShift infra components. Unlike other monitoring components such as Prometheus and Grafana, hawtio-online should ideally be part of user applications and thus share the same authentication mechanism with them.

Hawtio online with Istio and strict mTLS - solution worth documenting.

We have Hawtio Online running in OKD 4 with the Istio sidecar injected so it can access the rest of the mesh, which uses strict mTLS.
Service discovery works well and it discovered all the services.
However, we cannot connect to any Jolokia endpoint and we can see lots of errors in the logs like the one below:

upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.6, server: localhost, request: "POST /management/namespaces/X/pods/https:X-0:8443/jolokia/ HTTP/1.1", subrequest: "/proxy/https:10.1.0.211:8443/jolokia/", upstream: "https://10.1.0.211:8443/jolokia/", host: "hawtio.X", referrer: "https://hawtio.X/online/online/discover"

From the above I gather that the Jolokia endpoint is being accessed via its pod IP, which doesn't work out of the box in our case.

The solution to the problem is creating a headless service, which then allows accessing individual pods by IP.
It might be worth documenting the above so others don't have to waste time.
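
For reference, a headless service is simply a Service with clusterIP set to None. A minimal sketch with placeholder names (the port should match whatever the Jolokia agent listens on):

      apiVersion: v1
      kind: Service
      metadata:
        name: my-app-headless
      spec:
        clusterIP: None
        selector:
          app: my-app
        ports:
          - name: jolokia
            port: 8778
            targetPort: 8778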

It might also be worth changing the hawtio operator to automatically create headless services when some parameter is set to true in the operator config.

Thanks.

DeploymentConfig image reference triggers unnecessary deployments

I want to run oc apply -f deployment-cluster-os4.yml multiple times, but a second run will trigger two additional deployments of the DeploymentConfig.

Root cause

The ImageChange trigger injects the image reference into .spec.template.spec.containers[].image.

If you run oc apply -f deployment-cluster-os4.yml a second time, the image reference is reset to hawtio/online, triggering a second deployment. That deployment is canceled immediately because the ImageChange trigger replaces the image reference again, triggering a third deployment.

Solution

If you want to run oc apply -f deployment-cluster-os4.yml in a declarative way, you need to remove the .spec.template.spec.containers[].image=hawtio/online key/value from the DeploymentConfig.
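
For illustration, the fragment in question looks roughly like this (the container name below is an assumption); dropping the image line leaves the ImageChange trigger in sole control of the image reference:

      spec:
        template:
          spec:
            containers:
              - name: hawtio-online
                # remove this key/value for declarative applies:
                image: hawtio/online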

Environment

OpenShift 3.11 (probably same issue also on OpenShift 4, but not tested), Fuse 7.6

Proxying apiserver requests allows eavesdropping on client bearer tokens

Reading nginx-gateway.conf I understand that apiserver communication is routed this way: user -> browser -> hawtio-online nginx proxy -> kube-apiserver

Proxying apiserver requests enables a deployer of hawtio-online (or Fuse Console 7.x) to eavesdrop on any apiserver bearer token.

You just need a user with more privileges than you (think: cluster-admin) to log in to your hawtio-online console. If you have prepared it to log the header, you are then able to make requests as that user.

Why is it technically required to proxy the apiserver requests?

Why can't the browser app connect directly to the apiserver instead?

Hawtio Online: Internal Server Error

I have a number of services running on a Kubernetes cluster. One of them has Camel routes which I need to monitor, and I am using Hawtio Online for it. However, whenever I restart my service pod, Hawtio shows an internal server error and I need to redeploy Hawtio Online to get it working again. Does anyone know the reason and a solution for this issue? Also, where is the default location of the log files?

hawtio console doesn't have the FormAuth plugin

Hi, I deployed the hawtio console on Kubernetes, but I can't log in with a Kubernetes ServiceAccount.

Here are the steps I performed:

./scripts/generate-serving.sh
kubectl apply -k deploy/k8s/namespace

My Kubernetes resources are already generated:
[screenshot]

The problem:
[screenshot]

I can't log out or log in. Please help me...

Release v1.13.0

It's been more than a year since the last release. Let's cut another release. It will include new features, including Kubernetes support.

Improve UX for Kubernetes support

Follow-up on #23.

From #88 (review):

The major area I see that remains is smoothing the user experience, in terms of configuration and documentation.

For the configuration, I can think of:

  • Having a Kustomize configuration for vanilla Kubernetes
  • Having an extra overlay for OpenShift (and maybe removing the existing resources at the project root)
  • Deciding how to handle HTTPS serving or not:

For the documentation, I can think of:

  • Updating the README to reflect that PR work
  • Adding a section on how to create a ServiceAccount and retrieve its token to be used in the form login

Already done:

  • Having a Kustomize configuration for vanilla Kubernetes
  • Having an extra overlay for OpenShift

To do:

  • Removing the existing resources at the project root
  • Deciding how to handle HTTPS serving or not
  • Updating the README to reflect that PR work
  • Adding a section on how to create a ServiceAccount and retrieve its token to be used in the form login
