OpenShift Console

Codename: "Bridge"

quay.io/openshift/origin-console

The console is a more friendly kubectl in the form of a single page webapp. It also integrates with other services like monitoring, chargeback, and OLM. Some things that go on behind the scenes include:

  • Proxying the Kubernetes API under /api/kubernetes
  • Providing additional non-Kubernetes APIs for interacting with the cluster
  • Serving all frontend static assets
  • User Authentication

Quickstart

Dependencies:

  1. node.js >= 18 & yarn >= 1.20
  2. go >= 1.18
  3. oc or kubectl and an OpenShift or Kubernetes cluster
  4. jq (for contrib/environment.sh)
  5. Google Chrome/Chromium or Firefox for integration tests

Build everything:

This project uses Go modules, so you should clone the project outside of your GOPATH. To build both the frontend and backend, run:

./build.sh

Backend binaries are output to ./bin.
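
If you only need to rebuild the Go backend during development, a shorter loop is possible. A minimal sketch, assuming the build-backend.sh script referenced later in this README (see "Updating tectonic-console-builder image") sits at the repo root:

./build-backend.sh   # rebuild only the backend binaries into ./bin
./bin/bridge         # run the result (after configuring the environment as described below)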

Configure the application

The following instructions assume you have an existing cluster you can connect to. OpenShift 4.x clusters can be installed using the OpenShift Installer. More information about installing OpenShift can be found at https://try.openshift.com/. You can also use CodeReady Containers for local installs, or native Kubernetes clusters.

OpenShift (no authentication)

For local development, you can disable OAuth and run bridge with an OpenShift user's access token. If you've installed OpenShift 4.x, run the following commands to log in as the kubeadmin user and start a local console for development. Make sure to replace /path/to/install-dir with the directory you used to install OpenShift.

oc login -u kubeadmin -p $(cat /path/to/install-dir/auth/kubeadmin-password)
source ./contrib/oc-environment.sh
./bin/bridge

The console will be running at localhost:9000.

If you don't have kubeadmin access, you can use any user's API token, although you will be limited to that user's access and might not be able to run the full integration test suite.
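
If you'd rather not source contrib/oc-environment.sh, the same setup can be wired by hand. A rough, hedged sketch of the variables the script exports (the BRIDGE_K8S_MODE* names are taken from the contrib scripts and should be treated as an approximation; BRIDGE_K8S_AUTH_BEARER_TOKEN also appears in the Native Kubernetes section below):

export BRIDGE_K8S_MODE=off-cluster                                        # run Bridge outside the cluster
export BRIDGE_K8S_MODE_OFF_CLUSTER_ENDPOINT=$(oc whoami --show-server)    # API server URL
export BRIDGE_K8S_AUTH_BEARER_TOKEN=$(oc whoami --show-token)             # current user's token
./bin/bridge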

OpenShift (with authentication)

If you need to work on the backend code for authentication or you need to test different users, you can set up authentication in your development environment. Registering an OpenShift OAuth client requires administrative privileges for the entire cluster, not just a local project. You must be logged in as a cluster admin such as system:admin or kubeadmin.

To run bridge locally connected to an OpenShift cluster, create an OAuthClient resource with a generated secret and read that secret:

oc process -f examples/console-oauth-client.yaml | oc apply -f -
oc get oauthclient console-oauth-client -o jsonpath='{.secret}' > examples/console-client-secret

If the CA bundle of the OpenShift API server is unavailable, fetch the CA certificates from a service account secret. Otherwise copy the CA bundle to examples/ca.crt:

oc get secrets -n default --field-selector type=kubernetes.io/service-account-token -o json | \
    jq '.items[0].data."ca.crt"' -r | python -m base64 -d > examples/ca.crt
# Note: the command decodes with python's base64 module because the "base64" CLI differs between macOS and Linux

Finally run the console and visit localhost:9000:

./examples/run-bridge.sh

Enabling Monitoring Locally

In order to enable the monitoring UI and see the "Observe" navigation item while running locally, you'll need to run the OpenShift Monitoring dynamic plugin alongside Bridge. To do so, follow these steps:

  1. Clone the monitoring-plugin repo: https://github.com/openshift/monitoring-plugin
  2. cd to the monitoring-plugin root dir
  3. Run:
yarn && yarn start
  4. Run Bridge in another terminal following the steps above, but set the following environment variable before starting Bridge:
export BRIDGE_PLUGINS="monitoring-plugin=http://localhost:9001"

Updating tectonic-console-builder image

The tectonic-console-builder image needs to be updated whenever there is a change in the build-time dependencies and/or Go version.

In order to update tectonic-console-builder to a new version, e.g. v27, follow these steps:

  1. Update the tectonic-console-builder image tag (for example, tectonic-console-builder:27) in the files listed below:
    • .ci-operator.yaml
    • Dockerfile.dev
    • Dockerfile.plugins.demo
  2. Update the dependencies in the Dockerfile.builder file, e.g. v18.0.0.
  3. Run the ./push-builder.sh script to build and push the updated builder image to quay.io. Note: You can test the image using ./builder-run.sh ./build-backend.sh. To update the image on quay.io, you need edit permission to the quay.io/coreos/tectonic-console-builder repo.
  4. Lastly, update the mapping of the tectonic-console-builder image tag in the [openshift/release](https://github.com/openshift/release/blob/master/core-services/image-mirroring/supplemental-ci-images/mapping_supplemental_ci_images_ci) repository. Note: There could be a scenario where you have to add the new image reference to the "mapping_supplemental_ci_images_ci" file, e.g. to avoid CI downtime for an upcoming release cycle. Optional: Request an update of the rhel-8-base-nodejs-openshift-4.15 nodebuilder if it doesn't match the node version in tectonic-console-builder.
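
A quick sketch of the test-and-push loop from step 3 (both scripts live at the repo root, as referenced above):

./builder-run.sh ./build-backend.sh   # sanity-check that the new builder image can still build the backend
./push-builder.sh                     # build and push the builder image to quay.io (requires edit permission on the repo)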

CodeReady Containers

If you want to use CodeReady for local development, first make sure it is set up, and the OpenShift cluster is started.

To login to the cluster's API server, you can use the following command:

oc login -u kubeadmin -p $(cat ~/.crc/machines/crc/kubeadmin-password) https://api.crc.testing:6443

… or, alternatively, use the CRC daemon UI (Copy OC Login Command --> kubeadmin) to get the cluster-specific command.

Finally, prepare the environment, and run the console:

source ./contrib/environment.sh
./bin/bridge

Native Kubernetes

If you have a working kubectl on your path, you can run the application with:

export KUBECONFIG=/path/to/kubeconfig
source ./contrib/environment.sh
./bin/bridge

The script in contrib/environment.sh sets sensible defaults in the environment, and uses kubectl to query your cluster for endpoint and authentication information.

To configure the application to run by hand (or if environment.sh doesn't work for some reason), you can manually provide a Kubernetes bearer token with the following steps.

First get the secret ID that has a type of kubernetes.io/service-account-token by running:

kubectl get secrets

then get the secret contents:

kubectl describe secrets/<secret-id-obtained-previously>

Use this token value to set the BRIDGE_K8S_AUTH_BEARER_TOKEN environment variable when running Bridge.
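
As a convenience, the two lookups above can be chained in one go. A minimal, hedged sketch (assumes at least one kubernetes.io/service-account-token secret exists in the current namespace and that awk is available):

SECRET_ID=$(kubectl get secrets --field-selector type=kubernetes.io/service-account-token -o name | head -n 1)
export BRIDGE_K8S_AUTH_BEARER_TOKEN=$(kubectl describe "$SECRET_ID" | awk '/^token:/ {print $2}')
./bin/bridge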

Operator

In OpenShift 4.x, the console is installed and managed by the console operator.

Hacking

See CONTRIBUTING for workflow & convention details.

See STYLEGUIDE for file format and coding style guide.

Dev Dependencies

go 1.18+, nodejs/yarn, kubectl

Frontend Development

All frontend code lives in the frontend/ directory. The frontend uses node, yarn, and webpack to compile dependencies into self-contained bundles which are loaded dynamically at run time in the browser. These bundles are not committed to git. Tasks are defined in package.json in the scripts section and are aliased to yarn run <cmd> (in the frontend directory).

Install Dependencies

To install the build tools and dependencies:

cd frontend
yarn install

You must run this command once, and every time the dependencies change. node_modules are not committed to git.

Interactive Development

The following build task will watch the source code for changes and compile automatically. If you would like to disable hot reloading, set the environment variable HOT_RELOAD to false.

yarn run dev

If changes aren't detected, you might need to increase fs.inotify.max_user_watches. See https://webpack.js.org/configuration/watch/#not-enough-watchers. If you need to increase your watchers, it's common to see multiple errors beginning with Error from chokidar.
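
A hedged sketch for Linux hosts (the value is just a commonly used example; persist it under /etc/sysctl.d/ if you want it to survive reboots):

sudo sysctl fs.inotify.max_user_watches=524288   # raise the inotify watcher limit for the current boot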

Unit Tests

Run all unit tests:

./test.sh

Run backend tests:

./test-backend.sh

Run frontend tests:

./test-frontend.sh

Debugging Unit Tests

  1. cd frontend; yarn run build
  2. Add debugger; statements to any unit test
  3. yarn debug-test route-pages
  4. Open chrome://inspect/#devices in the Chrome browser and click the 'inspect' link in the Target (v10...) section.
  5. Chrome DevTools launches; click the Resume button to continue.
  6. Execution will break on any debugger; statements.

Integration Tests

Integration tests are implemented in Cypress.io.

To install Cypress:

cd frontend
yarn run cypress install

Launch Cypress test runner:

cd frontend
oc login ...
yarn run test-cypress-console

This will launch the Cypress Test Runner UI in the console package, where you can run one or all Cypress tests.

Important: when testing with authentication, set BRIDGE_KUBEADMIN_PASSWORD environment variable in your shell.
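
For example, when testing against a cluster installed as in the Quickstart section, the password file from the install directory can be reused (same placeholder path as earlier in this README):

export BRIDGE_KUBEADMIN_PASSWORD=$(cat /path/to/install-dir/auth/kubeadmin-password)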

Execute Cypress in different packages

An alternate way to execute Cypress tests is via test-cypress.sh, which takes a -p <package> parameter to allow execution in different packages. It can also run Cypress tests in the Test Runner UI or in headless mode:

console>./test-cypress.sh
Runs Cypress tests in Test Runner or headless mode
Usage: test-cypress [-p] <package> [-s] <filemask> [-h true]
  '-p <package>' may be 'console', 'olm' or 'devconsole'
  '-s <specmask>' is a file mask for spec test files, such as 'tests/monitoring/*'. Used only in headless mode when '-p' is specified.
  '-h true' runs Cypress in headless mode. When omitted, launches Cypress Test Runner
Examples:
  test-cypress.sh                                       // displays this help text
  test-cypress.sh -p console                            // opens Cypress Test Runner for console tests
  test-cypress.sh -p olm                                // opens Cypress Test Runner for OLM tests
  test-cypress.sh -h true                               // runs all packages in headless mode
  test-cypress.sh -p olm -h true                        // runs OLM tests in headless mode
  test-cypress.sh -p console -s 'tests/crud/*' -h true  // runs console CRUD tests in headless mode

When running in headless mode, Cypress will test using its integrated Electron browser, but if you want to use Chrome or Firefox instead, set BRIDGE_E2E_BROWSER_NAME environment variable in your shell with the value chrome or firefox.
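
For example, to run the console package headlessly in Chrome:

export BRIDGE_E2E_BROWSER_NAME=chrome
./test-cypress.sh -p console -h true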

More information on Console's Cypress usage

More information on DevConsole's Cypress usage

How the Integration Tests Run in CI

The end-to-end tests run against pull requests using ci-operator. The tests are defined in this manifest in the openshift/release repo and were generated with ci-operator-prowgen.

CI runs the test-prow-e2e.sh script, which runs test-cypress.sh.

test-cypress.sh runs all Cypress tests, in all 'packages' (console, olm, and devconsole), in headless mode via:

test-cypress.sh -h true

For more information on test-cypress.sh usage, please see Execute Cypress in different packages.

Internationalization

See INTERNATIONALIZATION for information on our internationalization tools and guidelines.

Deploying a Custom Image to an OpenShift Cluster

Once you have made changes locally, these instructions will allow you to push changes to an OpenShift cluster for others to review. This involves building a local image, pushing the image to an image registry, then updating the OpenShift cluster to pull the new image.

Prerequisites

  1. Docker v17.05 or higher for multi-stage builds
  2. An image registry like quay.io or Docker Hub

Steps

  1. Create a repository in the image registry of your choice to hold the image.
  2. Build the image: docker build -t <your-image-name> <path-to-repository | url>. For example:
docker build -t quay.io/myaccount/console:latest .
  3. Push the image to your image registry: docker push <your-image-name>. Make sure docker is logged into your image registry! For example:
docker push quay.io/myaccount/console:latest
  4. Put the console operator in an unmanaged state:
oc patch consoles.operator.openshift.io cluster --patch '{ "spec": { "managementState": "Unmanaged" } }' --type=merge
  5. Update the console Deployment with the new image:
oc set image deploy console console=quay.io/myaccount/console:latest -n openshift-console
  6. Wait for the changes to roll out:
oc rollout status -w deploy/console -n openshift-console

You should now be able to see your development changes on the remote OpenShift cluster!

When done, you can put the console operator back in a managed state to remove the custom image:

oc patch consoles.operator.openshift.io cluster --patch '{ "spec": { "managementState": "Managed" } }' --type=merge

Dependency Management

Dependencies should be pinned to an exact semver, sha, or git tag (eg, no ^).

Backend

Whenever making vendor changes:

  1. Finish updating dependencies & writing changes
  2. Commit everything except vendor/ (eg, server: add x feature)
  3. Make a second commit with only vendor/ (eg, vendor: revendor)

Adding new or updating existing backend dependencies:

  1. Edit the go.mod file to the desired version (most likely a git hash)
  2. Run go mod tidy && go mod vendor
  3. Verify update was successful. go.sum will have been updated to reflect the changes to go.mod and the package will have been updated in vendor.
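
As a sketch of steps 1-2: instead of hand-editing go.mod, go get can pin a module at a specific version or commit before vendoring (the module path and hash below are purely hypothetical placeholders):

go get github.com/example/somedep@1a2b3c4d5e6f   # hypothetical module and commit
go mod tidy && go mod vendor
git status go.mod go.sum vendor/                 # review what changed before committing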

Frontend

Add new frontend dependencies:

yarn add <package@version>

Update existing frontend dependencies:

yarn upgrade <package@version>

To upgrade yarn itself, download a new yarn release from https://github.com/yarnpkg/yarn/releases, replace the release in frontend/.yarn/releases with the new version, and update yarn-path in frontend/.yarnrc.
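
A hedged sketch of that procedure (the version number is only an example; check the actual asset name on the release page and verify its checksum):

cd frontend
curl -LO https://github.com/yarnpkg/yarn/releases/download/v1.22.19/yarn-1.22.19.js
mv yarn-1.22.19.js .yarn/releases/
# then point yarn-path in frontend/.yarnrc at .yarn/releases/yarn-1.22.19.js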

@patternfly

Note that when upgrading @patternfly packages, we've seen in the past that it can cause the JavaScript heap to run out of memory, or the bundle to become too large if multiple versions of the same @patternfly package are pulled in. To increase efficiency, run the following after updating packages:

npx yarn-deduplicate --scopes @patternfly

Supported Browsers

We support the latest versions of the following browsers:

  • Edge
  • Chrome
  • Safari
  • Firefox

IE 11 and earlier are not supported.

Frontend Packages

console's Issues

Chrome crashes with out of memory exception

We have an OpenShift cluster with the OpenShift Console deployed. We have multiple namespaces, and one of the namespaces has around ~170 pods.

When logged in, it's terribly slow and hangs without any response. After a while the Chrome window goes to the screen with the "Aw snap, something went wrong while displaying this webpage" message.

If I have the Developer Tools open, it automatically pauses at a breakpoint with the message "Paused before potential out-of-memory crash", as shown below.

The Memory tab shows around 2Gi, as shown in the next attachments.

(screenshots attached)

Also attached are the logs from the DevTools console:
developertools.log

We can't display/access the Web Terminal of a container

Info

OpenShift Console: master branch
Cloud platform: k8s 1.13

Description

Issue: No terminal console is displayed, as you can see below, when we try to access a pod's terminal.

(screenshot attached)

The console was launched locally and logs the following:

2018/12/18 09:46:06 cmd/main: cookies are not secure because base-address is not https!
2018/12/18 09:46:06 cmd/main: running with AUTHENTICATION DISABLED!
2018/12/18 09:46:06 cmd/main: Binding to 0.0.0.0:9000...
2018/12/18 09:46:06 cmd/main: not using TLS
2018/12/18 09:46:07 CheckOrigin: Proxy has no configured Origin. Allowing origin [http://localhost:9000] to ws://192.168.65.42:8080/api/v1/namespaces/my-spring-app/pods?watch=true&fieldSelector=metadata.name%3Dfruit-backend-sb-66ff974cd-2lwjl
2018/12/18 09:46:08 CheckOrigin: Proxy has no configured Origin. Allowing origin [http://localhost:9000] to ws://192.168.65.42:8080/api/v1/namespaces/component-operator/pods?watch=true&fieldSelector=metadata.name%3Dcomponent-operator-59c97d5cb7-h8c8z
2018/12/18 09:46:13 http: proxy error: context canceled
2018/12/18 09:46:14 CheckOrigin: Proxy has no configured Origin. Allowing origin [http://localhost:9000] to ws://192.168.65.42:8080/api/v1/namespaces?watch=true&resourceVersion=22021
2
...

I can access a pod remotely using the kubectl command:

 kubectl exec -it fruit-backend-sb-66ff974cd-2lwjl -n my-spring-app -- /bin/bash
/usr/bin/id: cannot find name for user ID 1001
[I have no name!@fruit-backend-sb-66ff974cd-2lwjl ~]$ ls -la
total 28
drwxrwx--- 4 1001 root 4096 Dec 18 08:11 .
drwxr-xr-x 6 root root 4096 Dec 18 08:11 ..
-rw------- 1 1001 root   16 Dec 18 08:11 .bash_history
-rw-rw-r-- 1 1001 root   18 Sep 26  2017 .bash_logout
-rw-rw-r-- 1 1001 root  193 Sep 26  2017 .bash_profile
-rw-rw-r-- 1 1001 root  231 Sep 26  2017 .bashrc
drwxrwxr-x 2 1001 root 4096 Jul 16 09:30 .m2

Add tooltip to the "copy to clipboard" component used with Secrets, ConfigMaps, Mobile, etc

Currently when a user clicks the copy icon, there is no user feedback. Can we add a tooltip so the regular state says "Copy to Clipboard" and once clicked, the tooltip changes to "Copied"?

Note: this was discussed in PR #50 originally, but there was an issue with the tooltip getting cut off:
https://user-images.githubusercontent.com/1668218/40371608-289552ec-5de3-11e8-8080-4b2dcfa8aa8a.png

The mobile team is looking to use the same component, so we would like to get a tooltip implemented! cc @openshift/team-ux-review @spadgett

Builds should show their pod metrics if they can

It's frustrating not seeing them. However, note that for builds the prometheus query is for pod_name="X",container_name="" (we have to show the full pod CPU because the build is run under the pod cgroup, not a container cgroup).

Blank page displayed - http://localhost:9000/overview/ns/odo

When I use the console on top of Kubernetes v1.11 and click on the Overview button of a namespace, I get this error in the browser console and the screen stays white:

index.tsx:57 destroying websocket: /api/kubernetes/apis/apps/v1/namespaces/odo/statefulsets?watch=true&resourceVersion=10213
index.tsx:57 statefulsets---{"ns":"odo"} timed out - restarting polling
react-dom.production.min.js:13 TypeError: Cannot read property 'data' of undefined
    at t.createDeploymentConfigItems (index.tsx:59)
    at t.createOverviewData (index.tsx:59)
    at t.componentDidUpdate (index.tsx:59)
    at commitLifeCycles (react-dom.production.min.js:13)
    at w (react-dom.production.min.js:13)
    at S (react-dom.production.min.js:13)
    at x (react-dom.production.min.js:13)
    at b (react-dom.production.min.js:13)
    at y (react-dom.production.min.js:13)
    at p (react-dom.production.min.js:13)

(screenshot attached)

Using browser Find will find Catalog Items not being displayed

The patternfly-react CatalogTileViewCategory component implementation caused the tiles to be rendered but not visible. This meant the browser Find was still able to find those items. The fix for CatalogTileViewCategory is in patternfly-react-extensions version 2.9.1.

To fix this issue the catalog view needs to pass CatalogTile components to the CatalogTileViewCategory component which now work as links if an href or onClick are passed.

Project display name and description unused

Unique names for projects are used in all places and display names only appear in the details, which seems to be the opposite of their intended purpose. Additionally, project descriptions don't appear anywhere that I've been able to find. I wonder if we should emphasize these fields more, or remove them if they are not used.

Swapped position of actionMenu and resource-icon

Often, after creating the first instance of a specific resource (Service, Secret, StatefulSet, ...), the position of the actionMenu and resource-icon is swapped. Even after navigating to some other page and back to the resource list page, the order is still swapped. Only after refreshing the page does the order go back to normal.
Swapped position: (screenshot)

Normal position: (screenshot)

Also, the size of the resource-icon is incorrect since it's bigger than the intended size.

Show chained builds in Overview

Right now, a chained build only shows the last build step as part of the Deployment Config; in our case, where the first step can take several minutes, it seems as though nothing is happening for a long time. You can of course check the Builds menu, but it would be nice if it were possible to have this directly visible in the Overview.

ImageStreamTag display won't work quite like expected

Image stream overview displays a list of tags, which are clickable. However, that takes me to the image stream tags view. On that view, I don't see historical tags (you need the image stream for that). Another problem is that you can have spec or status tags, and both need to be viewable (i.e., the following states):

  1. You only have a status tag on the image stream called foo (can have history, image stream tag will return something)
  2. You only have a spec tag on the image stream called foo (no history, image stream tag api returns 404)
  3. You have both spec and status tag called foo (can have history, can have spec, and need to show image metadata).

At some point we will release a v2 image stream tag api that corrects the gaps in the current v1 image stream tag. Until then, you'll need to do the following:

  1. Fetch image stream
  2. Fetch image stream image if the latest status tag has an image definition
  3. Show the history from status, the spec (if it exists), and the image metadata from image stream image.

test cluster console api

I want to test the OKD 3.11 cluster console API, but I don't know which header it uses; it seems that the token is not used?

auth: link OAuth2 state parameter with user agent

(NOTE: we have someone internally who's been assigned this)

Currently we generate a "state" redirect parameter because some providers reject requests without them. However we don't actually link the state parameter to the user's session, leaving us vulnerable to certain cross-site request forgery attacks.

console/auth/auth.go

Lines 232 to 238 in 9c80cb3

// TODO(ericchiang): actually start using the "state" parameter correctly
var randData [4]byte
if _, err := io.ReadFull(rand.Reader, randData[:]); err != nil {
	panic(err)
}
state := hex.EncodeToString(randData[:])
http.Redirect(w, r, a.oauth2Client.AuthCodeURL(state), http.StatusSeeOther)

https://tools.ietf.org/html/rfc6749#section-4.1.1

Cookie the user with something that can link back to the "state" parameter we pass, per the recommendations in RFC 6819.

https://tools.ietf.org/html/rfc6819#section-5.3.5

5.3.5. Link the "state" Parameter to User Agent Session

The "state" parameter is used to link client requests and prevent
CSRF attacks, for example, attacks against the redirect URI. An
attacker could inject their own authorization "code" or access token,
which can result in the client using an access token associated with
the attacker's protected resources rather than the victim's (e.g.,
save the victim's bank account information to a protected resource
controlled by the attacker).

The client should utilize the "state" request parameter to send the
authorization server a value that binds the request to the user
agent's authenticated state (e.g., a hash of the session cookie used
to authenticate the user agent) when making an authorization request.
Once authorization has been obtained from the end user, the
authorization server redirects the end-user's user agent back to the
client with the required binding value contained in the "state"
parameter.

The binding value enables the client to verify the validity of the
request by matching the binding value to the user agent's

Additional margin under nav section heading

There's some additional margin between the section title and the items, for instance, between "Workloads" and "Pods" below. (Current is on the left, previous on the right.)

Just checking if this is intentional or an oversight.

(screenshot)

cc @openshift/team-ux-review

Can't create a new namespace when creating an app

  1. Go to Home - Catalog
  2. Pick Python
  3. Click 'Create Application'

The namespace selector doesn't allow creating a new namespace/project, so I have to go back, create one manually, and create the app again.

Inconsistent capitalization of "YAML" throughout console

On details pages: (screenshot)

In secrets dropdown: (screenshot)

In Application console actions dropdown: (screenshot)

In template suggestions on creation pages: (screenshot)

So far this last one is the only use of "yaml" that I've found, but there may be others hidden in the console. Was there a specific reason for this styling, or is this an error?

Console catalog display duplicated service

Environment info

# oc version
oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://openshift-1:8443
openshift v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657

Problem description

In the openshift namespace, there are two identical services: OpenJDK 8 (screenshot).

The first one is not right (screenshot).

The other one is right (screenshot).

Additional info

The openjdk template source code is here.

PodInitializing is not generally an error state, messaging is confusing

(screenshot)

In this case the pod has failed

status:
  phase: Failed
  conditions:
    - type: Initialized
      status: 'False'
      lastProbeTime: null
      lastTransitionTime: '2018-06-08T02:35:19Z'
      reason: ContainersNotInitialized
      message: 'containers with incomplete status: [initupload place-tools]'
    - type: Ready
      status: 'False'
      lastProbeTime: null
      lastTransitionTime: '2018-06-08T02:35:19Z'
      reason: ContainersNotReady
      message: 'containers with unready status: [test sidecar]'
    - type: PodScheduled
      status: 'True'
      lastProbeTime: null
      lastTransitionTime: '2018-06-08T02:35:19Z'
  hostIP: 10.142.0.6
  podIP: 172.16.14.101
  startTime: '2018-06-08T02:35:19Z'
  initContainerStatuses:
    - name: clonerefs
      state:
        terminated:
          exitCode: 0
          reason: Completed
          startedAt: '2018-06-08T02:35:21Z'
          finishedAt: '2018-06-08T02:35:25Z'
          containerID: >-
            docker://69859fd7510e3a6bd5e8d2b619c31847ac1acd7f2dafeb5eb9b777ef4206bb50
      lastState: {}
      ready: true
      restartCount: 0
      image: 'registry.svc.ci.openshift.org/ci/clonerefs:latest-json'
      imageID: >-
        docker-pullable://registry.svc.ci.openshift.org/ci/clonerefs@sha256:27418befcbf1561f32d97eb47c94d8ffebb4747c19cb8e9ab43f8d594165e0f3
      containerID: >-
        docker://69859fd7510e3a6bd5e8d2b619c31847ac1acd7f2dafeb5eb9b777ef4206bb50
    - name: initupload
      state:
        terminated:
          exitCode: 1
          reason: Error
          startedAt: '2018-06-08T02:35:26Z'
          finishedAt: '2018-06-08T02:35:27Z'
          containerID: >-
            docker://ab37a33572b4fd62793ac3e983e6b560ab9656c4e31287447d2ac7b18fcdbf5d
      lastState: {}
      ready: false
      restartCount: 0
      image: 'registry.svc.ci.openshift.org/ci/initupload:latest-json'
      imageID: >-
        docker-pullable://registry.svc.ci.openshift.org/ci/initupload@sha256:c249cc0438dbd12923b1b2190968f6c3fcb8a06dcfb2060b52ade27924863f21
      containerID: >-
        docker://ab37a33572b4fd62793ac3e983e6b560ab9656c4e31287447d2ac7b18fcdbf5d
    - name: place-tools
      state:
        waiting:
          reason: PodInitializing
      lastState: {}
      ready: false
      restartCount: 0
      image: 'registry.svc.ci.openshift.org/ci/entrypoint:latest-json'
      imageID: ''
  containerStatuses:
    - name: sidecar
      state:
        waiting:
          reason: PodInitializing
      lastState: {}
      ready: false
      restartCount: 0
      image: 'registry.svc.ci.openshift.org/ci/sidecar:latest-json'
      imageID: ''
    - name: test
      state:
        waiting:
          reason: PodInitializing
      lastState: {}
      ready: false
      restartCount: 0
      image: >-
        docker-registry.default.svc:5000/ci/ci-operator@sha256:a3ba73101306cf64bb84f0d2270fd3f31186a97c56bc568bcda52f185eaf645a
      imageID: ''
  qosClass: BestEffort

But showing PodInitializing here is a bit confusing since it's usually a forward action (the pod is initializing). I think "InitContainerFailed" or "InitializationFailed" would be more appropriate.

Nav item label and icon inconsistencies between the two consoles

In flipping between the two consoles, I noticed we use a couple of the same primary nav labels to mean different things in the two different consoles and, in the case of "Applications", the icons differ.

@spadgett and I briefly discussed these issues this morning, and would like feedback from @openshift/team-ux-review and anyone interested on how to best resolve.

Overview

  • means project overview in origin-web-console (owc) (screenshot)
  • means cluster overview in console (screenshot)

Presumably we'd want to keep "Overview" to mean project overview for the forthcoming project overview in console? What do we change "Overview" in console to? What icon goes with the new label?

Applications

  • means workloads minus builds plus networking (services and routes) in owc, uses cubes icon (screenshot)
  • means things installed via open cloud services and related nav items in console, and uses a layers icon that closely resembles pficon-build (screenshot)

@spadgett and I couldn't come up with a suggestion for the label change we liked. But presumably we don't want to use the layers icon in console at all since it so closely resembles pficon-build? What do we use instead?

Add new drag and drop handle icon for reordering environment variables

I'm working with Mary Shakshober (UXD Visual Design Intern) to get some drag and drop handle icons created and put into PatternFly for use in reordering environment variables, here: PR #88

Looking to have the icons designed by 6/15, and put into PatternFly during PF's next sprint! Will update here when those icons are available to use.

Annotations on ImageStream overview are hard to view

The annotations link on ImageStream overview pops up a modal dialog that only shows 1 row and 30 columns for both name and value, which isn't enough to really convey annotations. It needs to be possible to view annotations easily, and those annotations can be up to hundreds of characters. A modal dialog really doesn't allow that to happen.

Error "Required value" for field "metadata.namespace"

After applying kubevirt-web-ui [1] to an OCP cluster, it is possible to create a virtual machine at http://kubevirt-web-ui-kweb-ui.cloudapps.example.com. However, creating a virtual machine from a typical yaml [2] file fails with the error Error "Required value" for field "metadata.namespace".

[1] https://github.com/kubevirt/web-ui
[2] https://raw.githubusercontent.com/kubevirt/kubevirt/master/cluster/examples/vm-cirros.yaml

Additional notes:
Adding the namespace to the VM yaml makes creation work.

Misaligned panel in Dropdown component

The placement of the Dropdown component's menu is misaligned. This is caused by an improper element stack in the Dropdown's underlying components and will require their re-ordering.
Screens: (screenshots attached)
