stackrox / stackrox

The StackRox Kubernetes Security Platform performs a risk analysis of the container environment, delivers visibility and runtime alerts, and provides recommendations to proactively improve security by hardening the environment.

License: Apache License 2.0

Shell 3.01% Smarty 0.52% Makefile 0.34% Go 65.30% Dockerfile 0.16% Python 0.42% Groovy 4.37% Java 0.01% JavaScript 9.32% HTML 0.01% TypeScript 16.28% CSS 0.23% Tcl 0.04% XSLT 0.01% C 0.01%
containers hacktoberfest k8s kubernetes security

stackrox's Introduction



StackRox Kubernetes Security Platform

The StackRox Kubernetes Security Platform performs a risk analysis of the container environment, delivers visibility and runtime alerts, and provides recommendations to proactively improve security by hardening the environment. StackRox integrates with every stage of the container lifecycle: build, deploy, and runtime.

The StackRox Kubernetes Security Platform is built on the foundation of the product formerly known as Prevent, which itself was previously called Mitigate and, before that, Apollo. You may find references to these previous names in code or documentation.


Community

You can reach out to us through Slack (#stackrox). For alternative ways to get in touch, stop by our Community Hub at stackrox.io.

For event updates, blogs and other resources follow the StackRox community site at stackrox.io.

See the StackRox Code of Conduct.

See the project's guidelines for reporting a vulnerability or bug.


Deploying StackRox

Quick Installation using Helm

StackRox offers quick installation via Helm charts. Follow the Helm installation guide to get the helm CLI on your system, then run the Helm quick installation script below, or proceed to the Manual Installation using Helm section for configuration options.

Install StackRox via Helm Installation Script
/bin/bash <(curl -fsSL https://raw.githubusercontent.com/stackrox/stackrox/master/scripts/quick-helm-install.sh)

A default deployment of StackRox has certain CPU and memory requests and may fail on small (e.g. development) clusters if sufficient resources are not available. You can use the --small command-line option to install StackRox on smaller clusters with limited resources. Using this option is not recommended for production deployments.

/bin/bash <(curl -fsSL https://raw.githubusercontent.com/stackrox/stackrox/master/scripts/quick-helm-install.sh) --small

The script adds the StackRox helm repository, generates an admin password, installs stackrox-central-services, creates an init bundle for provisioning stackrox-secured-cluster-services, and finally installs stackrox-secured-cluster-services on the same cluster.

Finally, the script will automatically open the browser and log you into StackRox. A certificate warning may be displayed since the certificate is self-signed. See the Accessing the StackRox User Interface (UI) section to read more about the warnings. After authenticating you can access the dashboard using https://localhost:8000/main/dashboard.

Manual Installation using Helm

StackRox offers quick installation via Helm Charts. Follow the Helm Installation Guide to get the helm CLI on your system.

Deploying using Helm consists of four steps:

  1. Add the StackRox repository to Helm
  2. Launch StackRox Central Services using helm
  3. Create a cluster configuration and a service identity (init bundle)
  4. Deploy the StackRox Secured Cluster Services using that configuration and those credentials (this step can be done multiple times to add more clusters to the StackRox Central Service)
Install StackRox Central Services

Default Central Installation

First, the StackRox Central Services will be added to your Kubernetes cluster. This includes the UI and Scanner. To start, add the stackrox/helm-charts/opensource repository to Helm.

helm repo add stackrox https://raw.githubusercontent.com/stackrox/helm-charts/main/opensource/

To see all available Helm charts in the repo, run the following (add the --devel option to also show non-release builds):

helm search repo stackrox
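
For example, to also list non-release (development) builds of the charts, the --devel option mentioned above can be appended:

helm search repo stackrox --devel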

To install stackrox-central-services, you will need a secure password. This password will be needed later for UI login and when creating an init bundle.

STACKROX_ADMIN_PASSWORD="$(openssl rand -base64 20 | tr -d '/=+')"

From here, you can install stackrox-central-services to get Central and Scanner components deployed on your cluster. Note that you need only one deployed instance of stackrox-central-services even if you plan to secure multiple clusters.

helm upgrade --install -n stackrox --create-namespace stackrox-central-services \
  stackrox/stackrox-central-services \
  --set central.adminPassword.value="${STACKROX_ADMIN_PASSWORD}"
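
As a quick sanity check (plain kubectl, not part of the chart output), you can watch the Central and Scanner pods come up:

kubectl -n stackrox get pods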

Install Central in Clusters With Limited Resources

If you're deploying StackRox on nodes with limited resources such as a local development cluster, run the following command to reduce StackRox resource requirements. Keep in mind that these reduced resource settings are not suited for a production setup.

helm upgrade -n stackrox stackrox-central-services stackrox/stackrox-central-services \
  --set central.resources.requests.memory=1Gi \
  --set central.resources.requests.cpu=1 \
  --set central.resources.limits.memory=4Gi \
  --set central.resources.limits.cpu=1 \
  --set central.db.resources.requests.memory=1Gi \
  --set central.db.resources.requests.cpu=500m \
  --set central.db.resources.limits.memory=4Gi \
  --set central.db.resources.limits.cpu=1 \
  --set scanner.autoscaling.disable=true \
  --set scanner.replicas=1 \
  --set scanner.resources.requests.memory=500Mi \
  --set scanner.resources.requests.cpu=500m \
  --set scanner.resources.limits.memory=2500Mi \
  --set scanner.resources.limits.cpu=2000m
Install StackRox Secured Cluster Services

Default Secured Cluster Installation

Next, the secured cluster components will need to be deployed to collect information from the Kubernetes nodes.

Generate an init bundle containing initialization secrets. The init bundle will be saved in stackrox-init-bundle.yaml, and you will use it to provision secured clusters as shown below.

kubectl -n stackrox exec deploy/central -- roxctl --insecure-skip-tls-verify \
  --password "${STACKROX_ADMIN_PASSWORD}" \
  central init-bundles generate stackrox-init-bundle --output - > stackrox-init-bundle.yaml

Set a meaningful cluster name for your secured cluster in the CLUSTER_NAME shell variable. The cluster will be identified by this name in the clusters list of the StackRox UI.

CLUSTER_NAME="my-secured-cluster"

Then install stackrox-secured-cluster-services (with the init bundle you generated earlier) using this command:

helm upgrade --install --create-namespace -n stackrox stackrox-secured-cluster-services stackrox/stackrox-secured-cluster-services \
  -f stackrox-init-bundle.yaml \
  --set clusterName="$CLUSTER_NAME" \
  --set centralEndpoint="central.stackrox.svc:443"

When deploying stackrox-secured-cluster-services on a different cluster than the one where stackrox-central-services is deployed, you will also need to specify the endpoint (address and port number) of Central via the --set centralEndpoint=<endpoint_of_central_service> command-line argument.
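
For instance, a secured cluster pointing at a remotely exposed Central might be installed like this; the endpoint below is purely illustrative, so substitute the address and port where your Central is actually reachable:

helm upgrade --install --create-namespace -n stackrox stackrox-secured-cluster-services \
  stackrox/stackrox-secured-cluster-services \
  -f stackrox-init-bundle.yaml \
  --set clusterName="$CLUSTER_NAME" \
  --set centralEndpoint="central.example.com:443"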

Install Secured Cluster with Limited Resources

When deploying the StackRox Secured Cluster Services on small nodes, you can pass additional options to reduce the resource requirements of stackrox-secured-cluster-services. Keep in mind that these reduced resource settings are not recommended for a production setup.

helm install -n stackrox stackrox-secured-cluster-services stackrox/stackrox-secured-cluster-services \
  -f stackrox-init-bundle.yaml \
  --set clusterName="$CLUSTER_NAME" \
  --set centralEndpoint="central.stackrox.svc:443" \
  --set sensor.resources.requests.memory=500Mi \
  --set sensor.resources.requests.cpu=500m \
  --set sensor.resources.limits.memory=500Mi \
  --set sensor.resources.limits.cpu=500m
Additional information about Helm charts

To further customize your Helm installation, consult the documentation of the individual Helm charts.

Installation via Scripts

The deploy script will:

  1. Launch StackRox Central Services
  2. Create a cluster configuration and a service identity
  3. Deploy the StackRox Secured Cluster Services using that configuration and those credentials

You can set the environment variable MAIN_IMAGE_TAG in your shell to ensure that you get the version you want.
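
For example (the tag below is only illustrative; substitute whichever released version you want):

export MAIN_IMAGE_TAG=4.4.0   # illustrative tag; substitute the version you want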

If you check out a commit, the scripts will launch the image corresponding to that commit by default. The image will be pulled if needed.

Further steps are orchestrator specific.

Kubernetes Distributions (EKS, AKS, GKE)

Click to expand

Follow the guide below to quickly deploy a specific version of StackRox to your Kubernetes cluster in the stackrox namespace. If you want to install a specific version, make sure to define/set it in MAIN_IMAGE_TAG, otherwise it will install the latest nightly build.

Run the following in your working directory of choice:

git clone git@github.com:stackrox/stackrox.git
cd stackrox
MAIN_IMAGE_TAG=VERSION_TO_USE ./deploy/deploy.sh

After a few minutes, all resources should be deployed.

Credentials for the 'admin' user can be found in the ./deploy/k8s/central-deploy/password file.

Note: While the password file is stored in plaintext on your local filesystem, the Kubernetes Secret StackRox uses is encrypted, and you will not be able to alter the secret at runtime. If you lose the password, you will have to redeploy central.
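
For example, to print the generated password (path as given above):

cat ./deploy/k8s/central-deploy/password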

OpenShift

Click to Expand

Before deploying on OpenShift, ensure that you have the oc - OpenShift Command Line installed.

Follow the guide below to quickly deploy a specific version of StackRox to your OpenShift cluster in the stackrox namespace. Make sure to add the most recent tag to the MAIN_IMAGE_TAG variable.

Run the following in your working directory of choice:

git clone git@github.com:stackrox/stackrox.git
cd stackrox
MAIN_IMAGE_TAG=VERSION_TO_USE ./deploy/deploy.sh

After a few minutes, all resources should be deployed and the process will report completion.

Credentials for the 'admin' user can be found in the ./deploy/openshift/central-deploy/password file.

Note: While the password file is stored in plaintext on your local filesystem, the Kubernetes Secret StackRox uses is encrypted, and you will not be able to alter the secret at runtime. If you lose the password, you will have to redeploy central.

Docker Desktop, Colima, or minikube

Click to Expand

Run the following in your working directory of choice:

git clone git@github.com:stackrox/stackrox.git
cd stackrox
MAIN_IMAGE_TAG=latest ./deploy/deploy-local.sh

After a few minutes, all resources should be deployed.

Credentials for the 'admin' user can be found in the ./deploy/k8s/deploy-local/password file.

Accessing the StackRox User Interface (UI)

Click to expand

After the deployment has completed (Helm or script install), a port-forward should already exist so you can connect to https://localhost:8000/. If it does not, run the following:

kubectl port-forward -n 'stackrox' svc/central "8000:443"

Then go to https://localhost:8000/ in your web browser.

Username: The default user is admin.

Password (Helm): The password is in $STACKROX_ADMIN_PASSWORD after a manual installation, or is printed at the end of the quick installation script.

Password (Script): The password is located in the ./deploy/<orchestrator>/central-deploy/password.txt file for the script install.


Development

Quickstart

Build Tooling

The following tools are necessary to test code and build image(s):

Click to expand
  • Make
  • Go
  • Various Go linters that can be installed using make reinstall-dev-tools.
  • UI build tooling as specified in ui/README.md.
  • Docker
    • Note: Docker Desktop now requires a paid subscription for larger, enterprise companies.
    • Some StackRox devs recommend Colima
  • Xcode command line tools (macOS only)
  • Bats is used to run certain shell tests. You can obtain it with brew install bats or npm install -g bats.
  • oc OpenShift CLI tool
  • shellcheck for shell script linting.

Xcode - macOS Only

Usually these are already installed via brew. However, if you get an error when building the golang x/tools, first make sure the Xcode EULA has been accepted by:

  1. starting Xcode
  2. building a new blank app project
  3. starting the blank project app in the emulator
  4. closing both the emulator and Xcode, then
  5. running the following commands:
xcode-select --install
sudo xcode-select --switch /Library/Developer/CommandLineTools # Enable command line tools
sudo xcode-select -s /Applications/Xcode.app/Contents/Developer

For more info, see nodejs/node-gyp#569

Clone StackRox

Click to expand
# Create a GOPATH: this is the location of your Go "workspace".
# (Note that it is not – and must not – be the same as the path Go is installed to.)
# The default is to have it in ~/go/, or ~/development, but anything you prefer goes.
# Whatever you decide, create the directory, set GOPATH, and update PATH:
export GOPATH=$HOME/go # Change this if you choose to use a different workspace.
export PATH=$PATH:$GOPATH/bin
# You probably want to permanently set these by adding the following commands to your shell
# configuration (e.g. ~/.bash_profile)

cd $GOPATH
mkdir -p bin pkg
mkdir -p src/github.com/stackrox
cd src/github.com/stackrox
git clone git@github.com:stackrox/stackrox.git

Local Development

Click to expand

To sweeten your experience, install the workflow scripts beforehand.

$ cd $GOPATH/src/github.com/stackrox/stackrox
$ make install-dev-tools
$ make image

Now, you need to bring up a Kubernetes cluster yourself before proceeding. Development can happen either in GCP or locally with Docker Desktop, Colima, or minikube. Note that Docker Desktop and Colima are better suited for macOS development, because the cluster will have access to images built locally with make image without additional configuration. Also, Collector has better support for these than for minikube, where drivers may not be available.
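
Before deploying, it can help to confirm which cluster your kubectl context currently points at using plain kubectl; the docker-desktop context name below is the Docker Desktop default and may differ in your setup:

# List available contexts and switch to the one backing your local cluster.
$ kubectl config get-contexts
$ kubectl config use-context docker-desktop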

# To keep the StackRox Central's Postgres DB state between database upgrades and restarts, set:
$ export STORAGE=pvc

# To save time on rebuilds by skipping UI builds, set:
$ export SKIP_UI_BUILD=1

# To save time on rebuilds by skipping CLI builds, set:
$ export SKIP_CLI_BUILD=1

# When you deploy locally make sure your kube context points to the desired kubernetes cluster,
# for example Docker Desktop.
# To check the current context you can call a workflow script:
$ roxkubectx

# To deploy locally, call:
$ ./deploy/deploy-local.sh

# Now you can access StackRox dashboard at https://localhost:8000
# or simply call another workflow script:
$ logmein

See Installation via Scripts for further reading. To read more about the environment variables, consult deploy/README.md.

Common Makefile Targets

Click to expand
# Build the image; this will create `stackrox/main` with a tag defined by `make tag`.
$ make image

# Compile all binaries
$ make main-build-dockerized

# Displays the docker image tag which would be generated
$ make tag

# Note: there are integration tests in some components, and we currently
# run those manually. They will be re-enabled at some point.
$ make test

# Apply and check style standards in Go and JavaScript
$ make style

# enable pre-commit hooks for style checks
$ make init-githooks

# Compile and restart only central
$ make fast-central

# Compile only sensor
$ make fast-sensor

# Only compile protobuf
$ make proto-generated-srcs

Productivity

Click to expand

The workflow repository contains some helper scripts which support our development workflow. Explore more commands with roxhelp --list-all.

# Change directory to rox root
$ cdrox

# Handy curl shortcut for your StackRox central instance
# Uses https://localhost:8000 by default or ROX_BASE_URL env variable
# Also uses the admin credentials from your last deployment via deploy.sh
$ roxcurl /v1/metadata

# Run quickstyle checks, faster than stackrox's "make style"
$ quickstyle

# The workflow repository includes some tools for supporting
# working with multiple inter-dependent branches.
# Examples:
$ smart-branch <branch-name>    # create new branch
    ... work on branch...
$ smart-rebase                  # rebase from parent branch
    ... continue working on branch...
$ smart-diff                    # check diff relative to parent branch
    ... git push, etc.

GoLand Configuration

Click to expand

If you're using GoLand for development, the following can help improve the experience.

Make sure the Protocol Buffers plugin is installed. The plugin comes installed by default in GoLand. If it isn't, use Help | Find Action..., type Plugins and hit enter, then switch to Marketplace, type its name and install the plugin.

By default, this plugin does not know where to look for .proto imports in GoLand, so you need to explicitly configure the paths for it. See https://github.com/jvolkman/intellij-protobuf-editor#path-settings.

  • Go to GoLand | Preferences | Languages & Frameworks | Protocol Buffers.
  • Uncheck Configure automatically.
  • Click the + button, then navigate to and select the ./proto directory in the root of the repo.
  • Optionally, also add the gogo protobuf module directories from your Go module cache under $HOME/go/pkg/mod/github.com/gogo/.
  • To verify: use menu Navigate | File... type any .proto file name, e.g. alert_service.proto, and check that all import strings are shown green, not red.

Running sql_integration tests

Click to expand

Go tests annotated with //go:build sql_integration require a PostgreSQL server listening on port 5432. Due to how authentication is set up in the code, the easiest approach is to start Postgres in a container like this:

$ docker run --rm --env POSTGRES_USER="$USER" --env POSTGRES_HOST_AUTH_METHOD=trust --publish 5432:5432 docker.io/library/postgres:13

With that running in the background, sql_integration tests can be triggered from IDE or command-line.
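
For example, from the command line the build tag can be passed explicitly; the package path below is only an illustration, so narrow it to the packages you care about:

$ go test -tags sql_integration ./central/...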

Debugging

Click to expand

Kubernetes debugger setup

With GoLand, you can naturally use breakpoints and the debugger when running unit tests in the IDE.

If you would like to debug a local or even a remote deployment, follow the procedure below.

  1. Create a debug build locally by exporting DEBUG_BUILD=yes:
    $ DEBUG_BUILD=yes make image
    Alternatively, a debug build will also be created when the branch name contains the -debug substring. This works locally with make image and in CI.
  2. Deploy the image using the instructions from this README. This works with both deploy-local.sh and deploy.sh.
  3. Start the debugger (and port forwarding) in the target pod using the roxdebug command from the workflow repo.
    # For Central
    $ roxdebug
    # For Sensor
    $ roxdebug deploy/sensor
    # See usage help
    $ roxdebug --help
  4. Configure GoLand for remote debugging (should be done only once):
    1. Open Run | Edit Configurations …, click on the + icon to add new configuration, choose Go Remote template.
    2. Choose Host: localhost and Port: 40000. Give this configuration a name.
    3. Select On disconnect: Leave it running (this prevents GoLand from forgetting breakpoints on reconnect).
  5. Attach GoLand to the debugging port: select Run | Debug… and choose the configuration you've created. If everything is set up correctly, you should see a Connected message in the Debug | Debugger | Variables window at the lower part of the screen.
  6. Set some code breakpoints, trigger the corresponding actions, and happy debugging!

See Debugging go code running in Kubernetes for more info.

Generating Portable Installers

Kubernetes
docker run -i --rm quay.io/stackrox-io/main:<tag> central generate interactive > k8s.zip

This will run you through an installer and generate a k8s.zip file.

unzip k8s.zip -d k8s
bash k8s/central.sh

Now Central has been deployed. Use the UI to deploy Sensor.
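
If no port-forward is running yet, one way to reach the UI of the freshly deployed Central (assuming the default stackrox namespace) is the same port-forward used elsewhere in this README:

kubectl port-forward -n stackrox svc/central 8000:443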

OpenShift

Note: If using a host mount, you need to allow the container to access it by using sudo chcon -Rt svirt_sandbox_file_t <full volume path>

Take the image-setup.sh script from this repo and run it to pull and push the images to the local OpenShift registry. This is a prerequisite for every new cluster.

bash image-setup.sh
docker run -i --rm quay.io/stackrox-io/main:<tag> central generate interactive > openshift.zip

This will run you through an installer and generate an openshift.zip file.

unzip openshift.zip -d openshift
bash openshift/central.sh

Dependencies and Recommendations for Running StackRox

Click to Expand

The following information has been gathered to help with the installation and operation of the open source StackRox project. These recommendations were developed for the Red Hat Advanced Cluster Security for Kubernetes product and have not been tested with the upstream StackRox project.

Recommended Kubernetes Distributions

The Kubernetes Platforms that StackRox has been deployed onto with minimal issues are listed below.

  • Red Hat OpenShift Dedicated (OSD)
  • Azure Red Hat OpenShift (ARO)
  • Red Hat OpenShift Service on AWS (ROSA)
  • Amazon Elastic Kubernetes Service (EKS)
  • Google Kubernetes Engine (GKE)
  • Microsoft Azure Kubernetes Service (AKS)

If you deploy into a Kubernetes distribution other than the ones listed above you may encounter issues.

Recommended Operating Systems

StackRox is known to work on the recent versions of the following operating systems.

  • Ubuntu
  • Debian
  • Red Hat Enterprise Linux (RHEL)
  • CentOS
  • Fedora CoreOS
  • Flatcar Container Linux
  • Google COS
  • Amazon Linux
  • Garden Linux

Recommended Web Browsers

The following browsers can be used to view the StackRox web user interface.

  • Google Chrome 88.0 (64-bit)
  • Microsoft Edge
    • Version 44 and later (Windows)
    • Version 81 (Official build) (64-bit)
  • Safari on MacOS (Mojave) - Version 14.0
  • Mozilla Firefox Version 82.0.2 (64-bit)


stackrox's Issues

Duplicate Deployments in Dropdown in Policy Exclusion Scope UI

https://cloud-native.slack.com/archives/C01TDE3GK0E/p1660665385988579

When adding a policy exclusion scope, the dropdown for Deployment is unordered and has duplicates; additionally, there's no way to type to search the list or to enter a deployment that doesn't currently exist (might be ephemeral or an expected deployment you want to apply policy to before it's deployed).

It appears that this is taking the entire output of the DeploymentsService API endpoint (/v1/deployments) and populating the dropdown with that list. Recommendation to improve this experience:

  1. Apply UNIQUE to the array of deployments. (Matching in the policy is done by policy name and not by something like GUID [as done for clusters] so this should have no practical impact on functionality.)
  2. Sort in alphabetical order.
  3. If a cluster is selected in the scope, filter the list by deployments in that cluster rather than listing all deployments.
  4. If a namespace is selected, filter the list by the namespace selected.
  5. Use a field that allows for typing to search the list and/or enter a value that doesn't currently exist instead of the current drop-down style

Manage stackrox internal resources through operator

Today the operator solves the basic issue of setting up StackRox itself, which is great.
But, as I think most of us can agree, we need more :).

I would like to configure most resources that exist inside StackRox through CRDs.
This will enable me to configure things like access to container registries using GitOps instead of manually having to go in and click in the UI.

This will also help a lot when having big environments with multiple clusters.

I know this is a big feature request, and it's probably better to split it up into smaller issues, but at least I wanted to start the discussion about this functionality.

`roxctl central generate interactive` falsely expects a registry

Following the defined process for a manual central install will result in a setup.sh script that prompts for docker credentials even though opensource images are used.
Ideally, roxctl should not prompt for registry credentials when it is used to generate an installer for the opensource flavor.
A good starting point for the investigation could be https://github.com/stackrox/stackrox/blob/master/roxctl/central/generate/interactive.go

$ roxctl central generate interactive
Enter path to the backup bundle from which to restore keys and certificates (optional):
Enter PEM cert bundle file (optional):
Enter Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: "true"):
Enter administrator password (default: autogenerated):
Enter orchestrator (k8s, openshift): k8s
Enter the directory to output the deployment bundle to (default: "central-bundle"):
Enter default container images settings (stackrox.io, rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: "rhacs"): opensource
Enter the method of exposing Central (lb, np, none) (default: "none"):
Enter main image to use(if unset, a default will be used according to --image-defaults) (default: "quay.io/stackrox-io/main:3.71.0"):
Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"):
Enter whether to enable telemetry (default: "true"):
Enter the deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"):
Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional):
Enter scanner-db image to use(if unset, a default will be used according to --image-defaults) (default: "quay.io/stackrox-io/scanner-db:3.71.0"):
Enter scanner image to use(if unset, a default will be used according to --image-defaults) (default: "quay.io/stackrox-io/scanner:3.71.0"):
Enter Central volume type (hostpath, pvc): pvc
Enter external volume name (default: "stackrox-db"):
Enter external volume size in Gi (default: "100"):
Enter storage class name (optional if you have a default StorageClass configured):
INFO:   Generating deployment bundle...
INFO:   Deployment bundle includes PodSecurityPolicies (PSPs). This is incompatible with Kubernetes >= v1.25.
INFO:   Use --enable-pod-security-policies=false to disable PodSecurityPolicies.
INFO:   For the time being PodSecurityPolicies remain enabled by default in deployment bundles and need to be disabled explicitly for Kubernetes >= v1.25.
INFO:   Unless run in offline mode,
 StackRox Kubernetes Security Platform collects and transmits aggregated usage and system health information.
  If you want to OPT OUT from this, re-generate the deployment bundle with the '--enable-telemetry=false' flag
INFO:   Done!
INFO:   Wrote central bundle to "central-bundle"
To deploy:
  - If you need to add additional trusted CAs, run central/scripts/ca-setup.sh.
  - Deploy Central
    - Run central/scripts/setup.sh
    - Run kubectl create -R -f central

  - Deploy Scanner
     If you want to run the StackRox Scanner:
     - Run scanner/scripts/setup.sh
     - Run kubectl create -R -f scanner

PLEASE NOTE: The recommended way to deploy StackRox is by using Helm. If you have
Helm 3.1+ installed, please consider choosing this deployment route instead. For your
convenience, all required files have been written to the helm/ subdirectory, along with
a README file detailing the Helm-based deployment process.

For administrator login, select the "Login with username/password" option on
the login page, and log in with username "admin" and the password found in the
"password" file located in the same directory as this README.

This is tracked internally as ROX-12328

Installation in arbitrary namespace

I am currently facing an issue installing the stack in a namespace other than "stackrox". Due to policy enforcement, we need to create namespaces with a certain prefix, "mcs-", and therefore need to create Central in such a namespace as well.

ROX_NAMESPACE=mcs-stackrox ROX_CENTRAL_ENDPOINT="central.mcs-stackrox.svc:443" ROX_ADVERTISED_ENDPOINT="sensor.mcs-stackrox.svc:443" ROX_SENSOR_ENDPOINT="sensor.mcs-stackrox.svc:443" ROX_SCANNER_GRPC_ENDPOINT="scanner.mcs-stackrox.svc:8443" ./roxctl central generate interactive

Enter path to the backup bundle from which to restore keys and certificates (optional):
Enter PEM cert bundle file (optional): 
Enter administrator password (default: autogenerated):
Enter orchestrator (k8s, openshift): openshift
Enter the directory to output the deployment bundle to (default: "central-bundle"):
Enter the OpenShift major version (3 or 4) to deploy on (default: "0"): 4
Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional):
Enter the method of exposing Central (route, lb, np, none) (default: "none"): route 
Enter main image to use (default: "stackrox.io/main:3.0.61.1"):
Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"):
Enter whether to enable telemetry (default: "true"):
Enter the deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"):
Enter Scanner DB image to use (default: "stackrox.io/scanner-db:2.15.2"):
Enter Scanner image to use (default: "stackrox.io/scanner:2.15.2"):
Enter Central volume type (hostpath, pvc): pvc 
Enter external volume name (default: "stackrox-db"):
Enter external volume size in Gi (default: "100"):
Enter storage class name (optional if you have a default StorageClass configured):

However, the manifests all have "stackrox" in the metadata.namespace field:

balpert@omega:~/rox-debug$ grep -r "namespace: " central-bundle/central/
central-bundle/central/01-central-10-networkpolicy.yaml:  namespace: stackrox
central-bundle/central/01-central-10-networkpolicy.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-13-service.yaml:  namespace: stackrox
central-bundle/central/01-central-13-service.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-14-exposure.yaml:  namespace: stackrox
central-bundle/central/01-central-14-exposure.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-14-exposure.yaml:  namespace: stackrox
central-bundle/central/01-central-14-exposure.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-02-security.yaml:  namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    meta.helm.sh/release-namespace: stackrox
....

With the deployment advertised in the README (oc create -R -f central), all resources are located in the wrong namespace. Also, when inspecting the certificates for the service, the SANs show stackrox again:

openssl x509 -in cert.pem -text -noout
...
            X509v3 Subject Alternative Name: 
                DNS:central.stackrox, DNS:central.stackrox.svc
...

Is there a way to deploy the stackrox central into an arbitrary namespace?

Question about passwords hashing into RocksDB

Hello !

I would like to know which algorithm is used to store user passwords (e.g. admin) in RocksDB.

It seems you need to use bcrypt when using the current Helm chart to force the admin password, but the stored password could use another hashing algorithm.

Thank you in advance

~question

Enable gosec rules

Currently we have only one gosec rule enabled in the golangci-lint config.

stackrox/.golangci.yml

Lines 51 to 53 in 334e6b7

gosec:
includes:
- G601

Ideally we should enable all of them. Every PR should fix one rule. There is a chance that some rules are already fixed and we only need to enable them. After including a new rule, please ensure make golangci-lint passes; if there are errors, please fix them.

  • G101: Look for hard coded credentials #3566
  • G102: Bind to all interfaces #3567
  • G103: Audit the use of unsafe block #3568
  • G104: Audit errors not checked #3936
  • G106: Audit the use of ssh.InsecureIgnoreHostKey #3677
  • G107: Url provided to HTTP request as taint input
  • G108: Profiling endpoint automatically exposed on /debug/pprof #3677
  • G109: Potential Integer overflow made by strconv.Atoi result conversion to int16/32 #3677
  • G110: Potential DoS vulnerability via decompression bomb
  • G111: Potential directory traversal #3629
  • G112: Potential slowloris attack
  • G113: Usage of Rat.SetString in math/big with an overflow (CVE-2022-23772) #3631
  • G114: Use of net/http serve function that has no support for setting timeouts
  • G201: SQL query construction using format string #3677
  • G202: SQL query construction using string concatenation #3677
  • G203: Use of unescaped data in HTML templates #3677
  • G204: Audit use of command execution
  • G301: Poor file permissions used when creating a directory
  • G302: Poor file permissions used with chmod
  • G303: Creating tempfile using a predictable path #3560
  • G304: File path provided as taint input
  • G305: File traversal when extracting zip/tar archive
  • G306: Poor file permissions used when writing to a new file
  • G307: Deferring a method which returns an error #3677
  • G401: Detect the usage of DES, RC4, MD5 or SHA1
  • G402: Look for bad TLS connection settings
  • G403: Ensure minimum RSA key length of 2048 bits #3677
  • G404: Insecure random number source (rand)
  • G501: Import blocklist: crypto/md5
  • G502: Import blocklist: crypto/des #3677
  • G503: Import blocklist: crypto/rc4 #3677
  • G504: Import blocklist: net/http/cgi #3677
  • G505: Import blocklist: crypto/sha1
  • G601: Implicit memory aliasing of items from a range statement

How does StackRox detect CVE-2020-8561?

Could you please tell me how, technically, StackRox looks for the vulnerability CVE-2020-8561?
vulnerability link https://groups.google.com/g/kubernetes-security-announce/c/RV2IhwcrQsY

The fact is that in our cloud provider, Yandex Cloud, it is technically impossible to perform the redirect described in the vulnerability, yet StackRox shows the vulnerability as valid.
So I want to understand how, technically, you are looking for it. Where can I see it in the code?

Sensor pod crashed

Hello,
I have been using StackRox in k8s for approximately 1-2 weeks. Today the Sensor pod crashed (CrashLoopBackOff). The logs give me the following. Please help.

No certificates found in /usr/local/share/ca-certificates
No certificates found in /etc/pki/injected-ca-trust
main: 2022/05/10 11:38:35.261465 main.go:28: Info: Running StackRox Version: 3.69.x-569-g769804636a
kubernetes/sensor: 2022/05/10 11:38:35.265931 sensor.go:73: Info: Loaded Helm cluster configuration with fingerprint "fb12e0e60b6db042d0d52966999774256e7eb3c88aea8bc1af00694346927bae"
kubernetes/sensor: 2022/05/10 11:38:35.296520 sensor.go:91: Info: Determined deployment identification: {
"systemNamespaceId": "3582c397-0ae3-11ea-8402-067ff35c4130",
"defaultNamespaceId": "36b49dc0-0ae3-11ea-8402-067ff35c4130",
"appNamespace": "stackrox",
"appNamespaceId": "21cc6be0-3b8f-45d7-a6d8-a40360aa0db8",
"appServiceaccountId": "fb6f0e86-3634-48f4-9437-f2e8a864496b",

Some question about the product

Hello everyone,

My team is using this product on an OpenShift infrastructure and it works great!

But I have some questions:

  • Which components or functions create temp files, and what is the average size of the created files (is there also an auto-clean function for these files)?
  • Is there a way to get an access log (like the Apache access log)?
  • (A more Red Hat-specific question) Is there a way to monitor new releases of the product's images (on Red Hat registries), e.g. via an RSS feed or mail notifications?

Thank you,

~question

Vulnerability scan doesn't include dependency libs (OpenSSL)

I was playing around with multiple vulnerability scanning tools in Kubernetes. While doing this, I noticed that StackRox is not flagging dependency libraries from (OS?) packages.

Example:
We've a pod running based on Alpine alpine:v3.14. During installation the following command has been executed:
apk add --update --no-cache openssl

This will install OpenSSL version:
OpenSSL 1.1.1n 15 Mar 2022 (Library: OpenSSL 1.1.1l 24 Aug 2021)

So the OpenSSL CLI package contains the latest patched version, but the dependency libraries (libcrypto.so and libssl.so) are one minor version behind.

It looks like StackRox is only looking at the main library and not at the dependencies when determining whether there are active vulnerabilities, whereas Sysdig, for example, does find vulnerabilities for both OpenSSL 1.1.1n and 1.1.1l.

CVE-2022-0778 is available in 1.1.1l, Stackrox doesn't flag this one, Sysdig does.

More information about (OpenSSL) main vs. library versions and why they are not always in line:

Here is some proof that libssl is actually on a previous version:
apk list | grep libssl libssl1.1-1.1.1l-r0 x86_64 {openssl} (OpenSSL) [installed] libssl1.1-1.1.1n-r0 x86_64 {openssl} (OpenSSL) [upgradable from: libssl1.1-1.1.1l-r0]

Scanner integration trivy

Today there is support for a number of registries and scanners.
Personally, I use Trivy to scan my images in my CI/CD environment, and to show my developers a consistent set of vulnerabilities I would like to use Trivy as an image scanner here as well.

Trivy can be run in client/server mode (https://www.youtube.com/watch?v=tNQ-VlahtYM), and with a simple API request you can get information about the CVEs in your container.

Multi-Arch Image Support

First of all, congratulations on open-sourcing the project. I'm deploying Stackrox at home to play around and get used to the interface, in addition to increasing the security posture of my home-based k3s cluster. Upon installation of the platform, it would appear that there is no way for the collector daemonset to run on my arm64 nodes. From a cursory look on quay.io/stackrox-io it would appear that the images that are being built are not multi-arch images. I realize that other architectures aren't necessarily popular or provide major wins in terms of business value, however for some admins the only way to experience the software first hand is to install it on a handful of Raspberry Pis running in their basement :)

In lieu of the multi-arch images, I also looked at the helm chart for secured-cluster-services and noticed that there was no way to set a nodeSelector for the collector daemonset, so there's no real way for me to prevent the collector pods from going into ImagePullBackOff when they inevitably schedule on my arm64 nodes. (This is more of a workaround for this issue, but could very well be an issue in its own right for the helm chart repo.)

Is there any interest in supporting multi-arch images moving forward? I know shoehorning multi-arch into the build process isn't always the easiest ask in the world.

thanks!

Root web url

Is there any way to change the root URL prefix for StackRox to /stackrox or something like that?

[Collector] Segmentation Fault on all nodes in OpenShift 4.9.33

Hi,

Disclaimer: I have opened the same issue at stackrox/collector#838 because I am not sure in which repository this should be tracked, as here we have an area/collector label. Please close the one that is in the wrong location.

We are experiencing crashes in collector containers across all nodes in one of our OpenShift clusters.

Debug Log:

Collector Version: 3.9.0
OS: Red Hat Enterprise Linux CoreOS 49.84.202205050701-0 (Ootpa)
Kernel Version: 4.18.0-305.45.1.el8_4.x86_64
Starting StackRox Collector...
[I 20220926 112218 HostInfo.cpp:126] Hostname: '<redacted>'
[I 20220926 112218 CollectorConfig.cpp:119] User configured logLevel=debug
[I 20220926 112218 CollectorConfig.cpp:149] User configured collection-method=kernel_module
[I 20220926 112218 CollectorConfig.cpp:206] Afterglow is enabled
[D 20220926 112218 HostInfo.cpp:200] EFI directory exist, UEFI boot mode
[D 20220926 112218 HostInfo.h:100] identified kernel release: '4.18.0-305.45.1.el8_4.x86_64'
[D 20220926 112218 HostInfo.h:101] identified kernel version: '#1 SMP Wed Apr 6 13:48:37 EDT 2022'
[D 20220926 112218 HostInfo.cpp:297] SecureBoot status is 2
[D 20220926 112218 collector.cpp:254] Core dump not enabled
[I 20220926 112218 collector.cpp:302] Module version: 2.0.1
[I 20220926 112218 collector.cpp:329] Attempting to download kernel module - Candidate kernel versions:
[I 20220926 112218 collector.cpp:331] 4.18.0-305.45.1.el8_4.x86_64
[D 20220926 112218 GetKernelObject.cpp:148] Checking for existence of /kernel-modules/collector-4.18.0-305.45.1.el8_4.x86_64.ko.gz and /kernel-modules/collector-4.18.0-305.45.1.el8_4.x86_64.ko
[D 20220926 112218 GetKernelObject.cpp:151] Found existing compressed kernel object.
[I 20220926 112218 collector.cpp:262]
[I 20220926 112218 collector.cpp:263] This product uses kernel module and ebpf subcomponents licensed under the GNU
[I 20220926 112218 collector.cpp:264] GENERAL PURPOSE LICENSE Version 2 outlined in the /kernel-modules/LICENSE file.
[I 20220926 112218 collector.cpp:265] Source code for the kernel module and ebpf subcomponents is available upon
[I 20220926 112218 collector.cpp:266] request by contacting [email protected].
[I 20220926 112218 collector.cpp:267]
[I 20220926 112218 collector.cpp:162] Inserting kernel module /module/collector.ko with indefinite removal and retry if required.
[D 20220926 112218 collector.cpp:109] Kernel module arguments: s_syscallIds=26,27,56,57,246,247,248,249,94,95,14,15,156,157,216,217,222,223,4,5,22,23,12,13,154,155,172,173,214,215,230,231,282,283,288,289,292,293,96,97,182,183,218,219,224,225,16,186,234,194,195,192,193,200,201,198,199,36,37,18,19,184,185,220,221,226,227,-1 verbose=0 exclude_selfns=1 exclude_initns=1
[I 20220926 112218 collector.cpp:183] Done inserting kernel module /module/collector.ko.
[I 20220926 112218 collector.cpp:215] gRPC server=sensor.mcs-security.svc:443
[I 20220926 112218 CollectorService.cpp:50] Config: collection_method:kernel_module, useChiselCache:1, snapLen:0, scrape_interval:30, turn_off_scrape:0, hostname:<redacted>, logLevel:DEBUG
[I 20220926 112218 CollectorService.cpp:79] Network scrape interval set to 30 seconds
[I 20220926 112218 CollectorService.cpp:82] Waiting for GRPC server to become ready ...
[I 20220926 112218 CollectorService.cpp:87] GRPC server connectivity is successful
[D 20220926 112218 ConnTracker.cpp:314] ignored l4 protocol and port pairs
[D 20220926 112218 ConnTracker.cpp:316] udp/9
[I 20220926 112218 NetworkStatusNotifier.cpp:187] Started network status notifier.
[I 20220926 112218 NetworkStatusNotifier.cpp:203] Established network connection info stream.
[D 20220926 112218 SysdigService.cpp:262] Updating chisel and flushing chisel cache
[D 20220926 112218 SysdigService.cpp:263] New chisel:
args = {}
function on_event()
    return true
end
function on_init()
    filter = "not container.id = 'host'\n"
    chisel.set_filter(filter)
    return true
end

[I 20220926 112218 SignalServiceClient.cpp:43] Trying to establish GRPC stream for signals ...
[I 20220926 112218 SignalServiceClient.cpp:61] Successfully established GRPC stream for signals.
[D 20220926 112219 ConnScraper.cpp:406] Could not open process directory 1626873: No such file or directory
[D 20220926 112219 ConnScraper.cpp:406] Could not open process directory 1626877: No such file or directory
[W 20220926 112219 ProtoAllocator.h:41] Allocating a memory block on the heap for the arena, this is inefficient and usually avoidable
collector[0x44746d]
/lib64/libc.so.6(+0x4eb20)[0x7f8425ceeb20]
Caught signal 11 (SIGSEGV): Segmentation fault
/bootstrap.sh: line 94:    11 Segmentation fault      (core dumped) eval exec "$@"
Collector kernel module has already been loaded.
Removing so that collector can insert it at startup.

I am not sure how to debug this, as all DaemonSet containers experience this problem.

We are using StackRox 3.71.0. I have tried with collector images 3.9.0 and 3.11.0. Please reach out for any missing information.

Refresh SAML 2.0 metadata when dynamic configuration is chosen

Description

For the SAML 2.0 type of authentication provider, we allow users to choose either dynamic configuration or static configuration.

Static configuration requires the user to manually input the IdP Issuer, IdP SSO URL and IdP Certificate(s) (PEM). Dynamic configuration allows Central to automatically obtain this data from the IdP Metadata URL.

At the moment, when dynamic configuration is chosen, we only call the IdP Metadata URL once - at the creation of the authentication provider. This makes it necessary to re-create or manually update the SAML 2.0 auth provider if the IdP Issuer, IdP SSO URL or IdP Certificate(s) (PEM) change.

This issue suggests periodically calling the IdP Metadata URL to refresh the SAML 2.0 auth provider configuration. The call should be made at a reasonable interval so that, on the one hand, we don't congest the network, but, on the other hand, we refresh values often enough that users won't get failed login attempts. I suggest 5 minutes as the interval.

Code references

  1. Code calling IdP Metadata URL:

     func configureIDPFromMetadataURL(ctx context.Context, sp *saml2.SAMLServiceProvider, metadataURL string) error {
         entityID, descriptor, err := fetchIDPMetadata(ctx, metadataURL)
         if err != nil {
             return errors.Wrap(err, "fetching IdP metadata")
         }
         sp.IdentityProviderIssuer = entityID
         return configureIDPFromDescriptor(sp, descriptor)
     }

     func fetchIDPMetadata(ctx context.Context, url string) (string, *types.IDPSSODescriptor, error) {
         request, err := http.NewRequest(http.MethodGet, url, nil)
         if err != nil {
             return "", nil, errors.Wrap(err, "could not create HTTP request")
         }
         httpClient := http.DefaultClient
         if stringutils.ConsumeSuffix(&request.URL.Scheme, "+insecure") {
             httpClient = insecureHTTPClient
         }
         resp, err := httpClient.Do(request.WithContext(ctx))
         if err != nil {
             return "", nil, errors.Wrap(err, "fetching metadata")
         }
         defer func() {
             _ = resp.Body.Close()
         }()
         var descriptors entityDescriptors
         if err := xml.NewDecoder(resp.Body).Decode(&descriptors); err != nil {
             return "", nil, errors.Wrap(err, "parsing metadata XML")
         }
         if len(descriptors) != 1 {
             return "", nil, errors.Errorf("invalid number of entity descriptors in metadata response: expected exactly one, got %d", len(descriptors))
         }
         desc := descriptors[0]
         if desc.IDPSSODescriptor == nil {
             return "", nil, errors.New("metadata contains no IdP SSO descriptor")
         }
         if !desc.ValidUntil.IsZero() && !desc.ValidUntil.After(time.Now()) {
             return "", nil, fmt.Errorf("IdP metadata has expired at %v", desc.ValidUntil)
         }
         return desc.EntityID, desc.IDPSSODescriptor, nil
     }
  2. Backend for SAML 2.0 authentication provider https://github.com/stackrox/stackrox/blob/8ca46e7fe19afee5d07e76eef118b03a6329b0f1/pkg/auth/authproviders/saml/backend_impl.go
  3. Potential place to insert the refresh in code (before constructing the login URL):

     func (p *backendImpl) loginURL(clientState string) (string, error) {
         doc, err := p.sp.BuildAuthRequestDocument()
         if err != nil {
             return "", errors.Wrap(err, "could not construct auth request")
         }
         authURL, err := p.sp.BuildAuthURLRedirect(idputil.MakeState(p.id, clientState), doc)
         if err != nil {
             return "", errors.Wrap(err, "could not construct auth URL")
         }
         return authURL, nil
     }

Note: https://github.com/stackrox/stackrox/blob/8ca46e7fe19afee5d07e76eef118b03a6329b0f1/pkg/auth/authproviders/saml/backend_impl.go can be used concurrently. The refresh should occur only when no other users are trying to log in - this can be achieved by adding a lock to the backendImpl struct.

Nil pointer ref in nvdCvssv2ToProtoCvssv2

Hello! We were trying to get stackrox set up on OpenShift and ended up with crash loop backoff for the central pod with a nil pointer ref:

cve/fetcher: 2022/10/20 17:11:29.292137 manager_impl.go:55: Info: successfully copied preloaded CVE istio files to persistent volume: "/var/lib/stackrox/cve/istio"
cve/fetcher: 2022/10/20 17:11:29.292242 orchestrator.go:62: Info: Found 0 clusters to scan for orchestrator vulnerabilities.
cve/fetcher: 2022/10/20 17:11:29.293043 orchestrator.go:237: Info: Successfully fetched 0 Kubernetes CVEs
cve/fetcher: 2022/10/20 17:11:29.293278 orchestrator.go:237: Info: Successfully fetched 0 OpenShift CVEs
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x33d357d]
goroutine 140 [running]:
github.com/stackrox/rox/central/cve/converter/utils.nvdCvssv2ToProtoCvssv2(0x0)
github.com/stackrox/rox/central/cve/converter/utils/convert_utils.go:173 +0x1d
github.com/stackrox/rox/central/cve/converter/utils.NvdCVEToEmbeddedCVE(0xc00fcc4f40, 0x1)
github.com/stackrox/rox/central/cve/converter/utils/convert_utils.go:128 +0xd1
github.com/stackrox/rox/central/cve/converter/utils.NvdCVEsToEmbeddedCVEs({0xc00f5f4210, 0x16, 0x0?}, 0x15117?)
github.com/stackrox/rox/central/cve/converter/utils/convert_utils.go:219 +0x97
github.com/stackrox/rox/central/cve/fetcher.(*istioCVEManager).updateCVEs(0xc00090e300?, {0xc00f5f4210, 0x16, 0x16})
github.com/stackrox/rox/central/cve/fetcher/istio.go:71 +0x45
github.com/stackrox/rox/central/cve/fetcher.(*istioCVEManager).initialize(0xc002d7d030)
github.com/stackrox/rox/central/cve/fetcher/istio.go:46 +0xc5
github.com/stackrox/rox/central/cve/fetcher.(*orchestratorIstioCVEManagerImpl).initialize(0xc00f5ca8c0)
github.com/stackrox/rox/central/cve/fetcher/manager_impl.go:58 +0x2ba
github.com/stackrox/rox/central/cve/fetcher.NewOrchestratorIstioCVEManagerImpl({0x722b3e0?, 0xc0095e4000}, {0x0?, 0x0}, {0x7221d20?, 0xc002d7cf50}, {0x721c1c8?, 0xc00f5c5d00}, 0xc00f5db200)
github.com/stackrox/rox/central/cve/fetcher/manager.go:72 +0x372

There was likely something wrong with our config or setup, but we figured y'all would want to know about a panic. To me it looks like the problem is that the pointer to BaseMetricV2 passed into the method that panicked was nil. The problem may be here:

if nvdCVE.Impact != nil {
		cvssv2, err := nvdCvssv2ToProtoCvssv2(nvdCVE.Impact.BaseMetricV2)

This was on a new setup. Central was installed but Secure Cluster wasn't set up yet. Tagging in @xxlhacker because he did the setup and may know more than me!

Differences between Stackrox open source and the enterprise version

Hi!

I was trying to get more info about the differences between this project and the enterprise version.

How should the deployment be done for this project? stackrox.io/main:3.70.0 requires auth to the registry.

Moreover, are any of the features dropped in the open source version?

Is it on the roadmap to have public container images?

Many thanks!

Add linter check/support for SPDX headers

Goal: Have a linter check that fails if any given source file does not contain an SPDX header.

We should have SPDX headers in all of our source files in all repositories that we want to open source.
Example:

// Copyright Red Hat [or: Copyright StackRox Authors [or similar]]
// SPDX-License-Identifier: Apache-2.0

The task is to create a linter check that looks for these headers and fails if they are not present.
Our custom linters are called from the main function in tools/roxvet/roxvet.go and live in the tools/analyzers directory, which should have plenty of examples.

Internal link: https://issues.redhat.com/browse/ROX-9267

PVC waiting for first consumer to be created before binding

The stackrox/stackrox-central-services:70.1.0 Helm chart is creating a PVC with a helm.sh/hook: pre-install,pre-upgrade. On clusters that have a StorageClass configured with WaitForFirstConsumer as volume binding mode the hook will never finish because the PV that underpins the PVC will only be created once a Deployment tries to mount it.

One example where this is happening is when using the default StorageClass on an Azure AKS cluster.

My proposal to fix this would be to remove the helm hook annotation on the PVC. Let me know if that sounds OK and I'll prepare a PR to that effect.

Local Image Scan in Build Phase

Greetings!
I want to thank all the contributors to the project for the excellent work!
I have a question:
How can the image be scanned and/or checked for consistency locally in the pipeline during the build phase?

I would like to scan the newly created image in ci/cd pipeline without using registry publishing.

Are there plans to add this functionality avoiding the use of plugins for jenkins, etc.?

I would suggest this approach:
Create a "RoxScannerCLI" binary with the following functional commands:
RoxScannerCLI [command].

  1. Scan - scan local image by name/tag/assembly
  2. Check - check if the image corresponds to the compliance policies configured in StackRox.
  3. Daemon - continuous image scanner.
  4. Help - help:)

Flags:

  1. --StackRoxServer - StackRox server address, get policies from it, upload scan results there and agree with it to stop/continue pipelining.
  2. --ApiToken - well, it is clear here:)
  3. -no-verify - do not validate certificate

Why?
This method will make it possible to prevent vulnerable, non-compliant images from being published to the repository at a very early stage (the build stage). I think this fits perfectly into the shift-left paradigm.

Azure ACR integration node managed identity (NMI) support

Just like AWS has kube2iam and similar solutions (https://docs.openshift.com/acs/3.69/integration/integrate-with-image-registries.html#use-assumerole-with-ecr), Azure has AAD Pod Identity.

Instead of using a service account with an annotation as you do in AWS, you label your pod.
Here we can find an example implementation of how this was solved in aquasecurity/fanal (aquasecurity/fanal#371) and, indirectly, in Trivy.

You can find more information here:

https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-identity#use-pod-managed-identities

Add Rocky And Alma Linux Scanner Support

Hello!
I bet there are a few who'd like this: it'd be great to support Rocky and Alma Linux (or at least partially), specifically to have these image releases and their components identified by the scanner.

Auto-generated internal image registry on a cluster causes central to use the image registry service IP

Hi team,

we currently face an issue in our lab environment where we have

  • an OCP platform A hosting the central and scanner
  • an OCP platform B hosting a secured cluster with sensor, scanner (db), admission controller, collectors

When sensor from platform B starts sending information to platform A, it autogenerates several entries under "Platform Configuration" -> "Integrations" -> "Generic Docker Registry"


Now this causes the central to produce several error logs:

sensor/service/connection: 2022/07/26 19:08:45.687299 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: i/o timeout
sensor/service/connection: 2022/07/26 19:08:46.742447 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: connect: no route to host
sensor/service/connection: 2022/07/26 19:18:26.630443 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: i/o timeout
sensor/service/connection: 2022/07/26 19:18:27.734492 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: connect: no route to host
pkg/images/enricher: 2022/07/26 19:18:43.754329 enricher_impl.go:248: Info: Getting metadata for image image-registry.openshift-image-registry.svc:5000/mcs-lifecycle-check/openshift-hello@sha256:19b819016cd1726e8cf519e3b34069baf055ae815d8a4e5b91ab80090487b809
pkg/images/enricher: 2022/07/26 19:18:44.808807 enricher_impl.go:602: Error: Error fetching image signatures for image "image-registry.openshift-image-registry.svc:5000/mcs-lifecycle-check/openshift-hello@sha256:19b819016cd1726e8cf519e3b34069baf055ae815d8a4e5b91ab80090487b809": Get "https://image-registry.openshift-image-registry.svc:5000/v2/": http: non-successful response (status=401 body="{\"errors\":[{\"code\":\"UNAUTHORIZED\",\"message\":\"authentication required\",\"detail\":null}]}\n")

I have two theories and wanted to clarify them here.

  1. Central is actually trying to reach the image registry on platform B:

sensor/service/connection: 2022/07/26 19:18:27.734492 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: connect: no route to host

On Platform A:

balpert@omega:~/rox-debug/sensor$ oc status
In project default on server https://api.t001.otc.mcs-paas.dev:6443

svc/openshift - kubernetes.default.svc.cluster.local
svc/kubernetes - 172.30.0.1:443 -> 6443

balpert@omega:~/rox-debug/sensor$ oc get service -n openshift-image-registry image-registry
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
image-registry   ClusterIP   172.30.231.213   <none>        5000/TCP   2y263d

On platform B

balpert@omega:~/rox-debug/sensor$ oc status
In project default on server https://api.t007.otc.mcs-paas.dev:6443

svc/openshift - kubernetes.default.svc.cluster.local
svc/kubernetes - 172.30.0.1:443 -> 6443

balpert@omega:~/rox-debug/sensor$ oc get svc -n openshift-image-registry image-registry
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
image-registry   ClusterIP   172.30.161.169   <none>        5000/TCP   2y261d

But I have verified that I can reach the image-registry from an example pod on platform B in namespace stackrox:

balpert@omega:~/rox-debug/sensor$ oc rsh -n stackrox example
sh-4.4$ curl https://image-registry.openshift-image-registry.svc.cluster.local:5000 -kI
HTTP/2 200 
cache-control: no-cache
date: Tue, 26 Jul 2022 19:49:01 GMT
  2. Central is reaching out to the image registry on platform A instead.
    What also bothers me is the second part of the logs above:
pkg/images/enricher: 2022/07/26 19:18:44.808807 enricher_impl.go:602: Error: Error fetching image signatures for image "image-registry.openshift-image-registry.svc:5000/mcs-lifecycle-check/openshift-hello@sha256:19b819016cd1726e8cf519e3b34069baf055ae815d8a4e5b91ab80090487b809": Get "https://image-registry.openshift-image-registry.svc:5000/v2/": http: non-successful response (status=401 body="{\"errors\":[{\"code\":\"UNAUTHORIZED\",\"message\":\"authentication required\",\"detail\":null}]}\n")

This makes me believe that Central is trying to resolve image-registry.openshift-image-registry.svc but receives the service IP of platform A. I don't see any evidence for this apart from the fact that there is no auto-generated image registry integration for platform A (or it is not shown under "Integrations" -> "Docker Registry").

Hopefully someone can clarify how I am supposed to set up the secured cluster so that the image registry on platform B is actually scanned.

Best regards

eBPF Probe error on Digital Ocean Kubernetes cluster

Hello, I have installed StackRox on a DO Kubernetes cluster. The collector pods are bouncing between Running and CrashLoopBackOff due to the error below.

[I 20220826 205933 CollectorConfig.cpp:149] User configured collection-method=ebpf
[I 20220826 205933 CollectorConfig.cpp:206] Afterglow is enabled
[I 20220826 205933 collector.cpp:302] Module version: 2.0.1
[I 20220826 205934 collector.cpp:329] Attempting to download eBPF probe - Candidate kernel versions:
[I 20220826 205934 collector.cpp:331] 5.10.0-0.bpo.15-amd64
[I 20220826 205934 GetKernelObject.cpp:180] Local storage does not contain collector-ebpf-5.10.0-0.bpo.15-amd64.o
[I 20220826 205934 FileDownloader.cpp:316] Fail to download /module/collector-ebpf.o.gz - Failed writing body (0 != 10)
[I 20220826 205934 FileDownloader.cpp:318] HTTP Request failed with error code '404' - HTTP Body Response: not found

[I 20220826 205935 FileDownloader.cpp:316] Fail to download /module/collector-ebpf.o.gz - Failed writing body (0 != 10)
[I 20220826 205935 FileDownloader.cpp:318] HTTP Request failed with error code '404' - HTTP Body Response: not found

..........
[W 20220826 210003 FileDownloader.cpp:332] Failed to download /module/collector-ebpf.o.gz
[W 20220826 210003 GetKernelObject.cpp:183] Unable to download kernel object collector-ebpf-5.10.0-0.bpo.15-amd64.o to /module/collector-ebpf.o.gz
[W 20220826 210003 collector.cpp:343] Error getting kernel object: collector-ebpf-5.10.0-0.bpo.15-amd64.o
[I 20220826 210003 collector.cpp:215] gRPC server=sensor.stackrox.svc:443
[I 20220826 210003 collector.cpp:357] Attempting to connect to GRPC server
[E 20220826 210003 collector.cpp:359] Unable to connect to the GRPC server.
[F 20220826 210003 collector.cpp:368] No suitable kernel object downloaded

How can I troubleshoot?
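
As a first check (a sketch, not an official troubleshooting procedure), it may help to confirm which kernel the DO nodes actually report, since the probe in the log above is selected by kernel version:

# List node names with their kernel versions; 5.10.0-0.bpo.15-amd64 should appear here.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kernelVersion}{"\n"}{end}'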

Policy image exclusion

When creating a policy in ACS (v3.71.0), is it possible to exclude images by registry or repo?

It would appear (from the "policy scope" page) that I can only do this by selecting (one or many) individual images, which then reference specific tags.

(screenshot of the policy scope page)

As shown here, I think I'm limited to the specific image versions in the list:

(screenshot of the image selection list)

When I select the images, I cannot remove the tags to make the exclusion more generic, i.e. scoped to an image repository or an entire registry.

For example, perhaps I want a policy like "don't allow root user" to apply to everything except my image examplereg.com/rootimage, and I don't want this limited to just the current version, because future versions will need the same exemption.

Or, I might want a policy based on a specific CVSS rating threshold to apply to all images in my dev registry devexample.com/*, but I want a policy with a different CVSS rating threshold applied to my Pre-Production registry preprodexample.com/*.

Have I missed where I can do this, or is this functionality missing from ACS/Stackrox?
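
For illustration, this is roughly what I would hope to be able to express in a policy's exclusions (a sketch only; whether a tag-less or wildcard image name is actually honoured is exactly my question):

{
  "name": "Exempt rootimage regardless of tag",
  "deployment": null,
  "image": {
    "name": "examplereg.com/rootimage"
  },
  "expiration": null
}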

roxctl generate netpol adds `status: {}` metadata, breaking the kubectl apply command

The generated network policy YAML files contain a status: {} field at the bottom.

When using the kubectl apply -f command, it produces an error: error validating "FILE": error validating data: ValidationError(NetworkPolicy): unknown field "status" in io.k8s.api.networking.v1.NetworkPolicy; if you choose to ignore these errors, turn validation off with --validate=false

I can either delete the field or use --validate=false to work around it, but it is a minor annoyance.

kubectl version:

WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-21T14:25:45Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5+3afdacb", GitCommit:"3c28e7a79b58e78b4c1dc1ab7e5f6c6c2d3aedd3", GitTreeState:"clean", BuildDate:"2022-05-10T16:30:48Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
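
Until this is fixed, one possible workaround is to strip the field before applying (assumes mikefarah/yq v4 is available; the file name is an example):

# drop the empty status field in place, then apply as usual
yq -i 'del(.status)' backend-netpol.yaml
kubectl apply -f backend-netpol.yaml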

A better way to install stackrox operator without OLM

Today StackRox has a great way of installing the StackRox operator through OLM.
But when trying to install the operator on plain Kubernetes it's not as easy.

As you mentioned in the first community meeting, everyone knows how to use Helm charts, so a Helm chart would probably be a good way to solve this problem.

The downside, of course, is that we would have to maintain yet another Helm chart.

Stackrox-chart is already taken by redhat-cop and is a helm chart to install the operator through OLM in a gitops scenario:
https://artifacthub.io/packages/helm/redhat-cop/stackrox-chart
https://github.com/redhat-cop/helm-charts/tree/master/charts/stackrox

But a good name could be as simple as stackrox-operator.

Why?
To grow the community outside of OpenShift, we need to provide Kubernetes users with a good way of installing the operator.
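
For illustration, installation with such a chart could look like this (the stackrox-operator chart does not exist today; the repo URL is the existing StackRox chart repository):

helm repo add stackrox https://raw.githubusercontent.com/stackrox/helm-charts/main/opensource/
helm install stackrox-operator stackrox/stackrox-operator \
  --namespace stackrox-operator --create-namespace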

Condition to evaluate central deployments' readiness

Currently, StackRox has a couple of possible ConditionReasons we can use to check whether Central is healthy. These are defined in:

const (
    ConditionInitialized      ConditionType = "Initialized"
    ConditionDeployed         ConditionType = "Deployed"
    ConditionReleaseFailed    ConditionType = "ReleaseFailed"
    ConditionIrreconcilable   ConditionType = "Irreconcilable"
    StatusTrue                ConditionStatus = "True"
    StatusFalse               ConditionStatus = "False"
    StatusUnknown             ConditionStatus = "Unknown"
    ReasonInstallSuccessful   ConditionReason = "InstallSuccessful"
    ReasonUpgradeSuccessful   ConditionReason = "UpgradeSuccessful"
    ReasonUninstallSuccessful ConditionReason = "UninstallSuccessful"
    ReasonInstallError        ConditionReason = "InstallError"
    ReasonUpgradeError        ConditionReason = "UpgradeError"
    ReasonReconcileError      ConditionReason = "ReconcileError"
    ReasonUninstallError      ConditionReason = "UninstallError"
)

We use GitOps through ArgoCD and implemented this custom health check for the Central CRD:
(screenshot of the custom health check)

The problem is that the InstallSuccessful ConditionReason is set before the deployments managed by Central are healthy: (screenshot)

Ideally we'd want the Central resource to only show as healthy after its deployments (and the other resources it manages) are healthy as well. I would propose to complement the existing conditions with some sort of "Ready" condition/reason that is only set once all the resources managed by Central are healthy.
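
A minimal sketch of the kind of addition I have in mind, extending the constants above (the names are suggestions, not existing API):

const (
    // ConditionCentralReady would only be reported as True once every deployment
    // owned by the Central CR has the expected number of ready replicas.
    ConditionCentralReady ConditionType = "Ready"

    // ReasonAllResourcesReady would accompany the condition above.
    ReasonAllResourcesReady ConditionReason = "AllResourcesReady"
)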

sensor - detector.go:607: Error: Error looking up destination entity details while running network flow policy

Hey all - I have a simple deployment of OpenShift with StackRox and I see this in the Sensor logs:

common/detector: 2022/09/23 19:22:25.767675 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767722 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767578 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767678 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767719 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767741 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767746 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767752 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767769 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767812 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767975 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy

It doesn't seem to impact the overall use of the Sensor, but I am curious whether it means some metrics are being missed?

clair endpoint for integration

Which Clair endpoint is supposed to be used for the integration with StackRox?

I have a Clair instance running, but using a bare hostname or https://hostname in the Endpoint configuration just gives a 404.

Feature Request: Allow exemptions to Exempt by Username/Groups

Currently you can only exempt particular usernames/groups from a policy by modifying its criteria (and generally duplicating the policy) (source). This is non-ideal for two reasons. First, it's a bit of an anti-pattern to encode exemptions as different criteria. Second, with some policies being uneditable, you'd have to duplicate them, which could lead to undesired drift in the detection logic.

Ideally the exclusions logic would support something that looks like this:

    {
      "name": "Don't Alert for blah blah blah",
      "deployment": {
        "name": "",
        "scope": {
          "cluster": "",
          "namespace": "action_expected",
          "label": null
        },
        "actor": {
           "username" : "some username or username regex",
           "groups" : "some group or group regex"
        }
      },
      "image": null,
      "expiration": null
    },

Provide a way to easily build collector drivers

Currently, the collector component of stackrox uses a kernel module or eBPF probe (referred to as drivers from this point on), in order to gather information on running processes, network connections, etc.

Because these drivers need to be built for a specific kernel, members of the community would need to do one of the following:

  • Run stackrox on one of the platforms we support (we do support most major distributions as well as cloud providers).
  • Run stackrox with collector disabled/in a crashloop, missing some of the stackrox functionality.
  • Build their own driver and supply it to stackrox at runtime.

The last point is the subject of this issue. I believe it would be a nice addition to have a user-friendly way for community members to compile their own drivers. Some potential solutions I can think of:

  • Provide a docker image that could be run with the kernel headers and collector code mounted into it, leaving the compiled drivers on the host (a rough sketch of what this could look like is shown at the end of this issue). This could further be improved with a make target to not only compile the drivers, but also tag a collector image with those drivers embedded in it, ready to be used in a local deployment. Alternatively, we could come up with a way to create a support package with those same drivers that could be uploaded to Central through roxctl.
  • Create a way for users to add kernels to be compiled and distributed through channels similar to how we distribute our supported drivers. This is similar to how Falco maintains their community drivers, but it would incur extra expense and effort for StackRox to support and maintain such a system.

There are of course some extra wrinkles that might need to be ironed out; for instance, some Kubernetes test tools provide their own VM, with the kernel headers for it distributed as a layer in a separate image. But I think having a simple approach (even if somewhat clunky initially) could encourage the community to build on and improve it.
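
The rough sketch mentioned for the first option (the builder image name, mount paths, and make target are all hypothetical):

# Run a throwaway build container with the host's kernel headers and the collector
# checkout mounted; compiled probes land in ./out on the host.
docker run --rm \
  -v /usr/src/kernels/$(uname -r):/kernel-headers:ro \
  -v "$PWD/collector":/collector \
  -v "$PWD/out":/output \
  example.io/collector-driver-builder:latest \
  make -C /collector drivers KERNEL_HEADERS=/kernel-headers OUTPUT_DIR=/output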

Retry transient download failures in operator build process (ROX-12397)

Example curl failure:

$ make -C operator kuttl
make: Entering directory '/go/src/github.com/stackrox/stackrox/operator'
+ mkdir -p bin
+ curl --fail --location --output /go/src/github.com/stackrox/stackrox/operator//bin/kubectl-kuttl-0.11.0-verbose-resource https://github.com/porridge/kuttl/releases/download/v0.11.0-verbose-resource/kubectl-kuttl_0.11.0-verbose-resource_linux_x86_64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

 72 43.8M   72 31.5M    0     0   138M      0 --:--:-- --:--:-- --:--:--  138M
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
make: *** [Makefile:196: kuttl] Error 56
make: Leaving directory '/go/src/github.com/stackrox/stackrox/operator'

It would be great to add a shell retry loop around the curl invocation to retry on such transient failures.
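
A couple of possible shapes for this (retry counts and delays are arbitrary suggestions; --retry-all-errors requires curl >= 7.71):

# let curl itself retry transient failures
curl --fail --location --retry 5 --retry-delay 2 --retry-all-errors --output "$OUT" "$URL"

# or wrap it in a plain shell loop, independent of the curl version
for attempt in 1 2 3 4 5; do
  curl --fail --location --output "$OUT" "$URL" && break
  echo "download failed (attempt $attempt), retrying..." >&2
  sleep 2
done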

Scanner build-updater generate-dump pulls from private gcs bucket

When executing scanner build-updater generate-dump as part of the build instructions, it attempts to pull data from the stackrox-scanner-feed GCS bucket.

./bin/updater generate-dump --out-file image/scanner/dump/dump.zip

Which results in the following error:

ERRO[0050] an error occurred when fetching update        error="StackRox updater cannot be run without a service account" updater name=stackrox

Though I did not supply a GCP service account, I assume I would need to request access to this bucket even if I did. Would it make sense to use a publicly accessible endpoint instead? Let me know if I'm completely off base here.

Fix shell scripts to pass shellcheck linter

In scripts/style/shellcheck_skip.txt we have a list of files that are not linted. The goal of this issue is to fix them so that they pass shellcheck. To run shellcheck, use make shell-style. Ideally, every PR should fix one file.
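
For anyone picking up a file, most fixes are small and mechanical; a typical example is SC2086 (quote variables to prevent word splitting and globbing):

# before: shellcheck flags SC2086 because $src and $dest undergo word splitting
cp $src $dest
# after: quote the expansions
cp "${src}" "${dest}"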

Cannot use OpenShift OAuth with OKD 4.8

Hi,

I am trying to set up StackRox with OpenShift OAuth as an auth provider.

When I try to add the provider I get the following error:

unable to create an auth provider instance: unable to create backend for provider id xxxx-xxx-xxxx-xxxx: failed to create dex openshiftConnector for OpenShift's OAuth Server: failed to query OpenShift endpoint: Get "https://openshift.default.svc/.well-known/oauth-authorization-server": Service Unavailable

I am not sure where StackRox gets this endpoint from; I guess it is hardcoded?

The Central pod logs show nothing regarding this request.
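
For reference, the discovery URL from the error can be probed from inside the cluster (a sketch; depending on cluster configuration the endpoint may require a bearer token):

# Run from any pod in the cluster; the URL is taken verbatim from the error above.
curl -k https://openshift.default.svc/.well-known/oauth-authorization-server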

What can I provide to solve the issue?

OKD version: 4.8
StackRox: 3.71 installed via Helm

Regards

central crashes at startup with quay.io/stackrox-io/main:3.72.1 (also: 3.72.0)

Hello,

the central pod crashes while starting up:

cve/fetcher: 2022/10/20 09:34:33.286511 orchestrator.go:237: Info: Successfully fetched 0 OpenShift CVEs
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x33b2efd]

goroutine 117 [running]:
github.com/stackrox/rox/central/cve/converter/utils.nvdCvssv2ToProtoCvssv2(0x0)
	github.com/stackrox/rox/central/cve/converter/utils/convert_utils.go:173 +0x1d

This was an installation with the Helm chart stackrox-central-services-71.0.0, upgraded to stackrox-central-services-72.0.0, which then started crashing some time (less than a day) after the upgrade. It is still crashing with 72.1.0.

Values for helm look like this:

image:
  registry: <proxy for quay.io>/stackrox-io

env:
  proxyConfig: |
    url: http://...
    excludes:
    - ...

central:
  exposure:
    loadBalancer:
      enabled: true

scanner:
  autoscaling:
    disable: true

This is the full log for central:
crash.log

PSP not needed on OpenShift

The 70.0.0 Helm charts create PSP resources when installing on OpenShift, where PSPs are not available/needed. Suggestion: do not render PSPs when an installation on OpenShift is detected.
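
In the meantime, a possible workaround would be to disable PSP rendering via Helm values, assuming the charts expose a toggle along these lines (the value name below is an assumption and should be checked against the chart's values.yaml):

# values.yaml - value name is an assumption, not verified against the 70.0.0 charts
system:
  enablePodSecurityPolicies: false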

CVE Scanning for node not possible

Hi,

OCP Version - Central: 4.9.33
OCP Version - Secured Cluster: 4.10.20, 4.9.33, 4.8.19
Stackrox Version: 3.71.0

We are deploying StackRox on OCP 4. The Vulnerability Management details page for a node of a secured cluster does not show any CVE data; instead, I can only see the message:

CVE DATA MAY BE INACCURATE
Node unsupported.
Scanning this node is not supported at this time. Please see the release notes for more information.

Question 1: where can I find the release notes?
Question 2: is there a roadmap for new features? E.g. I saw #2588 and am wondering when to expect it in a versioned release.
Question 3: how can I enable scans for CoreOS nodes?

(screenshot of the node's Vulnerability Management page)

Scanner localdev error tar unexpected EOF

Running scanner localdev against some test images results in the following error.

ERRO[0014] error reading "079bc5e75545bf45253ab44ce73fbd51d96fa52ee799031e60b65a82e89df662/layer.tar": EOF

After doing some digging: if I increase the tarutils maxLazyReaderBufferSize to something large enough to avoid spilling to disk, the tar is read successfully. I suspect there might be an issue with the disk-backed buffer.
