openclarity / kubeclarity

kubeclarity's Introduction

KubeClarity Logo

KubeClarity is a tool for detection and management of Software Bill Of Materials (SBOM) and vulnerabilities of container images and filesystems. It scans both runtime K8s clusters and CI/CD pipelines for enhanced software supply chain security.

Why?

SBOM & Vulnerability Detection Challenges

  • Effective vulnerability scanning requires accurate Software Bill Of Materials (SBOM) detection, which must cope with:
    • Various programming languages and package managers
    • Various OS distributions
    • Package dependency information is usually stripped upon build
  • Which one is the best scanner/SBOM analyzer?
  • What should we scan: Git repos, builds, container images or runtime?
  • Each scanner/analyzer has its own format - how to compare the results?
  • How to manage the discovered SBOM and vulnerabilities?
  • How are my applications affected by a newly discovered vulnerability?

Solution

  • Separate vulnerability scanning into 2 phases:
    • Content analysis to generate SBOM
    • Scan the SBOM for vulnerabilities
  • Create a pluggable infrastructure to:
    • Run several content analyzers in parallel
    • Run several vulnerability scanners in parallel
  • Scan and merge results between different CI stages using the KubeClarity CLI
  • Runtime K8s scan to detect vulnerabilities discovered post-deployment
  • Group scanned resources (images/directories) under defined applications to navigate the object tree dependencies (applications, resources, packages, vulnerabilities)
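
In practice, the two phases map directly onto the CLI commands documented below; for example:

# Phase 1: content analysis generates an SBOM from the image
kubeclarity-cli analyze nginx:latest -o nginx.sbom
# Phase 2: the SBOM is scanned for vulnerabilities
kubeclarity-cli scan nginx.sbom --input-type sbom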

Features

  • Dashboard
    • Fixable vulnerabilities per severity
    • Top 5 vulnerable elements (applications, resources, packages)
    • New vulnerabilities trends
    • Package count per license type
    • Package count per programming language
    • General counters
  • Applications
    • Automatic application detection in K8s runtime
    • Create/edit/delete applications
    • Per application, navigation to related:
      • Resources (images/directories)
      • Packages
      • Vulnerabilities
      • Licenses in use by the resources
  • Application Resources (images/directories)
    • Per resource, navigation to related:
      • Applications
      • Packages
      • Vulnerabilities
  • Packages
    • Per package, navigation to related:
      • Applications
      • Linkable list of resources and the detecting SBOM analyzers
      • Vulnerabilities
  • Vulnerabilities
    • Per vulnerability, navigation to related:
      • Applications
      • Resources
      • List of detecting scanners
  • K8s Runtime scan
    • On-demand or scheduled scanning
    • Automatic detection of target namespaces
    • Scan progress and result navigation per affected element (applications, resources, packages, vulnerabilities)
    • CIS Docker benchmark
  • CLI (CI/CD)
    • SBOM generation using multiple integrated content analyzers (Syft, cyclonedx-gomod)
    • SBOM/image/directory vulnerability scanning using multiple integrated scanners (Grype, Dependency-track)
    • Merging of SBOM and vulnerabilities across different CI/CD stages
    • Export results to KubeClarity backend
  • API
    • The API for KubeClarity can be found here

Integrated SBOM generators and vulnerability scanners

KubeClarity content analyzer integrates with the following SBOM generators:

  • Syft
  • cyclonedx-gomod

KubeClarity vulnerability scanner integrates with the following scanners:

  • Grype
  • Dependency-Track
  • Trivy

Architecture

Getting Started

KubeClarity Backend

Install using Helm:

  1. Add Helm repo

    helm repo add kubeclarity https://openclarity.github.io/kubeclarity
  2. Save KubeClarity default chart values

    helm show values kubeclarity/kubeclarity > values.yaml
  3. Check the configuration in values.yaml and update the required values if needed. To enable and configure the supported SBOM generators and vulnerability scanners, please check the "analyzer" and "scanner" config under the "vulnerability-scanner" section in Helm values.
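
     For example, to locate the relevant section in the saved values file:

     grep -n -A 10 "vulnerability-scanner" values.yaml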

  4. Deploy KubeClarity with Helm

    helm install --values values.yaml --create-namespace kubeclarity kubeclarity/kubeclarity -n kubeclarity

     or, for an OpenShift Restricted SCC-compatible install:

    helm install --values values.yaml --create-namespace kubeclarity kubeclarity/kubeclarity -n kubeclarity --set global.openShiftRestricted=true \
      --set kubeclarity-postgresql.securityContext.enabled=false --set kubeclarity-postgresql.containerSecurityContext.enabled=false \
      --set kubeclarity-postgresql.volumePermissions.enabled=true --set kubeclarity-postgresql.volumePermissions.securityContext.runAsUser="auto" \
      --set kubeclarity-postgresql.shmVolume.chmod.enabled=false
  5. Port forward to KubeClarity UI:

    kubectl port-forward -n kubeclarity svc/kubeclarity-kubeclarity 9999:8080
  6. Open KubeClarity UI in the browser: http://localhost:9999/
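
To verify the deployment, check that all pods in the namespace reach the Running state:

kubectl get pods -n kubeclarity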

NOTE
KubeClarity requires these K8s permissions:

Permission                                                    | Reason
------------------------------------------------------------- | ------
Read secrets in CREDS_SECRET_NAMESPACE (default: kubeclarity) | Allows configuring image pull secrets for scanning private image repositories.
Read config maps in the KubeClarity deployment namespace      | Required for getting the configured template of the scanner job.
List pods in cluster scope                                    | Required for calculating the target pods that need to be scanned.
List namespaces                                               | Required for fetching the target namespaces to scan in the K8s runtime scan UI.
Create & delete jobs in cluster scope                         | Required for managing the jobs that scan the target pods in their namespaces.

Uninstall using Helm:

  1. Helm uninstall

    helm uninstall kubeclarity -n kubeclarity
  2. Clean resources

    By default, Helm will not remove the PVCs and PVs for the StatefulSets. Run the following command to delete them all:

    kubectl delete pvc -l app.kubernetes.io/instance=kubeclarity -n kubeclarity
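
     To confirm that nothing remains, the following should return no resources:

     kubectl get pvc -n kubeclarity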

Build and Run Locally with Demo Data

  1. Build UI & backend and start the backend locally (2 options):

    1. Using docker:
      1. Build UI and backend (the image tag is set using VERSION):
        VERSION=test make docker-backend
      2. Run the backend using demo data:
        docker run -p 8080:8080 -e FAKE_RUNTIME_SCANNER=true -e FAKE_DATA=true -e ENABLE_DB_INFO_LOGS=true -e DATABASE_DRIVER=LOCAL ghcr.io/openclarity/kubeclarity:test run
    2. Local build:
      1. Build UI and backend
        make ui && make backend
      2. Copy the built site:
        cp -r ./ui/build ./site
      3. Run the backend locally using demo data:
        FAKE_RUNTIME_SCANNER=true DATABASE_DRIVER=LOCAL FAKE_DATA=true ENABLE_DB_INFO_LOGS=true ./backend/bin/backend run
  2. Open KubeClarity UI in the browser: http://localhost:8080/

CLI

KubeClarity includes a CLI that can be run locally and is especially useful in CI/CD pipelines. It can analyze images and directories to generate an SBOM, and scan the SBOM for vulnerabilities. The results can be exported to the KubeClarity backend.
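
A typical CI pipeline chains the commands documented below: generate an SBOM from the built image, scan it, and export both results to the backend (addresses and IDs are placeholders):

# Generate an SBOM for the built image and export it to the backend
BACKEND_HOST=<KubeClarity backend address> BACKEND_DISABLE_TLS=true kubeclarity-cli analyze <image> --application-id <application ID> -e -o image.sbom

# Scan the generated SBOM and export the vulnerability results
BACKEND_HOST=<KubeClarity backend address> BACKEND_DISABLE_TLS=true kubeclarity-cli scan image.sbom --input-type sbom --application-id <application ID> -e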

Installation

Binary Distribution

Download the release distribution for your OS from the releases page

Unpack the kubeclarity-cli binary, add it to your PATH, and you are good to go!

Docker Image

A Docker image is available at ghcr.io/openclarity/kubeclarity-cli with a list of available tags here.

Local Compilation

make cli

Copy ./cli/bin/cli to a directory in your PATH and rename it kubeclarity-cli.

SBOM Generation

Usage:

kubeclarity-cli analyze <image/directory name> --input-type <dir|file|image(default)> -o <output file or stdout>

Example:

kubeclarity-cli analyze --input-type image nginx:latest -o nginx.sbom

Optionally, a list of the content analyzers to use can be configured using the ANALYZER_LIST env variable, separated by spaces (e.g. ANALYZER_LIST="<analyzer 1 name> <analyzer 2 name>")

Example:

ANALYZER_LIST="syft gomod" kubeclarity-cli analyze --input-type image nginx:latest -o nginx.sbom

Vulnerability Scanning

Usage:

kubeclarity-cli scan <image/sbom/directory/file name> --input-type <sbom|dir|file|image(default)> -f <output file>

Example:

kubeclarity-cli scan nginx.sbom --input-type sbom

Optionally, a list of the vulnerability scanners to use can be configured using the SCANNERS_LIST env variable, separated by spaces (e.g. SCANNERS_LIST="<Scanner1 name> <Scanner2 name>")

Example:

SCANNERS_LIST="grype trivy" kubeclarity-cli scan nginx.sbom --input-type sbom

Exporting Results to KubeClarity Backend

To export CLI results to the KubeClarity backend, you need an application ID as defined by the KubeClarity backend. The application ID can be found in the Applications screen in the UI or via the KubeClarity API.

Exporting SBOM

# The SBOM can be exported to KubeClarity backend by setting the BACKEND_HOST env variable and the -e flag.
# Note: Until TLS is supported, BACKEND_DISABLE_TLS=true should be set.
BACKEND_HOST=<KubeClarity backend address> BACKEND_DISABLE_TLS=true kubeclarity-cli analyze <image> --application-id <application ID> -e -o <SBOM output file>

# For example:
BACKEND_HOST=localhost:9999 BACKEND_DISABLE_TLS=true kubeclarity-cli analyze nginx:latest --application-id 23452f9c-6e31-5845-bf53-6566b81a2906 -e -o nginx.sbom

Exporting Vulnerability Scan Results

# The vulnerability scan result can be exported to KubeClarity backend by setting the BACKEND_HOST env variable and the -e flag.
# Note: Until TLS is supported, BACKEND_DISABLE_TLS=true should be set.

BACKEND_HOST=<KubeClarity backend address> BACKEND_DISABLE_TLS=true kubeclarity-cli scan <image> --application-id <application ID> -e

# For example:
SCANNERS_LIST="grype" BACKEND_HOST=localhost:9999 BACKEND_DISABLE_TLS=true kubeclarity-cli scan nginx.sbom --input-type sbom  --application-id 23452f9c-6e31-5845-bf53-6566b81a2906 -e

Advanced Configuration

SBOM generation using local docker image as input

# Local docker images can be analyzed using the LOCAL_IMAGE_SCAN env variable

# For example:
LOCAL_IMAGE_SCAN=true kubeclarity-cli analyze nginx:latest -o nginx.sbom

Vulnerability scanning using local docker image as input

# Local docker images can be scanned using the LOCAL_IMAGE_SCAN env variable

# For example:
LOCAL_IMAGE_SCAN=true kubeclarity-cli scan nginx.sbom

Private registry support for CLI

The KubeClarity CLI can read a config file that stores credentials for private registries.

Example registry section of the config file:

registry:
  auths:
    - authority: <registry 1>
      username: <username for registry 1>
      password: <password for registry 1>
    - authority: <registry 2>
      token: <token for registry 2>

Example registry config without an authority (in this case the credentials will be used for all registries):

registry:
  auths:
    - username: <username>
      password: <password>
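
For example, a config file with concrete (purely illustrative) values for a single private registry:

registry:
  auths:
    - authority: registry.example.com
      username: scanner-user
      password: s3cr3t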

Specify config file for CLI

# The default config path is $HOME/.kubeclarity, or it can be specified with the `--config` command line flag.
# kubeclarity <scan/analyze> <image name> --config <kubeclarity config path>

# For example:
kubeclarity scan registry/nginx:private --config $HOME/own-kubeclarity-config

Private registry support for K8s runtime scan

KubeClarity uses k8schain of google/go-containerregistry for authenticating to registries. If the necessary service credentials are not discoverable by k8schain, they can be defined via the secrets described below.

In addition, if the service credentials are not located in the "kubeclarity" namespace, set CREDS_SECRET_NAMESPACE on the KubeClarity deployment. When using the Helm chart, CREDS_SECRET_NAMESPACE is set to the namespace in which the release is installed.
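
For example, assuming the deployment name used elsewhere in this README, the variable can be set directly on the running deployment:

kubectl -n kubeclarity set env deployment/kubeclarity-kubeclarity CREDS_SECRET_NAMESPACE=<namespace with the registry secrets>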

Amazon ECR

Create an AWS IAM user with AmazonEC2ContainerRegistryFullAccess permissions.

Use the user credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_DEFAULT_REGION) to create the following secret:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: ecr-sa
  namespace: kubeclarity
type: Opaque
data:
  AWS_ACCESS_KEY_ID: $(echo -n 'XXXX'| base64 -w0)
  AWS_SECRET_ACCESS_KEY: $(echo -n 'XXXX'| base64 -w0)
  AWS_DEFAULT_REGION: $(echo -n 'XXXX'| base64 -w0)
EOF

Note:

  1. Secret name must be ecr-sa
  2. Secret data keys must be set to AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION

Google GCR

Create a Google service account with Artifact Registry Reader permissions.

Use the service account json file to create the following secret

kubectl -n kubeclarity create secret generic --from-file=sa.json gcr-sa

Note:

  1. Secret name must be gcr-sa
  2. sa.json must be the name of the service account json file when generating the secret
  3. KubeClarity uses application default credentials. These only work when running KubeClarity in GCP.

Merging of SBOM and vulnerabilities across different CI/CD stages

# An additional SBOM will be merged into the final results when '--merge-sbom' is provided during analysis. The input SBOM can be in CycloneDX XML or CycloneDX JSON format.
# For example:
ANALYZER_LIST="syft" kubeclarity-cli analyze nginx:latest -o nginx.sbom --merge-sbom inputsbom.xml

Output Different SBOM Formats

The kubeclarity-cli analyze command can output the resulting SBOM in several formats, if required, to integrate with other systems. The supported formats are:

Format                   | Configuration Name
------------------------ | ------------------
CycloneDX JSON (default) | cyclonedx-json
CycloneDX XML            | cyclonedx-xml
SPDX JSON                | spdx-json
SPDX Tag Value           | spdx-tv
Syft JSON                | syft-json

WARNING
KubeClarity processes CycloneDX internally; the other formats are supported through a conversion. The conversion process can be lossy due to incompatibilities between formats, so not all fields/information are guaranteed to be present in the resulting output.

To configure the kubeclarity-cli to use a format other than the default, the ANALYZER_OUTPUT_FORMAT environment variable can be used with the configuration name from above:

ANALYZER_OUTPUT_FORMAT="spdx-json" kubeclarity-cli analyze nginx:latest -o nginx.sbom

Remote Scanner Servers For CLI

When running the kubeclarity CLI to scan for vulnerabilities, the CLI will need to download the relevant vulnerability DBs to the location where the CLI is running. Running the CLI in a CI/CD pipeline will result in downloading the DBs on each run, wasting time and bandwidth. For this reason several of the supported scanners have a remote mode, in which a server is responsible for DB management and, possibly, scanning of the artifacts.

Note

The examples below are for each of the scanners, but they can be combined in a single run, just as in non-remote mode.

Trivy

The Trivy scanner supports remote mode using the Trivy server. The Trivy server can be deployed as documented here: trivy client-server mode. Instructions to install the Trivy CLI are available here: trivy install. The Aqua team provides an official container image that can be used to run the server in Kubernetes/Docker, which we'll use in the examples here.

To start the server:

docker run -p 8080:8080 --rm aquasec/trivy:0.41.0 server --listen 0.0.0.0:8080

To run a scan using the server:

SCANNERS_LIST="trivy" SCANNER_TRIVY_SERVER_ADDRESS="http://<trivy server address>:8080" ./kubeclarity_cli scan --input-type sbom nginx.sbom

The Trivy server also provides token-based authentication to prevent unauthorized use of a Trivy server instance. You can enable it by running the server with an extra flag:

docker run -p 8080:8080 --rm aquasec/trivy:0.41.0 server --listen 0.0.0.0:8080 --token mytoken

and passing the token to the scanner:

SCANNERS_LIST="trivy" SCANNER_TRIVY_SERVER_ADDRESS="http://<trivy server address>:8080" SCANNER_TRIVY_SERVER_TOKEN="mytoken" ./kubeclarity_cli scan --input-type sbom nginx.sbom

Grype

Grype supports remote mode using grype-server, a RESTful Grype wrapper which provides an API that receives an SBOM and returns the Grype scan results for that SBOM. Grype-server ships as a container image, so it can be run in Kubernetes or standalone via Docker.

To start the server:

docker run -p 9991:9991 --rm gcr.io/eticloud/k8sec/grype-server:v0.1.5

To run a scan using the server:

SCANNERS_LIST="grype" SCANNER_GRYPE_MODE="remote" SCANNER_REMOTE_GRYPE_SERVER_ADDRESS="<grype server address>:9991" SCANNER_REMOTE_GRYPE_SERVER_SCHEMES="https" ./kubeclarity_cli scan --input-type sbom nginx.sbom

If the Grype server is deployed with TLS, you can override the default URL scheme like so:

SCANNERS_LIST="grype" SCANNER_GRYPE_MODE="remote" SCANNER_REMOTE_GRYPE_SERVER_ADDRESS="<grype server address>:9991" SCANNER_REMOTE_GRYPE_SERVER_SCHEMES="https" ./kubeclarity_cli scan --input-type sbom nginx.sbom

Dependency Track

See example configuration here

Limitations

  1. Supports Docker Image Manifest V2, Schema 2 (https://docs.docker.com/registry/spec/manifest-v2-2/) only. Scans of earlier manifest versions will fail.
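
To check which manifest format an image uses before scanning, one option (assuming the docker CLI is available) is to inspect its media type:

docker manifest inspect nginx:latest | grep mediaType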

Roadmap

  • Integration with additional content analyzers (SBOM generators)
  • Integration with additional vulnerability scanners
  • CIS Docker benchmark in UI
  • Image signing using Cosign
  • CI/CD metadata signing and attestation using Cosign and in-toto (supply chain security)
  • System settings and user management

Contributing

Pull requests and bug reports are welcome.

For larger changes please create an Issue in GitHub first to discuss your proposed changes and possible implications.

For more details please see the Contribution guidelines for this project.

License

Apache License, Version 2.0


kubeclarity's Issues

Features to compare two scanning results of related namespaces

In some cases, a feature to compare two scan results can be helpful.
For example, users see the first results with vulnerabilities. Then, after the Security/DevOps team has fixed the related (critical) issues and a second scan has run, it would be good to have a feature that compares the two scan results and highlights what was fixed, upgraded, etc.

Docker insecure-registries support

Hello!

Inside our organisation we use a Docker registry without HTTPS.
daemon.json: { "insecure-registries":["dist.hosts.rfi:5000"] }

In this case the Kubei scan failed with the following log:
2020-11-22T09:50:25.353962012Z failed execute the request 2020-11-22T09:50:25.354069228Z time="2020-11-22T09:50:25Z" level=error msg="Failed to execute scan: failed to pull image: failed execute the request. url=https://dist.hosts.rfi:5000/v2/vault-kub-init/manifests/7502: Get \"https://dist.hosts.rfi:5000/v2/vault-kub-init/manifests/7502\": http: server gave HTTP response to HTTPS client" 2020-11-22T09:50:25.450931817Z time="2020-11-22T09:50:25Z" level=info msg="response Status: 202 Accepted"

How can Kubei be configured to use HTTP instead of HTTPS for local registries?

SecurityContexts & dropping privileges

As initially reported there: #20
And partially fixed here: https://github.com/Portshift/kubei/pull/25/files

Note that every container created by Kubei is also subject to broken securityContexts, leading to Jobs being created while their Pods are stuck:

  Normal   Scheduled  4m42s                  default-scheduler  Successfully assigned registry/scanner-docker-registry-exporter-1042b89a-080d-4dad-b84d-1lwpq8 to compute2
  Normal   Pulling    4m42s                  kubelet            Pulling image "gcr.io/eticloud/k8sec/klar:1.0.16"
  Normal   Pulled     4m16s                  kubelet            Successfully pulled image "gcr.io/eticloud/k8sec/klar:1.0.16" in 26.151366977s
  Normal   Pulling    4m16s                  kubelet            Pulling image "gcr.io/eticloud/k8sec/dockle:1.0.3"
  Normal   Pulled     3m54s                  kubelet            Successfully pulled image "gcr.io/eticloud/k8sec/dockle:1.0.3" in 21.426950017s
  Warning  Failed     3m10s (x5 over 3m54s)  kubelet            Error: container has runAsNonRoot and image will run as root
  Warning  Failed     2m56s (x6 over 4m16s)  kubelet            Error: container has runAsNonRoot and image will run as root

Vulnerability scanners that can't run on secured clusters may miss their target audience.

https://github.com/Portshift/kubei/blob/master/pkg/scanner/job-manager.go#L386 may need a fix that inserts a proper SecurityContext when generating the job's pod template. Sadly, I'm no Go expert...
Anyone here who would both understand the issue and know how best to fix it?

securityContext -- runAsUser "id>0", ....

We have a PSP, so every deployment must have a statement like:

      securityContext:
        runAsUser: 900
        runAsGroup: 900
        runAsNonRoot: true
        privileged: false
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - all

or, if needed:

      securityContext:
        runAsUser: 900
        runAsGroup: 900
        runAsNonRoot: false
        privileged: true
        allowPrivilegeEscalation: true
        capabilities:
          drop:
            - all

I suppose postgres does not need root privileges?

API Access

I access the KubeClarity API address that I set up in the Kubernetes environment through port 8888, and I am getting 404s when trying to access the paths according to the Swagger definition.

arm64 support

Is your feature request related to a problem? Please describe.
It would be nice if kubeclarity could be installed on arm64 nodes

Describe the solution you'd like
multi arch docker images

Describe alternatives you've considered
currently working with trivy-operator

Failed to scan image ... Reasons: job was timeout

What happened:

Deployed KubeClarity, scanned a namespace, and got lots of "Failed to scan image ... Reasons: job was timeout" errors for images available on the public Docker registry, for example:

Failed to scan image "docker.io/bitnami/postgresql:11.7.0-debian-10-r80". Effected pods: matt-dc-postgresql-0/sbu-dev. Reasons: job was timeout. imageID=docker.io/bitnami/postgresql@sha256:4a169cd53f2e6a0631fdb5b194c1d214fb8e598fcd1341ca7b2eb9b533a0bdf2.

What you expected to happen:

The job to succeed or better output as to the error.

How to reproduce it (as minimally and precisely as possible):

Deployed kubeclarity chart to k8s and run a namespace scan.

Are there any error messages in KubeClarity logs?

(e.g. kubectl logs -n kubeclarity --selector=app=kubeclarity)

kubectl logs -n kubeclarity --selector=app=kubeclarity-kubeclarity
time="2022-04-26T14:21:57Z" level=warning msg="job was timeout. imageID=docker.io/library/busybox@sha256:5791f73368915ca6ee6a9aeae5580637b016994dd83a37452c21666daf8c6188" func="github.com/cisco-open/kubei/runtime_scan/pkg/scanner.(*Scanner).waitForResult" file="/build/runtime_scan/pkg/scanner/job_managment.go:142" scanner id=2c739e7c-140d-4533-86e8-e22dc952e67b
time="2022-04-26T14:21:57Z" level=warning msg="job was timeout. imageID=docker.io/bitnami/minideb@sha256:1e37cdd63d5f18621efac660b714f0b99c614cffad222dad5066d17cba678b0f" func="github.com/cisco-open/kubei/runtime_scan/pkg/scanner.(*Scanner).waitForResult" file="/build/runtime_scan/pkg/scanner/job_managment.go:142" scanner id=2c739e7c-140d-4533-86e8-e22dc952e67b
time="2022-04-26T14:31:55Z" level=warning msg="job was timeout. imageID=docker.io/bitnami/bitnami-shell@sha256:d351060db08ccb273993fdd974005387ba4768af1b0e6a0418e4db37f6a13ab1" func="github.com/cisco-open/kubei/runtime_scan/pkg/scanner.(*Scanner).waitForResult" file="/build/runtime_scan/pkg/scanner/job_managment.go:142" scanner id=2c739e7c-140d-4533-86e8-e22dc952e67b
time="2022-04-26T14:31:56Z" level=warning msg="job was timeout. imageID=docker.io/bitnami/bitnami-shell@sha256:abd4894c83238adb0c5acbce407a594562230a7e2e0c3b9ae80698d0b79330d8" func="github.com/cisco-open/kubei/runtime_scan/pkg/scanner.(*Scanner).waitForResult" file="/build/runtime_scan/pkg/scanner/job_managment.go:142" scanner id=2c739e7c-140d-4533-86e8-e22dc952e67b
time="2022-04-26T14:31:56Z" level=warning msg="job was timeout. imageID=docker.io/owasp/dependency-check@sha256:1b4eda20cbe85716517a3ce570594cc0f8c093499242c7ff3b04d4dc352d65e9" func="github.com/cisco-open/kubei/runtime_scan/pkg/scanner.(*Scanner).waitForResult" file="/build/runtime_scan/pkg/scanner/job_managment.go:142" scanner id=2c739e7c-140d-4533-86e8-e22dc952e67b
time="2022-04-26T14:31:56Z" level=warning msg="job was timeout. imageID=docker.io/dependencytrack/apiserver@sha256:21e65edec33af0b6b1b6d258902b361bd491e2bb747c9c9076b3966c0e9e9e0f" func="github.com/cisco-open/kubei/runtime_scan/pkg/scanner.(*Scanner).waitForResult" file="/build/runtime_scan/pkg/scanner/job_managment.go:142" scanner id=2c739e7c-140d-4533-86e8-e22dc952e67b
time="2022-04-26T14:31:56Z" level=warning msg="job was timeout. imageID=docker.io/defectdojo/defectdojo-django@sha256:0ff87f5c667ae164497d2bdfbb6659a1cafeeb7ecfae722389279aa9054ade73" func="github.com/cisco-open/kubei/runtime_scan/pkg/scanner.(*Scanner).waitForResult" file="/build/runtime_scan/pkg/scanner/job_managment.go:142" scanner id=2c739e7c-140d-4533-86e8-e22dc952e67b

Anything else we need to know?:

Environment:

kubectl -n kubeclarity exec deploy/kubeclarity-kubeclarity -- ./backend version
Defaulted container "kubeclarity" out of: kubeclarity, kubeclarity-kubeclarity-wait-for-pg-db (init), kubeclarity-kubeclarity-wait-for-sbom-db (init), kubeclarity-kubeclarity-wait-for-grype-server (init)
Version: v2.1.2
Commit: 263ceb33b46df200305afa82b176e5a91c1c5a4a

k8s.io/kubernetes imports are not recommended

@cisco-open/kubei-maintainers -- From @liggitt in kubernetes/kubernetes#90358 (comment):

go: k8s.io/[email protected] requires ...

This is caused by depending on k8s.io/kubernetes directly as a library, which is not supported. The components intended to be used as libraries are published as standalone modules like k8s.io/api, k8s.io/apimachinery, k8s.io/client-go, etc, and can be referenced directly.

If you want to do this anyway, see discussion in kubernetes/kubernetes#79384 about how you can workaround the v0.0.0 requirements.

Let's please remove the dep from https://github.com/cisco-open/kubei/blob/651fb9a41e88e3c62b2f0ea002a926fea07645f2/go.mod#L18.

Consider the suggestions in https://pkg.go.dev/github.com/google/go-containerregistry/pkg/authn as alternatives.

Add IRSA Support for Pod Access to ECR

Is your feature request related to a problem? Please describe.
Currently the ECR support requires management of separate IAM credentials and kubernetes secrets. Using an IAM Roles for Service Accounts (IRSA) approach would allow the reuse of IAM policies, and remove the need to manage IAM users and Kubernetes secrets.

Describe the solution you'd like
Remove the use of static secrets and use IRSA instead. This approach is supported by Amazon EKS and non-EKS Kubernetes on AWS, with the Amazon EKS Pod Identity Webhook. The approach is described in this blog post.
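
A minimal sketch of the IRSA approach, assuming an IAM role with the required ECR read policy already exists (the role ARN is a placeholder):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeclarity
  namespace: kubeclarity
  annotations:
    # IRSA: EKS injects temporary AWS credentials for this role into pods using this service account
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<ecr-reader-role>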

Describe alternatives you've considered
I used kiam and kube2iam in the past, but both solutions required pod-level access to the host-level instance metadata service (IMDS) to use the AWS EC2 host instance profile. Preventing the pods from accessing the AWS EC2 IMDS is considered a best practice, and prevents the pods from gaining access to permissions meant for the host.

Additional context
The use of IRSA is considered a best practice when integrating Kubernetes pods to AWS IAM.

Support to ignore "unfixed"

Hi, we would like to exclude reported CVEs caused by upstream binaries that don't have a fix yet.

Trivy allows specifying this with the --ignore-unfixed flag.

Or is there a flag for this already, but not yet documented?

Support for dockercfg ImagePullSecret format

Thanks for this project!

It seems there are multiple ways to store ImagePullSecrets, one using the dockerconfigjson format and one using dockercfg. It looks like kubei currently only supports the dockerconfigjson format, as I have jobs that fail to start with the kubelet message "Error: couldn't find key .dockerconfigjson in Secret default/registrycredentials" as it tries to mount the secret into the job's pod.

I haven't been able to tell whether simply doing an if/else for mounting the correct secret value into the pod would solve this issue, or if the format of the docker secret is critical to how kubei authenticates to the image registry.

ADD service account option to spawned scanning containers

Hi guys, I started playing with kubei today; however, it seems to lack an option to configure the service account used for the spawned scanning containers. This causes lots of issues when working inside a security-conscious k8s cluster with restrictive PSPs. Namely, the scans won't start, as they load under the default PSP policy, not the policy assigned to the service account.

Fix: allow an argument to be passed along the lines of SCANNER_SERVICE_ACCOUNT which will point to a k8s service account to be used by the scanning agent.

Recommended resources for a kubei container

Hello team, and thanks for sharing this project with the open source community!
I'd like to deploy kubei on my cluster and couldn't see any resource limitations in the deployment object. What are the minimum RAM and CPU a kubei container needs in order to operate well?
Thanks!

can't pull from docker-registry

I see scans that would error:

$ kubectl logs -n xxx scanner-radosgw-yyy
time="2021-07-02T22:59:26Z" level=error msg="failed to execute scan: failed to pull image: failed to parse image response. request url=http://registry.registry.svc.cluster.local:5000/v2/ci/radosgw/manifests/master: docker Registry responded with unsupported Content-Type: response=HTTP/1.1 404 Not Found\r\nContent-Length: 122\r\nContent-Type: application/json; charset=utf-8\r\nDate: Fri, 02 Jul 2021 22:59:26 GMT\r\nDocker-Distribution-Api-Version: registry/2.0\r\nX-Content-Type-Options: nosniff\r\n\r\n. unknown"
time="2021-07-02T22:59:26Z" level=info msg="response Status: 202 Accepted"

I do get an error, curl-ing that URL (using a NodePort / points to the exact service queried by the scanner jobs)

$ curl -vvv http://10.42.253.10:30755/v2/ci/radosgw/manifests/master
*   Trying 10.42.253.10...
* TCP_NODELAY set
* Connected to 10.42.253.10 (10.42.253.10) port 30755 (#0)
> GET /v2/ci/radosgw/manifests/master HTTP/1.1
> Host: 10.42.253.10:30755
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< Content-Type: application/json; charset=utf-8
< Docker-Distribution-Api-Version: registry/2.0
< X-Content-Type-Options: nosniff
< Date: Fri, 02 Jul 2021 23:06:03 GMT
< Content-Length: 122
<
{"errors":[{"code":"MANIFEST_UNKNOWN","message":"OCI manifest found, but accept header does not support OCI manifests"}]}
* Curl_http_done: called premature == 0

Though with proper headers, it looks fine:

$ curl --header 'Accept: application/vnd.oci.image.manifest.v1+json' http://10.42.253.10:30755/v2/ci/radosgw/manifests/master
{"schemaVersion":2,"config":{"mediaType":"application/vnd.oci.image.config.v1+json","digest":"sha256:505e7b55874979f1407fdccfd15b7944bc1ed365f0fe512b81fc54178114f90d","size":914},"layers":[{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:80a48e1aced8a137f1d3eb48f474814f822f2ec6daa5bf3512844f5ca41ca4a0","size":108462063}]}

What can be done?!

"Error: validation: chart.metadata.version "latest" is invalid" with local helm install

What happened:

deploying most recent helm chart throws above error when installing chart locally instead of using remote repo.
the repo name has changed to openclarity but the chart version there (2.1.2) has not (yet) been updated to include the OpenShift value - as can be verified by using the OpenShift install command from the readme with the --dry-run switch

What you expected to happen:

helm chart to deploy normally when using "helm install kubeclarity ./ --set

How to reproduce it (as minimally and precisely as possible):

follow helm install command provided in readme but substitute "./" for "kubeclarity/kubeclarity"

Are there any error messages in KubeClarity logs?

(e.g. kubectl logs -n kubeclarity --selector=app=kubeclarity)
kubeclarity is not yet deployed at this time

Anything else we need to know?:

adding versions for chart and app into Chart.yaml results in successful deployment - suggest not to use "latest" in the chart

Environment:

  • Kubernetes version (use kubectl version --short):
    oc version
    Client Version: 4.6.16
    Server Version: 4.9.28
    Kubernetes Version: v1.22.5+a36406b
  • KubeClarity version (use kubectl -n kubeclarity exec deploy/kubeclarity -- ./backend version)
    newer than 2.1.2 (git)
  • Cloud provider or hardware configuration:
    ROKS
  • Others:
    helm version
    version.BuildInfo{Version:"v3.6.2+5.el8", GitCommit:"eb607dd4f123eaedab662cef21008d177f2c3426", GitTreeState:"clean", GoVersion:"go1.15.13"}

Deployment is not successful

What happened:

New implementation: we are trying to deploy kubei from scratch; it failed, and below are the description and error log for your reference. Kindly help us to proceed further.

In the deployment manifest file we updated the image version and location as below:

containers:
  - name: kubei
    image: gcr.io/eticloud/k8sec/kubei:1.0.11
    imagePullPolicy: Always

How to reproduce it (as minimally and precisely as possible): - Attached complete yaml

Are there any error messages in KubeClarity logs?

(e.g. kubectl logs -n kubeclarity --selector=app=kubeclarity)
kubectl logs -n kubeclarity2 --selector=app=kubei
Error from server (BadRequest): container "kubei" in pod "kubei-f9d94f555-nsd96" is waiting to start: PodInitializing

kubectl get pods -n kubeclarity2
NAME READY STATUS RESTARTS AGE
clair-7b4f7859c-hjvn5 1/1 Running 0 20m
clair-postgres-79f54c9fbc-8nqzx 1/1 Running 0 20m
kubei-f9d94f555-nsd96 0/1 Init:0/1 0 20m

kubectl describe po kubei-f9d94f555-nsd96 -n kubeclarity2
Name:           kubei-f9d94f555-nsd96
Namespace:      kubeclarity2
Priority:       0
Node:           aks-worker1-20663458-vmss000003/10.240.0.5
Start Time:     Tue, 26 Apr 2022 13:47:20 +0530
Labels:         app=kubei
                kubeiShouldScan=false
                pod-template-hash=f9d94f555
Annotations:    <none>
Status:         Pending
IP:             10.240.0.11
IPs:
  IP:           10.240.0.11
Controlled By:  ReplicaSet/kubei-f9d94f555
Init Containers:
  init-clairsvc:
    Container ID:  containerd://20330f5f83504e63e3ad6bf50c9d44ef92f39642b535e5197d7621dbd37fa8db
    Image:         yauritux/busybox-curl
    Image ID:      docker.io/yauritux/busybox-curl@sha256:0bf4479473a6065dcbb0b5caff9aae2e502983a66de20bdc1a07aef95988e1bb
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      set -x; while [ $(curl -sw '%{http_code}' "http://clair.kubei:6060/v1/namespaces" -o /dev/null) -ne 200 ]; do
        echo "waiting for clair to be ready";
        sleep 15;
      done

    State:          Running
      Started:      Tue, 26 Apr 2022 13:47:23 +0530
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zlb9l (ro)
Containers:
  kubei:
    Container ID:
    Image:          gcr.io/eticloud/k8sec/kubei:1.0.11
    Image ID:
    Ports:          8080/TCP, 8081/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Environment:
      KLAR_IMAGE_NAME:          gcr.io/development-infra-208909/klar:1.0.2
      MAX_PARALLELISM:          10
      TARGET_NAMESPACE:         kube-system
      SEVERITY_THRESHOLD:       LOW
      IGNORE_NAMESPACES:        istio-system
      DELETE_JOB_POLICY:        Never
      SCANNER_SERVICE_ACCOUNT:
      REGISTRY_INSECURE:        false
      SHOULD_SCAN_DOCKERFILE:   true
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zlb9l (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-zlb9l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---  ----               -------
  Normal  Scheduled  21m  default-scheduler  Successfully assigned kubeclarity2/kubei-f9d94f555-nsd96 to aks-worker1-20663458-vmss000003
  Normal  Pulling    21m  kubelet            Pulling image "yauritux/busybox-curl"
  Normal  Pulled     21m  kubelet            Successfully pulled image "yauritux/busybox-curl" in 2.436040922s
  Normal  Created    21m  kubelet            Created container init-clairsvc
  Normal  Started    21m  kubelet            Started container init-clairsvc

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version --short):
  • KubeClarity version (use kubectl -n kubeclarity exec deploy/kubeclarity -- ./backend version)
  • Cloud provider or hardware configuration:
  • Others:
    kubei.zip

Add direct control of extra labels for scanner jobs / pods

Is your feature request related to a problem? Please describe.
On constrained clusters, i.e., clusters where all nodes are tainted and thus pod specs without tolerations have no chance of being scheduled, it is necessary to manipulate the job template for kubeclarity's scanners in order to achieve proper scheduling.

However, on those clusters there may be pod-manipulating policies (such as Kyverno's) that enable scheduling of appropriately-labeled pods.

Describe the solution you'd like
Having the ability to add arbitrary labels to scanner jobs and their pods should be enough to achieve more controllability of scanner pod scheduling.

It is desirable to have control of the extra labels added in the job template, both here (for the pod's labels) and here (for the job's labels).

Describe alternatives you've considered
Adding a (Kyverno) policy targeting kubeclarity's scanner jobs/pods should also be feasible, but there's a catch: the job is labeled with a simple "app: scanner", which may prove too generic to be useful for pinpointing ONLY Kubeclarity's pods. Adding some other labels which include kubeclarity's name could make the result much more predictable.

Credentials not found

What happened:

Trying to scan a pod containing a private image and it fails, public images are scanned.

$ oc logs scanner-zap2docker-stable-b72cafcd-4ccc-47cd-8e79-1fb6--1-jpr67 -n sbu-dev

time="2022-04-26T16:21:19Z" level=debug msg="Credentials not found. image name=uk.icr.io/sbu-pipeline/zap2docker-stable@sha256:6c9d3f2cc80470bb4b54fb4b402ff982905e5cb2f13648b571da37e277540f00." \
func="github.com/cisco-open/kubei/shared/pkg/utils/creds.(*CredExtractor).GetCredentials" file="/build/shared/pkg/utils/creds/extractor.go:78"

What you expected to happen:

I expect the secret (which is available in the namespace being scanned) to be obtained and used.

How to reproduce it (as minimally and precisely as possible):

Deploy kubeclarity v2.1.2 to k8s and perform a namespace scan whereby images within the namespace are in a private registry.

Are there any error messages in KubeClarity logs?

$ oc logs scanner-zap2docker-stable-b72cafcd-4ccc-47cd-8e79-1fb6--1-jpr67 -n sbu-dev

time="2022-04-26T16:21:19Z" level=debug msg="Credentials not found. image name=uk.icr.io/sbu-pipeline/zap2docker-stable@sha256:6c9d3f2cc80470bb4b54fb4b402ff982905e5cb2f13648b571da37e277540f00." \
func="github.com/cisco-open/kubei/shared/pkg/utils/creds.(*CredExtractor).GetCredentials" file="/build/shared/pkg/utils/creds/extractor.go:78"

Anything else we need to know?:

Environment:

  • KubeClarity version: v2.1.2

Image tagging in Helm charts

Hi there,

When deploying with Helm I would like to be able to pass a single version tag, e.g. "v2.2.0", corresponding to the whole release, rather than each image specifying its own version, as it seems unlikely that someone would want 2.2.0 of one image but 2.1.0 of another. This all depends on your release process and whether you think such cases will happen in the future, i.e. when minor changes happen to one image but not another, but that doesn't look like what you're going for. Alternatives would be to use fixed version tags or enforce some kind of compatibility matrix.

Great project by the way!

Helm chart improvements

Is your feature request related to a problem? Please describe.
The helm chart fails conftest due to missing resource limits and security context

Describe the solution you'd like
specify resource limits for all containers that currently don't:

  1. wait-for-grype-server
  2. wait-for-pg-db
  3. wait-for-sbom-db
  4. postgresql

And specify a securityContext/runAsNonRoot for:

  1. wait-for-sbom-db

Describe alternatives you've considered
Modifying the charts myself which is obviously not sustainable

Additional context
Raw conftest output
2022-05-19T10:17:13.117Z otomi:global:error Error: FAIL - /tmp/otomi/conftest/helmfile-50.services-b2c1fdac-kubeclarity/kubeclarity/templates/deployment.yaml - containerlimits - Policy: container-limits - container has no resource limits

FAIL - /tmp/otomi/conftest/helmfile-50.services-b2c1fdac-kubeclarity/kubeclarity/templates/deployment.yaml - containerlimits - Policy: container-limits - container has no resource limits

FAIL - /tmp/otomi/conftest/helmfile-50.services-b2c1fdac-kubeclarity/kubeclarity/templates/deployment.yaml - containerlimits - Policy: container-limits - container has no resource limits

FAIL - /tmp/otomi/conftest/helmfile-50.services-b2c1fdac-kubeclarity/kubeclarity/charts/kubeclarity-postgresql/templates/statefulset.yaml - containerlimits - Policy: container-limits - container has no resource limits

FAIL - /tmp/otomi/conftest/helmfile-50.services-b2c1fdac-kubeclarity/kubeclarity/templates/deployment.yaml - pspallowedusers - Policy: psp-allowed-users - Container kubeclarity-kubeclarity-wait-for-sbom-db is attempting to run without a required securityContext/runAsNonRoot or securityContext/runAsUser != 0

Support for running outside of kubei namespace

To support a shared cluster, is there an option to override the default namespace, kubei? It would be beneficial to be able to run the containers in specific namespaces for a namespace-separated cluster. Then, groups would only see output for the namespaces for which they have been granted RBAC roles.

We easily added kustomize to the deploys (excluded the namespace, serviceaccount, etc.) and only left the deployment and service to deploy with namespace and service account overlays. Then, the env variables already provided, "SCANNER_SERVICE_ACCOUNT" and "TARGET_NAMESPACE", are also overlaid to keep it in the target namespaces and service account.

The code appears to hardcode kubei, if I am reading it correctly. It works fine when I deploy the containers only to the kubei namespace.

Kubei container:
time="2022-01-21T17:22:38Z" level=error msg="Failed to get secret. secret=redacted-secret: secrets "redacted_secret" is forbidden: User "system:serviceaccount:customer_sa:custom_namespace" cannot get resource "secrets" in API group "" in the namespace "kubei""

Scanner container:
time="2022-01-21T17:22:51Z" level=error msg="failed to execute scan: failed to scan sbom using Grype Server: failed to send sbom for scan: Post "http://grype-server.kubei:9991/api/scanSBOM\": dial tcp: lookup grype-server.kubei on 10.96.0.10:53: no such host"
time="2022-01-21T17:22:51Z" level=error msg="Failed to send scan results: Post "http://kubei.kubei:8081/result/\": dial tcp: lookup kubei.kubei on 10.96.0.10:53: no such host"

Thanks for the help.

Question about dockerhub and rolling tag

Hello, congrats on the work.
I have a question about analysis of "rolling" tags like latest (or postgres 9.6).
When using postgres:9.6 in my cluster, I actually use postgres:9.6.17 (bad practice).
When postgres:9.6.18 is released and the tag is updated, does Kubei scan the old postgres:9.6.17, as it is in use in my cluster, or does it refer to 9.6.18, as it is referenced by postgres:9.6 on the public registry?

Remove high/critical vulnerabilities in kubeclarity's components

Is your feature request related to a problem? Please describe.
Current version of kubeclarity (2.3.0) has some high and/or critical vulnerabilities in some of its components.

In particular:

โฏ grype --add-cpes-if-none --only-fixed --fail-on high ghcr.io/openclarity/kubeclarity:v2.3.0
 โœ” Vulnerability DB        [no update available]
 โœ” Pulled image
 โœ” Loaded image
 โœ” Parsed image
 โœ” Cataloged packages      [164 packages]
 โœ” Scanned image           [3 vulnerabilities]

NAME                              INSTALLED  FIXED-IN  TYPE       VULNERABILITY        SEVERITY
github.com/containerd/containerd  v1.6.0     1.6.1     go-module  GHSA-crp2-qrr5-8pq7  High
โฏ grype --add-cpes-if-none --only-fixed --fail-on high docker.io/bitnami/postgresql:11.13.0-debian-10-r40
 โœ” Vulnerability DB        [no update available]
 โœ” Pulled image
 โœ” Loaded image
 โœ” Parsed image
 โœ” Cataloged packages      [122 packages]
 โœ” Scanned image           [254 vulnerabilities]
NAME                            INSTALLED              FIXED-IN                 TYPE       VULNERABILITY        SEVERITY
dpkg                            1.19.7                 1.19.8                   deb        CVE-2022-1664        Unknown
github.com/opencontainers/runc  v1.0.1                 1.1.2                    go-module  GHSA-f3fp-gc8g-vw66  Medium
github.com/opencontainers/runc  v1.0.1                 1.0.3                    go-module  GHSA-v95c-p5hm-xq8f  Medium
gzip                            1.9-3                  1.9-3+deb10u1            deb        CVE-2022-1271        Unknown
libgmp10                        2:6.1.2+dfsg-4         2:6.1.2+dfsg-4+deb10u1   deb        CVE-2021-43618       High
libgssapi-krb5-2                1.17-3+deb10u2         1.17-3+deb10u3           deb        CVE-2021-37750       Medium
libicu63                        63.1-6+deb10u1         63.1-6+deb10u2           deb        CVE-2020-21913       Medium
libk5crypto3                    1.17-3+deb10u2         1.17-3+deb10u3           deb        CVE-2021-37750       Medium
libkrb5-3                       1.17-3+deb10u2         1.17-3+deb10u3           deb        CVE-2021-37750       Medium
libkrb5support0                 1.17-3+deb10u2         1.17-3+deb10u3           deb        CVE-2021-37750       Medium
libldap-2.4-2                   2.4.47+dfsg-3+deb10u6  2.4.47+dfsg-3+deb10u7    deb        CVE-2022-29155       Critical
libldap-common                  2.4.47+dfsg-3+deb10u6  2.4.47+dfsg-3+deb10u7    deb        CVE-2022-29155       Critical
liblzma5                        5.2.4-1                5.2.4-1+deb10u1          deb        CVE-2022-1271        Unknown
libsasl2-2                      2.1.27+dfsg-1+deb10u1  2.1.27+dfsg-1+deb10u2    deb        CVE-2022-24407       High
libsasl2-modules-db             2.1.27+dfsg-1+deb10u1  2.1.27+dfsg-1+deb10u2    deb        CVE-2022-24407       High
libssl1.1                       1.1.1d-0+deb10u7       1.1.1d-0+deb10u8         deb        CVE-2022-0778        High
libssl1.1                       1.1.1d-0+deb10u7       1.1.1n-0+deb10u2         deb        CVE-2022-1292        Critical
libssl1.1                       1.1.1d-0+deb10u7       1.1.1d-0+deb10u8         deb        CVE-2021-4160        Medium
libxml2                         2.9.4+dfsg1-7+deb10u2  2.9.4+dfsg1-7+deb10u4    deb        CVE-2022-29824       Medium
libxml2                         2.9.4+dfsg1-7+deb10u2  2.9.4+dfsg1-7+deb10u3    deb        CVE-2022-23308       High
openssl                         1.1.1d-0+deb10u7       1.1.1d-0+deb10u8         deb        CVE-2021-4160        Medium
openssl                         1.1.1d-0+deb10u7       1.1.1d-0+deb10u8         deb        CVE-2022-0778        High
openssl                         1.1.1d-0+deb10u7       1.1.1n-0+deb10u2         deb        CVE-2022-1292        Critical
zlib1g                          1:1.2.11.dfsg-1        1:1.2.11.dfsg-1+deb10u1  deb        CVE-2018-25032       High
โฏ grype --add-cpes-if-none --only-fixed --fail-on high gcr.io/eticloud/k8sec/grype-server:v0.1.2
 โœ” Vulnerability DB        [no update available]
 โœ” Pulled image
 โœ” Loaded image
 โœ” Parsed image
 โœ” Cataloged packages      [133 packages]
 โœ” Scanned image           [11 vulnerabilities]
NAME                              INSTALLED            FIXED-IN  TYPE       VULNERABILITY        SEVERITY
github.com/containerd/containerd  v1.5.9               1.5.10    go-module  GHSA-crp2-qrr5-8pq7  High
github.com/docker/distribution    v2.7.1+incompatible  2.8.0     go-module  GHSA-qq97-vm5h-rrhg  Low
github.com/hashicorp/go-getter    v1.5.9               1.6.1     go-module  GHSA-fcgg-rvwg-jv58  High
github.com/hashicorp/go-getter    v1.5.9               1.6.1     go-module  GHSA-x24g-9w7v-vprh  Critical
github.com/hashicorp/go-getter    v1.5.9               1.5.11    go-module  GHSA-27rq-4943-qcwp  Medium
github.com/hashicorp/go-getter    v1.5.9               1.6.1     go-module  GHSA-28r2-q6m8-9hpx  High
github.com/hashicorp/go-getter    v1.5.9               1.6.1     go-module  GHSA-cjr4-fv6c-f3mv  High

Describe the solution you'd like
Upgrading postgresql (embedded via bitnami's inner chart) and fixing containerd's vulnerability in the main kubeclarity image should solve most vulnerabilities; for grype-server, the issue should be addressed elsewhere (here); I have opened an issue there.

Note that 11.16.0-debian-10-r23 has no critical/high vulnerabilities, see:

โฏ grype --add-cpes-if-none --only-fixed --fail-on high docker.io/bitnami/postgresql:11.16.0-debian-10-r23

 โœ” Vulnerability DB        [no update available]
 โœ” Pulled image
 โœ” Loaded image
 โœ” Parsed image
 โœ” Cataloged packages      [122 packages]
 โœ” Scanned image           [232 vulnerabilities]

NAME                            INSTALLED  FIXED-IN  TYPE       VULNERABILITY        SEVERITY
github.com/opencontainers/runc  v1.0.1     1.0.3     go-module  GHSA-v95c-p5hm-xq8f  Medium
github.com/opencontainers/runc  v1.0.1     1.1.2     go-module  GHSA-f3fp-gc8g-vw66  Medium

Describe alternatives you've considered
No other solutions for main kubeclarity image; postgresql could be installed stand-alone and connected to kubeclarity (in the following installations I will use that approach).

Additional context
No additional context

Deprecated configurations in kubei's - k8s yaml

The following warnings are shown while applying kubei's YAML spec.

Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/kubei created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding


Waiting: PodInitializing

hey folks!

First time I encountered this status of pods on your product.

after kubectl apply -f https://raw.githubusercontent.com/Portshift/kubei/master/deploy/kubei.yaml

I run kubectl -n kubei get pod -lapp=kubei

my output:

NAME                     READY   STATUS     RESTARTS   AGE
kubei-65d6577695-mzn6p   0/1     Init:0/1   0          18m

describe pod:

kubectl describe pod kubei-65d6577695-mzn6p -n kubei
Name:           kubei-65d6577695-mzn6p
Namespace:      kubei
Priority:       0
Node:           worker2/10.2.67.205
Start Time:     Thu, 06 Aug 2020 14:05:59 +0300
Labels:         app=kubei
                kubeiShouldScan=false
                pod-template-hash=65d6577695
Annotations:    <none>
Status:         Pending
IP:             10.233.103.17
Controlled By:  ReplicaSet/kubei-65d6577695
Init Containers:
  init-clairsvc:
    Container ID:  docker://2e689fc20c3b4b3cacaab228a0f49b33f9b7075d426481655804bf256550f5b3
    Image:         yauritux/busybox-curl
    Image ID:      docker-pullable://yauritux/busybox-curl@sha256:e67b94a5abb6468169218a0940e757ebdfd8ee370cf6901823ecbf4098f2bb65
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      set -x; while [ $(curl -sw '%{http_code}' "http://clair.kubei:6060/v1/namespaces" -o /dev/null) -ne 200 ]; do
        echo "waiting for clair to be ready";
        sleep 15;
      done

    State:          Running
      Started:      Thu, 06 Aug 2020 14:06:04 +0300
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kubei-token-jkw6r (ro)
Containers:
  kubei:
    Container ID:
    Image:          gcr.io/development-infra-208909/kubei:1.0.6
    Image ID:
    Ports:          8080/TCP, 8081/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  100Mi
    Requests:
      cpu:     10m
      memory:  20Mi
    Environment:
      KLAR_IMAGE_NAME:     gcr.io/development-infra-208909/klar:1.0.3
      MAX_PARALLELISM:     10
      TARGET_NAMESPACE:
      SEVERITY_THRESHOLD:  MEDIUM
      IGNORE_NAMESPACES:   istio-system,kube-system
      DELETE_JOB_POLICY:   Successful
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kubei-token-jkw6r (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubei-token-jkw6r:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubei-token-jkw6r
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  19m   default-scheduler  Successfully assigned kubei/kubei-65d6577695-mzn6p to worker2
  Normal  Pulling    18m   kubelet, worker2   Pulling image "yauritux/busybox-curl"
  Normal  Pulled     18m   kubelet, worker2   Successfully pulled image "yauritux/busybox-curl"
  Normal  Created    18m   kubelet, worker2   Created container init-clairsvc
  Normal  Started    18m   kubelet, worker2   Started container init-clairsvc


![1](https://user-images.githubusercontent.com/38696837/89526820-02114d80-d7f1-11ea-9579-538043bd7493.png)


No severities

Hi!

After running the scanner, I got a report.

However, it has no severity levels.

Did I do something wrong?

I also have a lot of dead pods, created by kubernetes jobs


Metrics

Hi,
Does kubei expose a /metrics endpoint with information equivalent to what the web UI exposes (about vulnerabilities found)?

I was thinking about monitoring and alerting on vulnerabilities found, for example via Prometheus.

Execute scanner jobs in kubeclarity's namespace rather than in originating pod's, or allow setting custom tolerations

Is your feature request related to a problem? Please describe.
Currently, Kubeclarity schedules scanner jobs in the same namespace as the targeted pod.

While this is crucial for images hosted in private repositories, since it allows setting a secretKeyRef which just works (both the originating pod and the scanner job/pod execute in the same namespace), it can become a hurdle to execute the scanner job/pod in namespaces which require tolerations to be added in order for scheduling to succeed.

As an example, on Azure (AKS) clusters, some namespaces such as calico-system, kube-system and so on have all or most of their workloads managed by the k8s-addon manager, which in turn adds the following toleration:

```
- key: CriticalAddonsOnly
  operator: Exists
```

This toleration is required to allow pods to run even when every node in all node groups is tainted in some way, and at least some nodes (e.g., the so-called AKS "system" nodes) are tainted with CriticalAddonsOnly.

However, when Kubeclarity schedules scanner jobs/pods in such namespaces AND all nodes are tainted (possibly with different taints), the pod remains stuck in the Pending state for lack of adequate tolerations.

Describe the solution you'd like

I foresee two solutions:

  • a partial solution is to schedule all jobs/pods NOT requiring access to a private repository in the kubeclarity namespace, since in that namespace one can apply policies from Kyverno or something similar to add the needed tolerations (in fact, that's what we already do in our clusters to enable scheduling of kubeclarity's core pods)
  • a full solution requires supporting custom tolerations on all jobs/pods, so that during installation of kubeclarity one can explicitly specify a minimum set of tolerations that allow proper scheduling, such as the above-mentioned CriticalAddonsOnly (see the sketch below).
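For illustration, here is a minimal sketch of what the second option could look like on a scanner job's pod template. The job name and image are hypothetical placeholders, not KubeClarity's actual manifests:

```
apiVersion: batch/v1
kind: Job
metadata:
  name: scanner-job-example            # hypothetical name, for illustration only
spec:
  template:
    spec:
      tolerations:
        # Allows scheduling onto nodes tainted with CriticalAddonsOnly,
        # e.g. so-called AKS "system" nodes
        - key: CriticalAddonsOnly
          operator: Exists
      containers:
        - name: scanner
          image: example.registry/scanner:latest   # placeholder image
      restartPolicy: Never
```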

Describe alternatives you've considered
Using this feature and adding labels (which, in turn, trigger policies that add the needed tolerations) to all possible namespaces could help in some namespaces, but for kube-system it wouldn't be enough, since most policy engines, such as Kyverno, do not operate on kube-system by default (for good reasons, most likely).

Thus custom labels alone wouldn't be enough for scheduling, though they remain useful for tagging kubeclarity jobs/pods.

Additional context
None

How and where to specify imagePullSecrets for scanner-curl job

I need to provide an imagePullSecret to pull the images for Klar and dockle from an Artifactory remote repository.
Where can this be done? I browsed through the source code, but maybe my Go knowledge needs to be improved to get the point. ;)
Any advice would be greatly appreciated.
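As a generic Kubernetes pattern (offered as a hedged sketch, not a documented kubei option), the pull secret can be attached to the service account that the scanner jobs run under so that their pods can authenticate to Artifactory. The secret and service account names below are assumptions:

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubei                        # assumed service account used by the scanner jobs
  namespace: kubei
imagePullSecrets:
  # Created beforehand, e.g. with `kubectl create secret docker-registry ...`
  - name: artifactory-pull-secret    # hypothetical secret name
```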

ACR access error

What happened:

Images from Azure Container Registry are not scanned.
Access to the registry is via an app registration (service principal), without secrets in Kubernetes.

How to reproduce it (as minimally and precisely as possible):

My values to reproduce:

```
#######################################################################################
## Global Values
global:
  ## Database password
  databasePassword: kubeclarity

  ## Docker image
  docker:
    ## Configure registry
    ##
    registry: "ghcr.io/openclarity"
    tag: "v2.3.0"
    imagePullPolicy: Always

  ## Is this being installed under OpenShift restricted SCC?
  ## NOTE: You also need to set the PostgreSQL section correctly if using the OpenShift restricted SCC
  openShiftRestricted: false
## End of Global Values
#######################################################################################

#######################################################################################
## KubeClarity Values
kubeclarity:
  ## Docker Image values.
  docker:
    ## Use to overwrite the global docker params
    ##
    imageName: ""

  ## Logging level (debug, info, warning, error, fatal, panic).
  logLevel: warning

  enableDBInfoLog: false

  service:
    type: ClusterIP

  resources:
    requests:
      memory: "200Mi"
      cpu: "100m"
    limits:
      memory: "1000Mi"
      cpu: "1000m"
## End of KubeClarity Values
#######################################################################################

#######################################################################################
## KubeClarity Runtime Scan Values
kubeclarity-runtime-scan:
  httpsProxy: ""
  httpProxy: ""
  resultServicePort: 8888

  registry:
    skipVerifyTlS: "false"
    useHTTP: "false"

  cis-docker-benchmark-scanner:
    ## Docker Image values.
    docker:
      ## Use to overwrite the global docker params
      ##
      imageName: ""

    ## Scanner logging level (debug, info, warning, error, fatal, panic).
    logLevel: warning

    ## Timeout for the cis docker benchmark scanner job.
    timeout: "2m"

    resources:
      requests:
        memory: "50Mi"
        cpu: "50m"
      limits:
        memory: "1000Mi"
        cpu: "1000m"

  vulnerability-scanner:
    ## Docker Image values.
    docker:
      ## Use to overwrite the global docker params
      ##
      imageName: ""

    ## Scanner logging level (debug, info, warning, error, fatal, panic).
    logLevel: warning

    resources:
      requests:
        memory: "50Mi"
        cpu: "50m"
      limits:
        memory: "1000Mi"
        cpu: "1000m"

    ## Analyzer config.
    analyzer:
      ## Space separated list of analyzers. (syft gomod)
      analyzerList: "syft gomod"

      analyzerScope: "squashed"

    ## Scanner config.
    scanner:
      ## Space separated list of scanners. (grype dependency-track)
      scannerList: "grype"

      grype:
        ## Enable grype scanner, if true make sure to add it to scannerList above
        ##
        enabled: true
        ## Grype scanner mode. (LOCAL, REMOTE)
        mode: "REMOTE"

        ## Remote grype scanner config.
        remote-grype:
          timeout: "2m"

      dependency-track:
        ## Enable dependency-track scanner, if true make sure to add it to scannerList above
        ##
        enabled: false
        insecureSkipVerify: "true"
        disableTls: "true"
        apiserverAddress: "dependency-track-apiserver.dependency-track"
        apiKey: ""
## End of KubeClarity Runtime Scan Values
#######################################################################################

#######################################################################################
## KubeClarity Grype Server Values
kubeclarity-grype-server:
  enabled: true

  ## Docker Image values.
  docker:
    imageRepo: "gcr.io/eticloud/k8sec"
    imageTag: "v0.1.2"
    imagePullPolicy: Always

  ## Logging level (debug, info, warning, error, fatal, panic).
  logLevel: warning

  servicePort: 9991

  resources:
    requests:
      cpu: "200m"
      memory: "200Mi"
    limits:
      cpu: "1000m"
      memory: "1G"
## End of KubeClarity Grype Server Values
#######################################################################################

#######################################################################################
## KubeClarity SBOM DB Values
kubeclarity-sbom-db:
  ## Docker Image values.
  docker:
    ## Use to overwrite the global docker params
    ##
    imageName: ""

  ## Logging level (debug, info, warning, error, fatal, panic).
  logLevel: warning

  servicePort: 8080

  resources:
    requests:
      memory: "20Mi"
      cpu: "10m"
    limits:
      memory: "100Mi"
      cpu: "100m"
## End of KubeClarity SBOM DB Values
#######################################################################################

#######################################################################################
## KubeClarity Postgres Values
kubeclarity-postgresql:
  enabled: true

  ## ConfigMap with scripts to be run at first boot
  ## NOTE: This will override initdbScripts
  initdbScriptsConfigMap:

  ## Secret with scripts to be run at first boot (in case it contains sensitive information)
  ## NOTE: This can work along initdbScripts or initdbScriptsConfigMap
  initdbScriptsSecret:

  ## Specify the PostgreSQL username and password to execute the initdb scripts
  initdbUser:
  initdbPassword:

  ## Setup database name and password
  existingSecret: kubeclarity-postgresql-secret
  postgresqlDatabase: kubeclarity
  secretKey: postgresql-password

  serviceAccount:
    enabled: true
  securityContext:
    # Default is true for K8s. Enabled needs to be false for OpenShift restricted SCC and true for anyuid SCC
    enabled: true
    # fsGroup specification below is not applied if enabled=false. enabled=false is the required setting for OpenShift "restricted SCC" to work successfully.
    fsGroup: 1001
  containerSecurityContext:
    # Default is true for K8s. Enabled needs to be false for OpenShift restricted SCC and true for anyuid SCC
    enabled: true
    # runAsUser specification below is not applied if enabled=false. enabled=false is the required setting for OpenShift "restricted SCC" to work successfully.
    runAsUser: 1001
    runAsNonRoot: true
  volumePermissions:
    # Default is true for K8s. Enabled needs to be false for OpenShift restricted SCC and true for anyuid SCC
    enabled: false
    securityContext:
      # if using restricted SCC set runAsUser: "auto" and if running under anyuid SCC - runAsUser needs to match the line above
      runAsUser: 1001
  shmVolume:
    chmod:
      # if using restricted SCC with runAsUser: "auto" (above) then set shmVolume.chmod.enabled to false
      enabled: true

ingress:
  ## dashboard.ingress.enabled -- Whether to enable ingress to the dashboard
  enabled: true

  ## dashboard.ingress.ingressClassName -- From Kubernetes 1.18+ this field is supported in case your ingress controller supports it. When set, you do not need to add the ingress class as annotation.
  ingressClassName: nginx

  ## dashboard.ingress.hosts -- Web ingress hostnames
  host: kubeclarity.dev.inventry.world

  ## dashboard.ingress.annotations -- Web ingress annotations
  annotations:
    cert-manager.io/cluster-issuer: cert-manager
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$request_uri
    nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.oauth2-proxy.svc.cluster.local/oauth2/auth
## End of KubeClarity Postgres Values
#######################################################################################
```

Logs:

```
Date,Service,Kubernetes Namespace,Message

2022-06-14T07:59:58.008Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:54Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/personnel@sha256:ddeb1ce96f32d62c24554e09fc0d9824274f24fd5b7702649d548b367ff32dec"" scan-uuid=a781666c-c46a-4b86-b5aa-68a486f4a26c"
2022-06-14T07:59:58.008Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:53Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/personnel@sha256:ddeb1ce96f32d62c24554e09fc0d9824274f24fd5b7702649d548b367ff32dec': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/personnel@sha256:ddeb1ce96f32d62c24554e09fc0d9824274f24fd5b7702649d548b367ff32dec"" scan-uuid=a781666c-c46a-4b86-b5aa-68a486f4a26c"
2022-06-14T07:59:57.236Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:55Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/licence@sha256:fa46e0e94e5f8ca7d27c329fce8af18d05aa5c4fc513c337efbb1a09060fc669': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/licence@sha256:fa46e0e94e5f8ca7d27c329fce8af18d05aa5c4fc513c337efbb1a09060fc669"" scan-uuid=082b529f-513d-4914-af26-d196f0d18ebe"
2022-06-14T07:59:57.236Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:55Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/licence@sha256:fa46e0e94e5f8ca7d27c329fce8af18d05aa5c4fc513c337efbb1a09060fc669"" scan-uuid=082b529f-513d-4914-af26-d196f0d18ebe"
2022-06-14T07:59:56.803Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:52Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/groupcall@sha256:a6b846d4941e02a565ae97f7f3c946baf3d1e37c191a078789f7f9433f6b8f21': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/groupcall@sha256:a6b846d4941e02a565ae97f7f3c946baf3d1e37c191a078789f7f9433f6b8f21"" scan-uuid=7eae4cc8-914b-4b43-8326-4986a791332c"
2022-06-14T07:59:56.803Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:53Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/groupcall@sha256:a6b846d4941e02a565ae97f7f3c946baf3d1e37c191a078789f7f9433f6b8f21"" scan-uuid=7eae4cc8-914b-4b43-8326-4986a791332c"
2022-06-14T07:59:54.977Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:54Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/sims@sha256:d8ebaf2d14154f963efdace1835a5e4a1bd3783a3b1f0178c532384eca17691e': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/sims@sha256:d8ebaf2d14154f963efdace1835a5e4a1bd3783a3b1f0178c532384eca17691e"" scan-uuid=01efecbe-4a32-40d4-b85f-cb0530a14ae7"
2022-06-14T07:59:54.977Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:54Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/sims@sha256:d8ebaf2d14154f963efdace1835a5e4a1bd3783a3b1f0178c532384eca17691e"" scan-uuid=01efecbe-4a32-40d4-b85f-cb0530a14ae7"
2022-06-14T07:59:53.567Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:53Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/identification@sha256:5176d39aa2c280c208a1130f6aeffe40bc51be320e5d25e440da0552af6205fc"" scan-uuid=3ab307cb-bd41-4819-a0b7-958a23da43cc"
2022-06-14T07:59:53.567Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:53Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/identification@sha256:5176d39aa2c280c208a1130f6aeffe40bc51be320e5d25e440da0552af6205fc': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/identification@sha256:5176d39aa2c280c208a1130f6aeffe40bc51be320e5d25e440da0552af6205fc"" scan-uuid=3ab307cb-bd41-4819-a0b7-958a23da43cc"
2022-06-14T07:59:48.206Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:47Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/pupilevents@sha256:d14222737d1c302f120e371cb08284a5f9391ea651391dde0932297bfc74b5af"" scan-uuid=e8280b8a-a723-4a27-9cb1-b0761e88bca6"
2022-06-14T07:59:48.014Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:45Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/simsprimary@sha256:3813e538542ab2be7d699e32c96f14516e765c048a83de34c18d3d0b1324e463"" scan-uuid=64a824fb-8813-4094-b574-817d649383be"
2022-06-14T07:59:48.014Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:45Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/simsprimary@sha256:3813e538542ab2be7d699e32c96f14516e765c048a83de34c18d3d0b1324e463': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/simsprimary@sha256:3813e538542ab2be7d699e32c96f14516e765c048a83de34c18d3d0b1324e463"" scan-uuid=64a824fb-8813-4094-b574-817d649383be"
2022-06-14T07:59:47.251Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:47Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/pupils@sha256:461ef822651d9ba4b7bb927152ec95c97d577de897cf869db4965403f07d3e9b"" scan-uuid=84882afe-5d9d-4df0-a6a0-6a034a723d33"
2022-06-14T07:59:47.251Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:46Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/pupils@sha256:461ef822651d9ba4b7bb927152ec95c97d577de897cf869db4965403f07d3e9b': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/pupils@sha256:461ef822651d9ba4b7bb927152ec95c97d577de897cf869db4965403f07d3e9b"" scan-uuid=84882afe-5d9d-4df0-a6a0-6a034a723d33"
2022-06-14T07:59:47.205Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:47Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/pupilevents@sha256:d14222737d1c302f120e371cb08284a5f9391ea651391dde0932297bfc74b5af': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/pupilevents@sha256:d14222737d1c302f120e371cb08284a5f9391ea651391dde0932297bfc74b5af"" scan-uuid=e8280b8a-a723-4a27-9cb1-b0761e88bca6"
2022-06-14T07:59:46.729Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:44Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/datasync@sha256:3e9818b71288766d9720d76ce0491ceda724b38d9dd880bb3db25486a69c5076': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/datasync@sha256:3e9818b71288766d9720d76ce0491ceda724b38d9dd880bb3db25486a69c5076"" scan-uuid=48464fe0-0d4f-4efc-a693-54f6cfa82341"
2022-06-14T07:59:46.728Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:44Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/datasync@sha256:3e9818b71288766d9720d76ce0491ceda724b38d9dd880bb3db25486a69c5076"" scan-uuid=48464fe0-0d4f-4efc-a693-54f6cfa82341"
2022-06-14T07:59:45.997Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:45Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/cloudschool@sha256:53ed079f475397e9b501751d8c1526f4f03c2487657bf2b20e9fe2ba408555ef"" scan-uuid=8319ac13-dba4-463c-ada0-4910a5a6986c"
2022-06-14T07:59:44.981Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:44Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/cloudschool@sha256:53ed079f475397e9b501751d8c1526f4f03c2487657bf2b20e9fe2ba408555ef': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/cloudschool@sha256:53ed079f475397e9b501751d8c1526f4f03c2487657bf2b20e9fe2ba408555ef"" scan-uuid=8319ac13-dba4-463c-ada0-4910a5a6986c"
2022-06-14T07:59:42.583Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:42Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/misactivedirectory@sha256:4b99e26c1a8f8aac94751f3d6738308b87965bf47b422c5b0947fb9008c01fb5"" scan-uuid=d3138719-47b3-4d3a-9bc6-5174a896880c"
2022-06-14T07:59:42.583Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:41Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/misactivedirectory@sha256:4b99e26c1a8f8aac94751f3d6738308b87965bf47b422c5b0947fb9008c01fb5': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/misactivedirectory@sha256:4b99e26c1a8f8aac94751f3d6738308b87965bf47b422c5b0947fb9008c01fb5"" scan-uuid=d3138719-47b3-4d3a-9bc6-5174a896880c"
2022-06-14T07:59:41.255Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:40Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/arbor@sha256:2baada398b82101a09aaaad34e4c4d397a6be144cf873ee5de2e0c5545e43996"" scan-uuid=f5ca81ce-7122-4c4c-a462-d4244150d649"
2022-06-14T07:59:41.249Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:40Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/isams@sha256:f020d0e90c1f5855ec0f4d03744180c00435fb73c1deb3f850f6c72df9f4e319"" scan-uuid=6ea81315-5931-4902-a32f-b54a52ef1459"
2022-06-14T07:59:40.249Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:40Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/arbor@sha256:2baada398b82101a09aaaad34e4c4d397a6be144cf873ee5de2e0c5545e43996': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/arbor@sha256:2baada398b82101a09aaaad34e4c4d397a6be144cf873ee5de2e0c5545e43996"" scan-uuid=f5ca81ce-7122-4c4c-a462-d4244150d649"
2022-06-14T07:59:40.249Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:39Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/isams@sha256:f020d0e90c1f5855ec0f4d03744180c00435fb73c1deb3f850f6c72df9f4e319': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/isams@sha256:f020d0e90c1f5855ec0f4d03744180c00435fb73c1deb3f850f6c72df9f4e319"" scan-uuid=6ea81315-5931-4902-a32f-b54a52ef1459"
2022-06-14T07:59:36.720Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:32Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/schoolbase@sha256:588c680c2287da41e97611d8a676bb84df8fcf2f0b75f02c3293209f03560b4a"" scan-uuid=5e2f7fa2-c027-4e4b-8130-4510a5877173"
2022-06-14T07:59:36.720Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:32Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/schoolbase@sha256:588c680c2287da41e97611d8a676bb84df8fcf2f0b75f02c3293209f03560b4a': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/schoolbase@sha256:588c680c2287da41e97611d8a676bb84df8fcf2f0b75f02c3293209f03560b4a"" scan-uuid=5e2f7fa2-c027-4e4b-8130-4510a5877173"
2022-06-14T07:59:35.034Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:31Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/settings@sha256:045763e3970651d29f012f0462fb3d7ec13aeaed9ecc99a26478ef1eacdb2670': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/settings@sha256:045763e3970651d29f012f0462fb3d7ec13aeaed9ecc99a26478ef1eacdb2670"" scan-uuid=598fcca6-977a-49d6-b508-458e96272fa9"
2022-06-14T07:59:35.034Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:31Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/settings@sha256:045763e3970651d29f012f0462fb3d7ec13aeaed9ecc99a26478ef1eacdb2670"" scan-uuid=598fcca6-977a-49d6-b508-458e96272fa9"
2022-06-14T07:59:32.995Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:30Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/pupilasset@sha256:4f010c9b1eb421b37da12aaea5d11e48c568213b189ddcd0b2137fce028fa16c': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/pupilasset@sha256:4f010c9b1eb421b37da12aaea5d11e48c568213b189ddcd0b2137fce028fa16c"" scan-uuid=7eb56733-1fd2-403f-8d49-9cee84900f7d"
2022-06-14T07:59:32.994Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:31Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/pupilasset@sha256:4f010c9b1eb421b37da12aaea5d11e48c568213b189ddcd0b2137fce028fa16c"" scan-uuid=7eb56733-1fd2-403f-8d49-9cee84900f7d"
2022-06-14T07:59:32.251Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:31Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/messaging@sha256:bba164a0e2017d4b83a703832aba29f392f12e22211e19795c19dec66006ccc8"" scan-uuid=75c5e855-3f53-4071-b33e-c8ca00708102"
2022-06-14T07:59:31.565Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:31Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/engage@sha256:9bc932cb0f61ffce60c257df59a7693bff80ed68e183f2c17c1005a12a9f85df"" scan-uuid=cd396ad2-849b-4083-934a-daec12b73ec7"
2022-06-14T07:59:31.565Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:30Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/engage@sha256:9bc932cb0f61ffce60c257df59a7693bff80ed68e183f2c17c1005a12a9f85df': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/engage@sha256:9bc932cb0f61ffce60c257df59a7693bff80ed68e183f2c17c1005a12a9f85df"" scan-uuid=cd396ad2-849b-4083-934a-daec12b73ec7"
2022-06-14T07:59:31.251Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:30Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/organisation@sha256:db97dafc37a5dc274bd29adbf5fb3bb12cf87b70e352512de01041db48cb65f0"" scan-uuid=2516312f-adc9-4425-9ffa-de5e0e9ab7ba"
2022-06-14T07:59:31.251Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:30Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/messaging@sha256:bba164a0e2017d4b83a703832aba29f392f12e22211e19795c19dec66006ccc8': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/messaging@sha256:bba164a0e2017d4b83a703832aba29f392f12e22211e19795c19dec66006ccc8"" scan-uuid=75c5e855-3f53-4071-b33e-c8ca00708102"
2022-06-14T07:59:30.563Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:29Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/bromcom@sha256:db0b1fa9e601535904cce64db31ee7bad70ef13bbe05e248fdf70dc8a7427fd1"" scan-uuid=badc45c0-7274-4183-999f-d3dc1030f832"
2022-06-14T07:59:30.251Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:30Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/organisation@sha256:db97dafc37a5dc274bd29adbf5fb3bb12cf87b70e352512de01041db48cb65f0': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/organisation@sha256:db97dafc37a5dc274bd29adbf5fb3bb12cf87b70e352512de01041db48cb65f0"" scan-uuid=2516312f-adc9-4425-9ffa-de5e0e9ab7ba"
2022-06-14T07:59:29.563Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:29Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/bromcom@sha256:db0b1fa9e601535904cce64db31ee7bad70ef13bbe05e248fdf70dc8a7427fd1': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/bromcom@sha256:db0b1fa9e601535904cce64db31ee7bad70ef13bbe05e248fdf70dc8a7427fd1"" scan-uuid=badc45c0-7274-4183-999f-d3dc1030f832"
2022-06-14T07:59:27.249Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:21Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/scheduler@sha256:e1316828816000df3712379f3fc70cc0538a816eb1f8c8b7aa50c593ca41c4a8': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/scheduler@sha256:e1316828816000df3712379f3fc70cc0538a816eb1f8c8b7aa50c593ca41c4a8"" scan-uuid=fe9bf708-942f-4ea6-8caf-4548fc15a57a"
2022-06-14T07:59:27.248Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:22Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/scheduler@sha256:e1316828816000df3712379f3fc70cc0538a816eb1f8c8b7aa50c593ca41c4a8"" scan-uuid=fe9bf708-942f-4ea6-8caf-4548fc15a57a"
2022-06-14T07:59:26.734Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:22Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/go4schools@sha256:68dd1adc5e2e61285e0446ea44365f3e89450ab7bc923030144fed7a3157d07f"" scan-uuid=240c53c8-03eb-4b8c-86b6-b165b8ca9045"
2022-06-14T07:59:26.734Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:22Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/go4schools@sha256:68dd1adc5e2e61285e0446ea44365f3e89450ab7bc923030144fed7a3157d07f': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/go4schools@sha256:68dd1adc5e2e61285e0446ea44365f3e89450ab7bc923030144fed7a3157d07f"" scan-uuid=240c53c8-03eb-4b8c-86b6-b165b8ca9045"
2022-06-14T07:59:25.115Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:23Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/wcbs3sysacademic@sha256:ae6931f125b990307ba32b538e464eefc8b84eb3f00fa058e76d4da48545bd9a"" scan-uuid=2808b6c2-39ad-4b75-b949-616cdc1acb91"
2022-06-14T07:59:25.115Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:23Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/wcbs3sysacademic@sha256:ae6931f125b990307ba32b538e464eefc8b84eb3f00fa058e76d4da48545bd9a': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/wcbs3sysacademic@sha256:ae6931f125b990307ba32b538e464eefc8b84eb3f00fa058e76d4da48545bd9a"" scan-uuid=2808b6c2-39ad-4b75-b949-616cdc1acb91"
2022-06-14T07:59:25.113Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:22Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/portal@sha256:df149314f3203096269eab9d866ecc6afa41162215c610393c75c729f055e109': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/portal@sha256:df149314f3203096269eab9d866ecc6afa41162215c610393c75c729f055e109"" scan-uuid=b1b4aab5-0678-40ee-bc1e-fc06f51daa63"
2022-06-14T07:59:25.113Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:23Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/portal@sha256:df149314f3203096269eab9d866ecc6afa41162215c610393c75c729f055e109"" scan-uuid=b1b4aab5-0678-40ee-bc1e-fc06f51daa63"
2022-06-14T07:59:22.562Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:21Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/integrations@sha256:073cf28f65c2965ae82ca684201114bda9be3a6eaed14f1dbf4f6ec9876594c1"" scan-uuid=d451eb53-ed7e-4ce9-b70b-9fa5c04f1f4f"
2022-06-14T07:59:21.563Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:21Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/integrations@sha256:073cf28f65c2965ae82ca684201114bda9be3a6eaed14f1dbf4f6ec9876594c1': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/integrations@sha256:073cf28f65c2965ae82ca684201114bda9be3a6eaed14f1dbf4f6ec9876594c1"" scan-uuid=d451eb53-ed7e-4ce9-b70b-9fa5c04f1f4f"
2022-06-14T07:59:19.562Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:15Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/gateway@sha256:a4bb3bc28e232d9f06d6bedd6858f6c8f6bd69863db937ab0cd551b3e4215610': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/gateway@sha256:a4bb3bc28e232d9f06d6bedd6858f6c8f6bd69863db937ab0cd551b3e4215610"" scan-uuid=4df0102a-eadd-4827-b092-6565bb1f6f47"
2022-06-14T07:59:19.561Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:16Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/gateway@sha256:a4bb3bc28e232d9f06d6bedd6858f6c8f6bd69863db937ab0cd551b3e4215610"" scan-uuid=4df0102a-eadd-4827-b092-6565bb1f6f47"
2022-06-14T07:59:18.006Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:15Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/base/envsubst@sha256:ff2c10b3fc7bbd1297da8fe376312ffafd485938ce043627629f378de839848e': unable determine image source"" image-id=""registrydevelop.azurecr.io/base/envsubst@sha256:ff2c10b3fc7bbd1297da8fe376312ffafd485938ce043627629f378de839848e"" scan-uuid=45974791-38a3-4d42-8763-ffe419bc09de"
2022-06-14T07:59:18.006Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:16Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/base/envsubst@sha256:ff2c10b3fc7bbd1297da8fe376312ffafd485938ce043627629f378de839848e"" scan-uuid=45974791-38a3-4d42-8763-ffe419bc09de"
2022-06-14T07:59:16.045Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:15Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/azureactivedirectory@sha256:674b099218eb2f1bf16823487f86e735d786b544159098c02455d2c270b09a2b"" scan-uuid=8df55fbb-cc34-4b6d-b4f1-04bedaf00601"
2022-06-14T07:59:15.985Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:15Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/clubreg@sha256:3c664da54e2bc95f21ad4754a375706831ddb211c74fe6669eea31c88d9bf95e"" scan-uuid=5d2bd07e-6694-4477-ac2e-6b951c5ffdf2"
2022-06-14T07:59:15.045Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:14Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/azureactivedirectory@sha256:674b099218eb2f1bf16823487f86e735d786b544159098c02455d2c270b09a2b': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/azureactivedirectory@sha256:674b099218eb2f1bf16823487f86e735d786b544159098c02455d2c270b09a2b"" scan-uuid=8df55fbb-cc34-4b6d-b4f1-04bedaf00601"
2022-06-14T07:59:15.044Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:14Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/integris@sha256:33343208dd291f86d8478058f4921753a774acf8fc299fa45c008fe281c46ae6': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/integris@sha256:33343208dd291f86d8478058f4921753a774acf8fc299fa45c008fe281c46ae6"" scan-uuid=6e4a4f13-e130-433e-a57a-b83cf19b6a40"
2022-06-14T07:59:15.044Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:14Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/integris@sha256:33343208dd291f86d8478058f4921753a774acf8fc299fa45c008fe281c46ae6"" scan-uuid=6e4a4f13-e130-433e-a57a-b83cf19b6a40"
2022-06-14T07:59:14.985Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:14Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/clubreg@sha256:3c664da54e2bc95f21ad4754a375706831ddb211c74fe6669eea31c88d9bf95e': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/clubreg@sha256:3c664da54e2bc95f21ad4754a375706831ddb211c74fe6669eea31c88d9bf95e"" scan-uuid=5d2bd07e-6694-4477-ac2e-6b951c5ffdf2"
2022-06-14T07:59:09.250Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:08Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/collectionmonitor@sha256:75b29b52fbd86bb803999e16aa87b18b8e92bdb57fe0fa86c958ae7bc6409f6f"" scan-uuid=a1a5547f-fbe2-48dd-8fc7-7e44c0150cd5"
2022-06-14T07:59:08.722Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:08Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/classmark@sha256:d32b5b6571ddb18a6b6d0e64e644351c118b2e3fda958cd4c348f7082eaf3815': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/classmark@sha256:d32b5b6571ddb18a6b6d0e64e644351c118b2e3fda958cd4c348f7082eaf3815"" scan-uuid=1bd88ae7-3eb6-409d-895a-55a9cf148397"
2022-06-14T07:59:08.722Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:08Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/classmark@sha256:d32b5b6571ddb18a6b6d0e64e644351c118b2e3fda958cd4c348f7082eaf3815"" scan-uuid=1bd88ae7-3eb6-409d-895a-55a9cf148397"
2022-06-14T07:59:08.246Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:08Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/collectionmonitor@sha256:75b29b52fbd86bb803999e16aa87b18b8e92bdb57fe0fa86c958ae7bc6409f6f': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/collectionmonitor@sha256:75b29b52fbd86bb803999e16aa87b18b8e92bdb57fe0fa86c958ae7bc6409f6f"" scan-uuid=a1a5547f-fbe2-48dd-8fc7-7e44c0150cd5"
2022-06-14T07:59:06.980Z,"""kubeclarity-runtime-k8s-scanner""","""dev""","time=""2022-06-14T07:59:06Z"" level=error msg=""failed to analyze image: failed to run job manager: failed to run job: failed to create source analyzer=syft: could not fetch image 'registrydevelop.azurecr.io/application/authentication@sha256:1e1f5da819164a865e6708c0c83a75659afd0fdb0488baee0d3bb77f0fa1bcd3': unable determine image source"" image-id=""registrydevelop.azurecr.io/application/authentication@sha256:1e1f5da819164a865e6708c0c83a75659afd0fdb0488baee0d3bb77f0fa1bcd3"" scan-uuid=2cd91854-7de0-4873-8898-c1043e2b4c4a"
2022-06-14T07:59:06.979Z,"""kubeclarity-cis-docker-benchmark-scanner""","""dev""","time=""2022-06-14T07:59:06Z"" level=error msg=""failed to run dockle: unable to initialize a image struct: failed to initialize source: unable to retrieve auth token: invalid username/password: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.. error create docker extractor"" image-id=""registrydevelop.azurecr.io/application/authentication@sha256:1e1f5da819164a865e6708c0c83a75659afd0fdb0488baee0d3bb77f0fa1bcd3"" scan-uuid=2cd91854-7de0-4873-8898-c1043e2b4c4a"
```

Environment:

  • Kubernetes version: v1.23.3
  • KubeClarity version:
    Version: v2.3.0
    Commit: 6f8d28b
    Build Time: 2022-05-24T09:18:38Z
  • Cloud provider or hardware configuration: Azure
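All of the log lines above fail at registry authentication. As a hedged workaround sketch (not a documented KubeClarity fix for this setup), the service principal's credentials can be exposed as a docker-registry pull secret in the scanned namespace so the scanner jobs have something to authenticate with; all names and placeholder values below are assumptions:

```
apiVersion: v1
kind: Secret
metadata:
  name: acr-pull-secret              # hypothetical name
  namespace: dev                     # the scanned namespace from the logs above
type: kubernetes.io/dockerconfigjson
stringData:
  # Replace the placeholders with the service principal's appId/password;
  # "auth" is base64("<appId>:<password>").
  .dockerconfigjson: |
    {"auths":{"registrydevelop.azurecr.io":{"username":"<sp-app-id>","password":"<sp-password>","auth":"<base64 appId:password>"}}}
```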

Job creation failed because of resource limits and requests

When I click the GO button on the kubei web UI, none of the scan jobs can be created in namespaces that have a resource quota.

```
[root@devopsmasteruat01uat kubei]# kubectl describe jobs -n ns-78b43bbffdd2 klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e588628d
Name:           klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e588628d
Namespace:      ns-78b43bbffdd2
Selector:       controller-uid=6a99f2b9-ac0f-4c19-a98a-d54c1cf16a76
Labels:         app=klar-scanner
                kubeiShouldScan=false
Annotations:    <none>
Parallelism:    1
Completions:    1
Pods Statuses:  0 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app=klar-scanner
                controller-uid=6a99f2b9-ac0f-4c19-a98a-d54c1cf16a76
                job-name=klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e588628d
                kubeiShouldScan=false
  Annotations:  sidecar.istio.io/inject: false
                sidecar.portshift.io/inject: false
  Containers:
   klar-scanner:
    Image:      registry.connextpaas.com/development-infra-208909/klar:1.0.1
    Port:       <none>
    Host Port:  <none>
    Args:
      cmp-registry.lorealparis.com.cn/library/php02-3a52d:v3
    Environment:
      CLAIR_ADDR:           clair.kubei
      CLAIR_OUTPUT:         MEDIUM
      KLAR_TRACE:           false
      RESULT_SERVICE_PATH:  http://kubei.kubei:8081/result/
      SCAN_UUID:            380f9f10-06d6-462f-a4e5-b109a1364469
    Mounts:                 <none>
  Volumes:                  <none>
Events:
  Type     Reason        Age                  From            Message
  ----     ------        ----                 ----            -------
  Warning  FailedCreate  44m                  job-controller  Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e5886j5xvs" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  44m                  job-controller  Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e5886cjhmb" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  44m                  job-controller  Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e58865dfxp" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  43m                  job-controller  Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e5886pmgwd" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  42m                  job-controller  Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e5886tp9jm" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  39m                  job-controller  Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e5886pbgz5" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  34m                  job-controller  Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e588678njk" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  28m                  job-controller  Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e5886hl2wp" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  22m                  job-controller  Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e5886zsljj" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
  Warning  FailedCreate  4m10s (x3 over 16m)  job-controller  (combined from similar events): Error creating: pods "klar-scanner-php02-3a52d-1a2a9743-e268-4a5f-8f36-5f67e58869k6dz" is forbidden: failed quota: ns-78b43bbffdd2: must specify limits.cpu,limits.memory,requests.cpu,requests.memory
```
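The events show the job's pods are rejected because they specify no requests/limits under an enforcing ResourceQuota. As a generic Kubernetes workaround sketch (assuming the job spec itself cannot be changed), a LimitRange can default container resources in the namespace so the pods pass quota admission; the name and values below are illustrative:

```
apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-resources   # hypothetical name
  namespace: ns-78b43bbffdd2
spec:
  limits:
    - type: Container
      # Applied to containers that omit their own requests/limits
      defaultRequest:
        cpu: 50m
        memory: 64Mi
      default:
        cpu: 500m
        memory: 512Mi
```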

Failing to deploy.

When I attempt to apply this image, it never starts, and there is an error within the postgres pod.

This is the output from the "kubectl -n kubei get events" command:

```
10s         Warning   InspectFailed       pod/clair-postgres-6775b5bdc6-j2pml    Failed to inspect image "gcr.io/portshift-release/clair/clair-db": rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2/l: invalid argument
10s         Warning   Failed              pod/clair-postgres-6775b5bdc6-j2pml    Error: ImageInspectError
```

I would guess that something in that image or app failed.

Critical and high vulnerabilities found in deployment images

I saw the kubeiShouldScan flag on some of the deployment resources, so I decided to scan all images used to deploy kubei using the Google Container Registry scanner. A lot of critical, high, and medium vulnerabilities were found:

  • postgres:9.6.5: 5 critical, 22 high, 58 medium
  • clair-db: 3 high
  • clair-local-scan: 9 high

Kubei looks interesting as a solution but isn't it a bit ironic that a vulnerability scanner solution uses vulnerable images as part of its deployment? 🤷‍♂️ 😅

I want to scan pods on the local host only

If I want to scan pods on the local host only, I don't want kubei to pull the images through the HTTP or HTTPS protocol. Is there a parameter to set to enable this?

I have some images that do not exist in the Harbor repository; they come from a third-party repository that my network cannot connect to, so I want kubei to scan on the local host only.

Kubei not running

We are running kubei behind a proxy.
I have set everything up as per the README.

However, I am not able to reach the web UI.

```
$ kubectl -n kubei get pod -lapp=kubei
NAME                     READY   STATUS    RESTARTS   AGE
kubei-78887848d5-pfsnf   1/1     Running   0          3m39s
```

Grype Server Logs:

```
2021/12/20 13:42:43 Serving grype server at http://[::]:9991
```

Kubei Logs:

```
time="2021-12-20T14:40:35Z" level=info msg="Webapp is running"
time="2021-12-20T14:40:35Z" level=info msg="Starting Orchestrator server"
```

OOM-killed scan job pod

In the target namespace, the scan job pod for one of the deployment images is getting killed due to OOM, resulting in a scan failure (succeeded state is false).

Please advise how to increase the memory resources of the scan job pod before the scan.
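If the scan jobs come from the Helm chart values shown earlier on this page, one hedged sketch is to raise the vulnerability scanner's resource limits in those values so the job pods get more memory headroom; the numbers below are illustrative, not recommendations:

```
kubeclarity-runtime-scan:
  vulnerability-scanner:
    resources:
      requests:
        memory: "200Mi"
        cpu: "50m"
      limits:
        memory: "2000Mi"   # raised from the 1000Mi shown in the values above
        cpu: "1000m"
```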

Is it possible to scan ECR using kube2iam?

I like the project and will try it out tomorrow, but I am wondering if anyone got it working with kube2iam. I cannot create users and obtain an access key and secret to set up on k8s. I am coming from starboard, which seemed like a good option, but it relies on IRSA, which I don't have configured right now, and I couldn't make it work with kube2iam.
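For reference, kube2iam grants AWS credentials based on a pod annotation, so the open question is whether the scanner pods can carry one. The snippet below is a generic kube2iam example with a hypothetical pod and role ARN, not a confirmed KubeClarity or kubei feature:

```
apiVersion: v1
kind: Pod
metadata:
  name: scanner-example                 # hypothetical pod, for illustration
  annotations:
    # kube2iam intercepts the metadata API and assumes this role for the pod
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/ecr-readonly   # hypothetical role
spec:
  containers:
    - name: scanner
      image: example.registry/scanner:latest   # placeholder image
```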

SBOM db image requires authentication for pulling

What happened:

The SBOM db image seems to require authentication to pull.

```
$ docker pull ghcr.io/cisco-open/kubeclarity-sbom-db:v2.0.0
Error response from daemon: Head https://ghcr.io/v2/cisco-open/kubeclarity-sbom-db/manifests/v2.0.0: unauthorized
```

What you expected to happen:

It should not require authentication.

How to reproduce it (as minimally and precisely as possible):

```
$ docker pull ghcr.io/cisco-open/kubeclarity-sbom-db:v2.0.0
```

Azure Container Registry

Hello,

I want to scan the Azure Container Registry images used in the AKS cluster. Is there any way to integrate kubei with ACR?

API Available?

I'd like to retrieve kubei run statistics from an external tool. Is there an API available?

For example, to retrieve the number of vulnerabilities for a particular severity?

How can I achieve this from an external tool?

Vulnerability acknowledgement

Is your feature request related to a problem? Please describe.
Currently it's hard to see when new vulnerabilities appear, and most of the time the vulnerabilities that kubeclarity shows turn out not to apply to the environment once you look through a specific vulnerability.

Describe the solution you'd like
It would be great if there were a way to hide the vulnerabilities you have already gone through for specific applications.

Describe alternatives you've considered
None

Additional context
