
stackrox / stackrox


The StackRox Kubernetes Security Platform performs a risk analysis of the container environment, delivers visibility and runtime alerts, and provides recommendations to proactively improve security by hardening the environment.

License: Apache License 2.0

Languages: Shell 3.01%, Smarty 0.52%, Makefile 0.34%, Go 65.30%, Dockerfile 0.16%, Python 0.42%, Groovy 4.37%, Java 0.01%, JavaScript 9.32%, HTML 0.01%, TypeScript 16.28%, CSS 0.23%, Tcl 0.04%, XSLT 0.01%, C 0.01%
Topics: containers, hacktoberfest, k8s, kubernetes, security

stackrox's Issues

central crashes at startup with quay.io/stackrox-io/main:3.72.1 (also: 3.72.0)

Hello,

the central pod crashes while starting up:

cve/fetcher: 2022/10/20 09:34:33.286511 orchestrator.go:237: Info: Successfully fetched 0 OpenShift CVEs
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x33b2efd]

goroutine 117 [running]:
github.com/stackrox/rox/central/cve/converter/utils.nvdCvssv2ToProtoCvssv2(0x0)
	github.com/stackrox/rox/central/cve/converter/utils/convert_utils.go:173 +0x1d

This was an installation with helm stackrox-central-services-71.0.0, upgraded to stackrox-central-services-72.0.0, which then started crashing some time (less than 1 day) after the upgrade. Still crashing with 72.1.0.

Values for helm look like this:

image:
  registry: <proxy for quay.io>/stackrox-io

env:
  proxyConfig: |
    url: http://...
    excludes:
    - ...

central:
  exposure:
     loadBalancer:
       enabled: true

scanner:
  autoscaling:
    disable: true

This is the full log for central:
crash.log

CVE Scanning for node not possible

Hi,

OCP Version - Central: 4.9.33
OCP Version - Secured Cluster: 4.10.20, 4.9.33, 4.8.19
Stackrox Version: 3.71.0

We are deploying Stackrox on OCP 4. The Vulnerability Management details page for a node of a secured cluster does not show any CVE data; instead, I can only see this message:

CVE DATA MAY BE INACCURATE
Node unsupported.
Scanning this node is not supported at this time. Please see the release notes for more information.

Question 1: where can I find the release notes?
Question 2: is there a roadmap for new features? For example, I saw #2588 and am wondering when to expect it in a versioned release.
Question 3: how can I enable scans for CoreOS nodes?


How does stackrox detect CVE-2020-8561?

Could you please tell me how, technically, stackrox looks for the vulnerability CVE-2020-8561?
vulnerability link https://groups.google.com/g/kubernetes-security-announce/c/RV2IhwcrQsY

The fact is that with our cloud provider, Yandex Cloud, it is technically impossible to perform the redirect described in the vulnerability, yet stackrox still reports the vulnerability as valid.
So I want to understand how, technically, you are looking for it. Where can I see this in the code?

Differences between Stackrox open source and the enterprise version

Hi!

I was trying to get more info about the differences between this project and the enterprise version.

How should the deployment be done for this project? stackrox.io/main:3.70.0 requires authentication to the registry.

Moreover, are any of the features dropped in the open source version?

Is having public container images on the roadmap?

Many thanks!

Scanner build-updater generate-dump pulls from private gcs bucket

Executing scanner build-updater generate-dump as part of the build instructions attempts to pull data from the stackrox-scanner-feed GCS bucket:

./bin/updater generate-dump --out-file image/scanner/dump/dump.zip

This results in the following error:

ERRO[0050] an error occurred when fetching update        error="StackRox updater cannot be run without a service account" updater name=stackrox

Though I did not supply a gsa, I would think I'd need to request access to this bucket if I did. Would it make sense to use a publicly accessible endpoint instead? Let me know if I'm completely off base here.

Enable gosec rules

Currently we have only one (1) gosec rule enabled in golangci-lint config.

stackrox/.golangci.yml

Lines 51 to 53 in 334e6b7

gosec:
  includes:
    - G601

Ideally we should enable all of them. Every PR should fix one rule. There is a chance that some rules already pass and we only need to enable them. After including a new rule, please ensure that make golangci-lint passes; if there are errors, please fix them. (A sketch of a typical finding and its fix follows the list below.)

  • G101: Look for hard coded credentials #3566
  • G102: Bind to all interfaces #3567
  • G103: Audit the use of unsafe block #3568
  • G104: Audit errors not checked #3936
  • G106: Audit the use of ssh.InsecureIgnoreHostKey #3677
  • G107: Url provided to HTTP request as taint input
  • G108: Profiling endpoint automatically exposed on /debug/pprof #3677
  • G109: Potential Integer overflow made by strconv.Atoi result conversion to int16/32 #3677
  • G110: Potential DoS vulnerability via decompression bomb
  • G111: Potential directory traversal #3629
  • G112: Potential slowloris attack
  • G113: Usage of Rat.SetString in math/big with an overflow (CVE-2022-23772) #3631
  • G114: Use of net/http serve function that has no support for setting timeouts
  • G201: SQL query construction using format string #3677
  • G202: SQL query construction using string concatenation #3677
  • G203: Use of unescaped data in HTML templates #3677
  • G204: Audit use of command execution
  • G301: Poor file permissions used when creating a directory
  • G302: Poor file permissions used with chmod
  • G303: Creating tempfile using a predictable path #3560
  • G304: File path provided as taint input
  • G305: File traversal when extracting zip/tar archive
  • G306: Poor file permissions used when writing to a new file
  • G307: Deferring a method which returns an error #3677
  • G401: Detect the usage of DES, RC4, MD5 or SHA1
  • G402: Look for bad TLS connection settings
  • G403: Ensure minimum RSA key length of 2048 bits #3677
  • G404: Insecure random number source (rand)
  • G501: Import blocklist: crypto/md5
  • G502: Import blocklist: crypto/des #3677
  • G503: Import blocklist: crypto/rc4 #3677
  • G504: Import blocklist: net/http/cgi #3677
  • G505: Import blocklist: crypto/sha1
  • G601: Implicit memory aliasing of items from a range statement
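
As an illustration of the kind of change enabling a rule can require, here is a minimal Go sketch of the implicit-aliasing pattern reported by G601 (the one rule already enabled) and its fix. The names are made up for illustration and do not come from the stackrox code base:

// Illustration only: the G601 "implicit memory aliasing" pattern and its fix.
package main

import "fmt"

type finding struct{ id string }

// pointersBad takes the address of the range variable, so on Go versions
// before 1.22 every returned pointer aliases the same variable (the G601 finding).
func pointersBad(fs []finding) []*finding {
    var out []*finding
    for _, f := range fs {
        out = append(out, &f)
    }
    return out
}

// pointersGood addresses the slice element instead, which is always safe.
func pointersGood(fs []finding) []*finding {
    var out []*finding
    for i := range fs {
        out = append(out, &fs[i])
    }
    return out
}

func main() {
    fs := []finding{{"G101"}, {"G102"}}
    fmt.Println(pointersBad(fs)[0].id)  // may print "G102" on Go < 1.22
    fmt.Println(pointersGood(fs)[0].id) // always prints "G101"
}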

roxctl generate netpol adds `status: {}` metadata, breaking the kubectl apply command

The generated network policy YAML files contain the status: {} metadata at the bottom of the files.

When using the kubectl apply -f command, it generates an error: error validating "FILE": error validating data: ValidationError(NetworkPolicy): unknown field "status" in io.k8s.api.networking.v1.NetworkPolicy; if you choose to ignore these errors, turn validation off with --validate=false

I can either delete the metadata or use --validate=false to get around it, but it is a minor annoyance.
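
As a stopgap, here is a throwaway Go sketch of the workaround, assuming one NetworkPolicy document per generated file; it uses sigs.k8s.io/yaml and is not something roxctl ships:

// strip-status.go: print a roxctl-generated NetworkPolicy YAML file with the
// superfluous "status" field removed so that kubectl apply validation passes.
package main

import (
    "fmt"
    "os"

    "sigs.k8s.io/yaml"
)

func main() {
    in, err := os.ReadFile(os.Args[1])
    if err != nil {
        panic(err)
    }
    var obj map[string]interface{}
    if err := yaml.Unmarshal(in, &obj); err != nil {
        panic(err)
    }
    delete(obj, "status") // the field kubectl's validation rejects
    out, err := yaml.Marshal(obj)
    if err != nil {
        panic(err)
    }
    fmt.Print(string(out))
}

Piping its output into kubectl apply -f - avoids editing the generated files by hand.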

kubectl version: WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version. Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-21T14:25:45Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"darwin/amd64"} Kustomize Version: v4.5.7 Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5+3afdacb", GitCommit:"3c28e7a79b58e78b4c1dc1ab7e5f6c6c2d3aedd3", GitTreeState:"clean", BuildDate:"2022-05-10T16:30:48Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

Add linter check/support for SPDX headers

Goal: Have a linter check that fails if any given source file does not contain an SPDX header

We should have SPDX headers in all of our source files in all repositories that we want to open source.
Example:

// Copyright Red Hat [or: Copyright StackRox Authors [or similar]]
// SPDX-License-Identifier: Apache-2.0

The task is to create a linter check that looks for these headers and fails if they are not present.
Our custom linters are called from the main function in tools/roxvet/roxvet.go and live in the tools/analyzers directory, which should have plenty of examples.

Internal link: https://issues.redhat.com/browse/ROX-9267
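
A rough sketch of what such an analyzer could look like, built on the golang.org/x/tools/go/analysis framework that the roxvet analyzers use; the package name and the exact header check are assumptions, not the final implementation:

// Package spdxheader sketches an analyzer that reports Go files missing an
// SPDX license header. Illustration only, not the actual roxvet linter.
package spdxheader

import (
    "strings"

    "golang.org/x/tools/go/analysis"
)

var Analyzer = &analysis.Analyzer{
    Name: "spdxheader",
    Doc:  "check that every file starts with an SPDX license header",
    Run:  run,
}

func run(pass *analysis.Pass) (interface{}, error) {
    for _, file := range pass.Files {
        hasHeader := false
        for _, cg := range file.Comments {
            // Only consider comments that appear before the package clause.
            if cg.End() >= file.Package {
                break
            }
            if strings.Contains(cg.Text(), "SPDX-License-Identifier:") {
                hasHeader = true
                break
            }
        }
        if !hasHeader {
            pass.Reportf(file.Package, "file is missing an SPDX-License-Identifier header")
        }
    }
    return nil, nil
}

The analyzer would then be registered alongside the others in the main function of tools/roxvet/roxvet.go.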

Fix shell scripts to pass shellcheck linter

In scripts/style/shellcheck_skip.txt we have a list of files that are excluded from linting. The goal of this issue is to fix them so they pass shellcheck. To run shellcheck, use make shell-style. Ideally, every PR should fix one file.

Provide a way to easily build collector drivers

Currently, the collector component of stackrox uses a kernel module or eBPF probe (referred to as drivers from this point on), in order to gather information on running processes, network connections, etc.

Because these drivers need to be built for a specific kernel, members of the community would need to either:

  • Run stackrox in one of the platforms we support (we do support most major distributions as well as cloud providers).
  • Run stackrox with collector disabled/in a crashloop, missing some of the stackrox functionality.
  • Build their own driver and supply it to stackrox at runtime.

The last point is the subject of this issue: I believe it would be a nice addition to have a user-friendly way for community members to compile their own drivers. Some potential solutions I can think of:

  • Provide a docker image that could be run with the kernel headers and collector code mounted on it, leaving the compiled drivers on the host. This could further be improved with a make target to not only compile the drivers, but also tag a collector image with those drivers embedded in it, ready to be used in a local deployment. Alternatively, we could come up with a way to create a support package with those same drivers that could be uploaded to central through roxctl.
  • Create a way for users to add kernels to be compiled and distributed through channels similar to how we distribute our supported drivers. This is similar to how Falco maintains their community drivers, but it would incur extra expense and effort for stackrox to support and maintain such a system.

There are of course some extra wrinkles that might need to be ironed out, for instance some kubernetes test tools provide their own VM with the kernel headers for it distributed as a layer in a separate image, but I think having a simple approach (even if somewhat clunky initially) could encourage the community to build and improve upon it.

Nil pointer ref in nvdCvssv2ToProtoCvssv2

Hello! We were trying to get stackrox set up on OpenShift and ended up with crash loop backoff for the central pod with a nil pointer ref:

cve/fetcher: 2022/10/20 17:11:29.292137 manager_impl.go:55: Info: successfully copied preloaded CVE istio files to persistent volume: "/var/lib/stackrox/cve/istio"
cve/fetcher: 2022/10/20 17:11:29.292242 orchestrator.go:62: Info: Found 0 clusters to scan for orchestrator vulnerabilities.
cve/fetcher: 2022/10/20 17:11:29.293043 orchestrator.go:237: Info: Successfully fetched 0 Kubernetes CVEs
cve/fetcher: 2022/10/20 17:11:29.293278 orchestrator.go:237: Info: Successfully fetched 0 OpenShift CVEs
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x33d357d]
goroutine 140 [running]:
github.com/stackrox/rox/central/cve/converter/utils.nvdCvssv2ToProtoCvssv2(0x0)
github.com/stackrox/rox/central/cve/converter/utils/convert_utils.go:173 +0x1d
github.com/stackrox/rox/central/cve/converter/utils.NvdCVEToEmbeddedCVE(0xc00fcc4f40, 0x1)
github.com/stackrox/rox/central/cve/converter/utils/convert_utils.go:128 +0xd1
github.com/stackrox/rox/central/cve/converter/utils.NvdCVEsToEmbeddedCVEs({0xc00f5f4210, 0x16, 0x0?}, 0x15117?)
github.com/stackrox/rox/central/cve/converter/utils/convert_utils.go:219 +0x97
github.com/stackrox/rox/central/cve/fetcher.(*istioCVEManager).updateCVEs(0xc00090e300?, {0xc00f5f4210, 0x16, 0x16})
github.com/stackrox/rox/central/cve/fetcher/istio.go:71 +0x45
github.com/stackrox/rox/central/cve/fetcher.(*istioCVEManager).initialize(0xc002d7d030)
github.com/stackrox/rox/central/cve/fetcher/istio.go:46 +0xc5
github.com/stackrox/rox/central/cve/fetcher.(*orchestratorIstioCVEManagerImpl).initialize(0xc00f5ca8c0)
github.com/stackrox/rox/central/cve/fetcher/manager_impl.go:58 +0x2ba
github.com/stackrox/rox/central/cve/fetcher.NewOrchestratorIstioCVEManagerImpl({0x722b3e0?, 0xc0095e4000}, {0x0?, 0x0}, {0x7221d20?, 0xc002d7cf50}, {0x721c1c8?, 0xc00f5c5d00}, 0xc00f5db200)
github.com/stackrox/rox/central/cve/fetcher/manager.go:72 +0x372

There was likely something wrong with our config or setup, but we figured y'all would want to know about a panic. To me it looks like the problem is that the pointer to the BaseMetricV2 passed into the method that panicked was nil. The problem may be here:

if nvdCVE.Impact != nil {
		cvssv2, err := nvdCvssv2ToProtoCvssv2(nvdCVE.Impact.BaseMetricV2)

This was on a new setup. Central was installed but Secure Cluster wasn't set up yet. Tagging in @xxlhacker because he did the setup and may know more than me!
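
For what it's worth, here is a small self-contained sketch of the guard that would avoid this particular panic; the types are stand-ins for the NVD structs named in the trace, and the actual fix in stackrox may look different:

package main

import "fmt"

// Stub types standing in for the NVD schema structs referenced in the trace.
type baseMetricV2 struct{ score float64 }
type impact struct{ BaseMetricV2 *baseMetricV2 }
type nvdCVE struct{ Impact *impact }

// convert panics when m is nil, mirroring nvdCvssv2ToProtoCvssv2 in the trace.
func convert(m *baseMetricV2) float64 { return m.score }

func main() {
    cve := nvdCVE{Impact: &impact{}} // Impact set, BaseMetricV2 nil: the crashing shape
    if cve.Impact != nil && cve.Impact.BaseMetricV2 != nil {
        fmt.Println(convert(cve.Impact.BaseMetricV2))
    } else {
        fmt.Println("skipping CVSS v2 conversion: no BaseMetricV2 data")
    }
}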

Local Image Scan in Build Phase

Greetings!
I want to thank all the contributors to the project for the excellent work!
I have a question:
How can the image be scanned and/or checked for consistency locally in the pipeline during the build phase?

I would like to scan the newly created image in ci/cd pipeline without using registry publishing.

Are there plans to add this functionality without relying on plugins for Jenkins, etc.?

I would suggest this approach:
Create a "RoxScannerCLI" binary with the following functional commands:
RoxScannerCLI [command].

  1. Scan - scan local image by name/tag/assembly
  2. Check - check if the image corresponds to the compliance policies configured in StackRox.
  3. Daemon - continuous image scanner.
  4. Help - help:)

Flags:

  1. --StackRoxServer - StackRox server address, get policies from it, upload scan results there and agree with it to stop/continue pipelining.
  2. --ApiToken - well, it is clear here:)
  3. -no-verify - do not validate certificate

Why?
This method would allow us to prevent vulnerable, non-compliant images from being published to the repository at a very early stage (the build stage). I think this fits perfectly into the shift-left paradigm.
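
To make the proposal concrete, here is a rough Go sketch of the suggested command surface using spf13/cobra; every name and flag comes from the proposal above and is hypothetical, not an existing tool:

// Sketch of the proposed RoxScannerCLI command surface. Illustration only.
package main

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"
)

func main() {
    var server, apiToken string
    var noVerify bool

    root := &cobra.Command{Use: "roxscannercli"}
    root.PersistentFlags().StringVar(&server, "stackrox-server", "", "StackRox Central address to pull policies from and upload results to")
    root.PersistentFlags().StringVar(&apiToken, "api-token", "", "API token used to authenticate against Central")
    root.PersistentFlags().BoolVar(&noVerify, "no-verify", false, "skip TLS certificate validation")

    root.AddCommand(&cobra.Command{
        Use:   "scan [image]",
        Short: "scan a locally built image by name/tag",
        Args:  cobra.ExactArgs(1),
        RunE: func(_ *cobra.Command, args []string) error {
            // Placeholder: a real implementation would index the local image
            // layers and send them to Central/Scanner for vulnerability lookup.
            fmt.Printf("would scan local image %q against %s (insecure: %v)\n", args[0], server, noVerify)
            return nil
        },
    })
    root.AddCommand(&cobra.Command{
        Use:   "check [image]",
        Short: "check an image against the policies configured in StackRox",
        Args:  cobra.ExactArgs(1),
        RunE: func(_ *cobra.Command, args []string) error {
            // Placeholder: a real implementation would evaluate build-time policies
            // and return a non-zero exit code to stop the pipeline on violations.
            fmt.Printf("would check %q against Central policies (token set: %v)\n", args[0], apiToken != "")
            return nil
        },
    })

    if err := root.Execute(); err != nil {
        os.Exit(1)
    }
}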

Duplicate Deployments in Dropdown in Policy Exclusion Scope UI

https://cloud-native.slack.com/archives/C01TDE3GK0E/p1660665385988579

When adding a policy exclusion scope, the dropdown for Deployment is unordered and has duplicates; additionally, there's no way to type to search the list or to enter a deployment that doesn't currently exist (might be ephemeral or an expected deployment you want to apply policy to before it's deployed).

It appears that this is taking the entire output of the DeploymentsService API endpoint (/v1/deployments) and populating the dropdown with that list. Recommendation to improve this experience:

  1. Apply UNIQUE to the array of deployments. (Matching in the policy is done by policy name and not by something like GUID [as done for clusters] so this should have no practical impact on functionality.)
  2. Sort in alphabetical order.
  3. If a cluster is selected in the scope, filter the list by deployments in that cluster rather than listing all deployments.
  4. If a namespace is selected, filter the list by the namespace selected.
  5. Use a field that allows for typing to search the list and/or enter a value that doesn't currently exist instead of the current drop-down style

A better way to install stackrox operator without OLM

Today stackrox has a great way of installing the stackrox operator through OLM.
But when trying to install the operator on plain Kubernetes it's not as easy.

As you mentioned at the first community meeting, everyone knows how to use helm charts, so that would probably be a good way to solve this problem.

The bad thing is of course that we would have to maintain yet another helm chart.

Stackrox-chart is already taken by redhat-cop and is a helm chart to install the operator through OLM in a gitops scenario:
https://artifacthub.io/packages/helm/redhat-cop/stackrox-chart
https://github.com/redhat-cop/helm-charts/tree/master/charts/stackrox

But a good name could be as simple as stackrox-operator.

Why?
To be able to grow the community outside of OpenShift, we need to provide Kubernetes users with a good way of installing the operator.

Feature Request: Allow exemptions to Exempt by Username/Groups

Currently, you can only create policies that exempt particular usernames/groups from a particular action by modifying criteria (and generally duplicating policies) (source). This is non-ideal for two reasons. First, it's a bit of an anti-pattern to create different criteria to do exemptions. Second, with some policies being uneditable, you'd have to duplicate policies, which could lead to undesired drift in the detection logic.

Ideally the exclusions logic would support something that looks like this:

    {
      "name": "Don't Alert for blah blah blah",
      "deployment": {
        "name": "",
        "scope": {
          "cluster": "",
          "namespace": "action_expected",
          "label": null
        },
        "actor": {
           "username" : "some username or username regex",
           "groups" : "some group or group regex"
        }
      },
      "image": null,
      "expiration": null
    },

Refresh SAML 2.0 metadata when dynamic configuration is chosen

Description

In SAML 2.0 type of authentication provider, we allow users to choose either dynamic configuration or static configuration.

Static configuration requires the user to manually input the IdP Issuer, IdP SSO URL and IdP Certificate(s) (PEM). Dynamic configuration allows Central to obtain this data automatically from the IdP Metadata URL.

At the moment, when dynamic configuration is chosen, we only call the IdP Metadata URL once - at the creation of the authentication provider. This makes it necessary to re-create or manually update the SAML 2.0 auth provider if the IdP Issuer, IdP SSO URL or IdP Certificate(s) (PEM) change.

This issue suggests periodically calling the IdP Metadata URL to refresh the SAML 2.0 auth provider configuration. The call should be made at a reasonable interval so that, on the one hand, we don't congest the network, but we also refresh values often enough that users won't get failed login attempts. I suggest 5 minutes as the interval.

Code references

  1. Code calling IdP Metadata URL
    func configureIDPFromMetadataURL(ctx context.Context, sp *saml2.SAMLServiceProvider, metadataURL string) error {
        entityID, descriptor, err := fetchIDPMetadata(ctx, metadataURL)
        if err != nil {
            return errors.Wrap(err, "fetching IdP metadata")
        }
        sp.IdentityProviderIssuer = entityID
        return configureIDPFromDescriptor(sp, descriptor)
    }

    func fetchIDPMetadata(ctx context.Context, url string) (string, *types.IDPSSODescriptor, error) {
        request, err := http.NewRequest(http.MethodGet, url, nil)
        if err != nil {
            return "", nil, errors.Wrap(err, "could not create HTTP request")
        }
        httpClient := http.DefaultClient
        if stringutils.ConsumeSuffix(&request.URL.Scheme, "+insecure") {
            httpClient = insecureHTTPClient
        }
        resp, err := httpClient.Do(request.WithContext(ctx))
        if err != nil {
            return "", nil, errors.Wrap(err, "fetching metadata")
        }
        defer func() {
            _ = resp.Body.Close()
        }()
        var descriptors entityDescriptors
        if err := xml.NewDecoder(resp.Body).Decode(&descriptors); err != nil {
            return "", nil, errors.Wrap(err, "parsing metadata XML")
        }
        if len(descriptors) != 1 {
            return "", nil, errors.Errorf("invalid number of entity descriptors in metadata response: expected exactly one, got %d", len(descriptors))
        }
        desc := descriptors[0]
        if desc.IDPSSODescriptor == nil {
            return "", nil, errors.New("metadata contains no IdP SSO descriptor")
        }
        if !desc.ValidUntil.IsZero() && !desc.ValidUntil.After(time.Now()) {
            return "", nil, fmt.Errorf("IdP metadata has expired at %v", desc.ValidUntil)
        }
        return desc.EntityID, desc.IDPSSODescriptor, nil
    }
  2. Backend for SAML 2.0 authentication provider https://github.com/stackrox/stackrox/blob/8ca46e7fe19afee5d07e76eef118b03a6329b0f1/pkg/auth/authproviders/saml/backend_impl.go
  3. Potential place to insert the refresh in code (before constructing the login URL):
    func (p *backendImpl) loginURL(clientState string) (string, error) {
        doc, err := p.sp.BuildAuthRequestDocument()
        if err != nil {
            return "", errors.Wrap(err, "could not construct auth request")
        }
        authURL, err := p.sp.BuildAuthURLRedirect(idputil.MakeState(p.id, clientState), doc)
        if err != nil {
            return "", errors.Wrap(err, "could not construct auth URL")
        }
        return authURL, nil
    }

Note: https://github.com/stackrox/stackrox/blob/8ca46e7fe19afee5d07e76eef118b03a6329b0f1/pkg/auth/authproviders/saml/backend_impl.go can be used concurrently. The refresh should occur only when no other users are trying to log in - this can be achieved by adding a lock to the backendImpl struct.
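
A minimal sketch of the suggested refresh loop, assuming the lock lives on backendImpl as proposed; the names mirror the snippets above, but the wiring is hypothetical:

package saml

import (
    "context"
    "sync"
    "time"
)

type backendImpl struct {
    mu          sync.Mutex // guards the service provider while a refresh or login is in flight
    metadataURL string
    // sp *saml2.SAMLServiceProvider in the real implementation
}

// startMetadataRefresh periodically re-reads the IdP metadata so that the issuer,
// SSO URL and certificates stay current without re-creating the auth provider.
func (p *backendImpl) startMetadataRefresh(ctx context.Context, interval time.Duration) {
    go func() {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
                p.mu.Lock()
                // err := configureIDPFromMetadataURL(ctx, p.sp, p.metadataURL)
                // On error, log and keep the previous configuration instead of failing logins.
                p.mu.Unlock()
            }
        }
    }()
}

Taking the same lock in loginURL would ensure a login never observes a half-applied configuration.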

Manage stackrox internal resources through operator

Today the operator solves the basic issue of setting up stackrox itself, which is great.
But as I think most of us can agree, we need more :).

I would like to configure most resources that exist inside stackrox through CRDs.
This will enable me to configure things like access to container registries using gitops, instead of having to go in and click around in the UI manually.

This will also help a lot when having big environments with multiple clusters.

I know this is a big feature request and it's probably better to split it up into smaller issues, but at least I wanted to start the discussion about this functionality.

Question about passwords hashing into RocksDB

Hello !

I would like to know which algorithm is used to store users' passwords (e.g. admin) in RocksDB.

It seems you need to use bcrypt when using the current helm chart to force the admin password, but the stored password could use another hashing algorithm.

Thank you in advance

~question

Add Rocky And Alma Linux Scanner Support

Hello!
I bet there are a few who'd like this: it'd be great to support Rocky and Alma Linux (at least partially). Specifically, to have these image releases and their components identified by the scanner.

Retry transient download failures in operator build process (ROX-12397)

Example curl failure:

$ make -C operator kuttl
make: Entering directory '/go/src/github.com/stackrox/stackrox/operator'
+ mkdir -p bin
+ curl --fail --location --output /go/src/github.com/stackrox/stackrox/operator//bin/kubectl-kuttl-0.11.0-verbose-resource https://github.com/porridge/kuttl/releases/download/v0.11.0-verbose-resource/kubectl-kuttl_0.11.0-verbose-resource_linux_x86_64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0

 72 43.8M   72 31.5M    0     0   138M      0 --:--:-- --:--:-- --:--:--  138M
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
make: *** [Makefile:196: kuttl] Error 56
make: Leaving directory '/go/src/github.com/stackrox/stackrox/operator'

It would be great to try and add a shell retry loop around the curl invocation to retry such transient issues.

[Collector] Segmentation Fault on all nodes in OpenShift 4.9.33

Hi,

Disclaimer: I have opened the same issue at stackrox/collector#838 because I am not sure in which repository this should be tracked, as here we have an area/collector label. Please close whichever one is in the wrong location.

We are experiencing crashes in collector containers across all nodes in one of our OpenShift clusters.

Debug Log:

Collector Version: 3.9.0
OS: Red Hat Enterprise Linux CoreOS 49.84.202205050701-0 (Ootpa)
Kernel Version: 4.18.0-305.45.1.el8_4.x86_64
Starting StackRox Collector...
[I 20220926 112218 HostInfo.cpp:126] Hostname: '<redacted>'
[I 20220926 112218 CollectorConfig.cpp:119] User configured logLevel=debug
[I 20220926 112218 CollectorConfig.cpp:149] User configured collection-method=kernel_module
[I 20220926 112218 CollectorConfig.cpp:206] Afterglow is enabled
[D 20220926 112218 HostInfo.cpp:200] EFI directory exist, UEFI boot mode
[D 20220926 112218 HostInfo.h:100] identified kernel release: '4.18.0-305.45.1.el8_4.x86_64'
[D 20220926 112218 HostInfo.h:101] identified kernel version: '#1 SMP Wed Apr 6 13:48:37 EDT 2022'
[D 20220926 112218 HostInfo.cpp:297] SecureBoot status is 2
[D 20220926 112218 collector.cpp:254] Core dump not enabled
[I 20220926 112218 collector.cpp:302] Module version: 2.0.1
[I 20220926 112218 collector.cpp:329] Attempting to download kernel module - Candidate kernel versions:
[I 20220926 112218 collector.cpp:331] 4.18.0-305.45.1.el8_4.x86_64
[D 20220926 112218 GetKernelObject.cpp:148] Checking for existence of /kernel-modules/collector-4.18.0-305.45.1.el8_4.x86_64.ko.gz and /kernel-modules/collector-4.18.0-305.45.1.el8_4.x86_64.ko
[D 20220926 112218 GetKernelObject.cpp:151] Found existing compressed kernel object.
[I 20220926 112218 collector.cpp:262]
[I 20220926 112218 collector.cpp:263] This product uses kernel module and ebpf subcomponents licensed under the GNU
[I 20220926 112218 collector.cpp:264] GENERAL PURPOSE LICENSE Version 2 outlined in the /kernel-modules/LICENSE file.
[I 20220926 112218 collector.cpp:265] Source code for the kernel module and ebpf subcomponents is available upon
[I 20220926 112218 collector.cpp:266] request by contacting [email protected].
[I 20220926 112218 collector.cpp:267]
[I 20220926 112218 collector.cpp:162] Inserting kernel module /module/collector.ko with indefinite removal and retry if required.
[D 20220926 112218 collector.cpp:109] Kernel module arguments: s_syscallIds=26,27,56,57,246,247,248,249,94,95,14,15,156,157,216,217,222,223,4,5,22,23,12,13,154,155,172,173,214,215,230,231,282,283,288,289,292,293,96,97,182,183,218,219,224,225,16,186,234,194,195,192,193,200,201,198,199,36,37,18,19,184,185,220,221,226,227,-1 verbose=0 exclude_selfns=1 exclude_initns=1
[I 20220926 112218 collector.cpp:183] Done inserting kernel module /module/collector.ko.
[I 20220926 112218 collector.cpp:215] gRPC server=sensor.mcs-security.svc:443
[I 20220926 112218 CollectorService.cpp:50] Config: collection_method:kernel_module, useChiselCache:1, snapLen:0, scrape_interval:30, turn_off_scrape:0, hostname:<redacted>, logLevel:DEBUG
[I 20220926 112218 CollectorService.cpp:79] Network scrape interval set to 30 seconds
[I 20220926 112218 CollectorService.cpp:82] Waiting for GRPC server to become ready ...
[I 20220926 112218 CollectorService.cpp:87] GRPC server connectivity is successful
[D 20220926 112218 ConnTracker.cpp:314] ignored l4 protocol and port pairs
[D 20220926 112218 ConnTracker.cpp:316] udp/9
[I 20220926 112218 NetworkStatusNotifier.cpp:187] Started network status notifier.
[I 20220926 112218 NetworkStatusNotifier.cpp:203] Established network connection info stream.
[D 20220926 112218 SysdigService.cpp:262] Updating chisel and flushing chisel cache
[D 20220926 112218 SysdigService.cpp:263] New chisel:
args = {}
function on_event()
    return true
end
function on_init()
    filter = "not container.id = 'host'\n"
    chisel.set_filter(filter)
    return true
end

[I 20220926 112218 SignalServiceClient.cpp:43] Trying to establish GRPC stream for signals ...
[I 20220926 112218 SignalServiceClient.cpp:61] Successfully established GRPC stream for signals.
[D 20220926 112219 ConnScraper.cpp:406] Could not open process directory 1626873: No such file or directory
[D 20220926 112219 ConnScraper.cpp:406] Could not open process directory 1626877: No such file or directory
[W 20220926 112219 ProtoAllocator.h:41] Allocating a memory block on the heap for the arena, this is inefficient and usually avoidable
collector[0x44746d]
/lib64/libc.so.6(+0x4eb20)[0x7f8425ceeb20]
Caught signal 11 (SIGSEGV): Segmentation fault
/bootstrap.sh: line 94:    11 Segmentation fault      (core dumped) eval exec "$@"
Collector kernel module has already been loaded.
Removing so that collector can insert it at startup.

I am not sure how to debug this as all daemonSet containers experience this problem.

We are using StackRox 3.71.0. I have tried with collector images 3.9.0 and 3.11.0. Please reach out for any missing information.

`roxctl central generate interactive` falsely expects a registry

Following the defined process for a manual central install will result in a setup.sh script that prompts for docker credentials even though opensource images are used.
Ideally, roxctl should not prompt for registry credentials when it is used to generate an installer for the opensource flavor.
A good starting point for the investigation could be https://github.com/stackrox/stackrox/blob/master/roxctl/central/generate/interactive.go

$ roxctl central generate interactive
Enter path to the backup bundle from which to restore keys and certificates (optional):
Enter PEM cert bundle file (optional):
Enter Create PodSecurityPolicy resources (for pre-v1.25 Kubernetes) (default: "true"):
Enter administrator password (default: autogenerated):
Enter orchestrator (k8s, openshift): k8s
Enter the directory to output the deployment bundle to (default: "central-bundle"):
Enter default container images settings (stackrox.io, rhacs, opensource); it controls repositories from where to download the images, image names and tags format (default: "rhacs"): opensource
Enter the method of exposing Central (lb, np, none) (default: "none"):
Enter main image to use(if unset, a default will be used according to --image-defaults) (default: "quay.io/stackrox-io/main:3.71.0"):
Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"):
Enter whether to enable telemetry (default: "true"):
Enter the deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"):
Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional):
Enter scanner-db image to use(if unset, a default will be used according to --image-defaults) (default: "quay.io/stackrox-io/scanner-db:3.71.0"):
Enter scanner image to use(if unset, a default will be used according to --image-defaults) (default: "quay.io/stackrox-io/scanner:3.71.0"):
Enter Central volume type (hostpath, pvc): pvc
Enter external volume name (default: "stackrox-db"):
Enter external volume size in Gi (default: "100"):
Enter storage class name (optional if you have a default StorageClass configured):
INFO:   Generating deployment bundle...
INFO:   Deployment bundle includes PodSecurityPolicies (PSPs). This is incompatible with Kubernetes >= v1.25.
INFO:   Use --enable-pod-security-policies=false to disable PodSecurityPolicies.
INFO:   For the time being PodSecurityPolicies remain enabled by default in deployment bundles and need to be disabled explicitly for Kubernetes >= v1.25.
INFO:   Unless run in offline mode,
 StackRox Kubernetes Security Platform collects and transmits aggregated usage and system health information.
  If you want to OPT OUT from this, re-generate the deployment bundle with the '--enable-telemetry=false' flag
INFO:   Done!
INFO:   Wrote central bundle to "central-bundle"
To deploy:
  - If you need to add additional trusted CAs, run central/scripts/ca-setup.sh.
  - Deploy Central
    - Run central/scripts/setup.sh
    - Run kubectl create -R -f central

  - Deploy Scanner
     If you want to run the StackRox Scanner:
     - Run scanner/scripts/setup.sh
     - Run kubectl create -R -f scanner

PLEASE NOTE: The recommended way to deploy StackRox is by using Helm. If you have
Helm 3.1+ installed, please consider choosing this deployment route instead. For your
convenience, all required files have been written to the helm/ subdirectory, along with
a README file detailing the Helm-based deployment process.

For administrator login, select the "Login with username/password" option on
the login page, and log in with username "admin" and the password found in the
"password" file located in the same directory as this README.

This is tracked internally as ROX-12328

Installation in arbitrary namespace

I am currently facing an issue installing the stack in a namespace other than "stackrox". Due to policy enforcement we need to create namespaces with a certain prefix, "mcs-", so we need to create Central in such a namespace as well.

ROX_NAMESPACE=mcs-stackrox ROX_CENTRAL_ENDPOINT="central.mcs-stackrox.svc:443" ROX_ADVERTISED_ENDPOINT="sensor.mcs-stackrox.svc:443" ROX_SENSOR_ENDPOINT="sensor.mcs-stackrox.svc:443" ROX_SCANNER_GRPC_ENDPOINT="scanner.mcs-stackrox.svc:8443" ./roxctl central generate interactive

Enter path to the backup bundle from which to restore keys and certificates (optional):
Enter PEM cert bundle file (optional): 
Enter administrator password (default: autogenerated):
Enter orchestrator (k8s, openshift): openshift
Enter the directory to output the deployment bundle to (default: "central-bundle"):
Enter the OpenShift major version (3 or 4) to deploy on (default: "0"): 4
Enter Istio version when deploying into an Istio-enabled cluster (leave empty when not running Istio) (optional):
Enter the method of exposing Central (route, lb, np, none) (default: "none"): route 
Enter main image to use (default: "stackrox.io/main:3.0.61.1"):
Enter whether to run StackRox in offline mode, which avoids reaching out to the Internet (default: "false"):
Enter whether to enable telemetry (default: "true"):
Enter the deployment tool to use (kubectl, helm, helm-values) (default: "kubectl"):
Enter Scanner DB image to use (default: "stackrox.io/scanner-db:2.15.2"):
Enter Scanner image to use (default: "stackrox.io/scanner:2.15.2"):
Enter Central volume type (hostpath, pvc): pvc 
Enter external volume name (default: "stackrox-db"):
Enter external volume size in Gi (default: "100"):
Enter storage class name (optional if you have a default StorageClass configured):

However, the manifests all have "stackrox" in the metadata.namespace field:

balpert@omega:~/rox-debug$ grep -r "namespace: " central-bundle/central/
central-bundle/central/01-central-10-networkpolicy.yaml:  namespace: stackrox
central-bundle/central/01-central-10-networkpolicy.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-13-service.yaml:  namespace: stackrox
central-bundle/central/01-central-13-service.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-14-exposure.yaml:  namespace: stackrox
central-bundle/central/01-central-14-exposure.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-14-exposure.yaml:  namespace: stackrox
central-bundle/central/01-central-14-exposure.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-02-security.yaml:  namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    meta.helm.sh/release-namespace: stackrox
central-bundle/central/01-central-02-security.yaml:    meta.helm.sh/release-namespace: stackrox
....

With the deployment advertised in the README (oc create -R -f central), all resources end up in the wrong namespace. Also, when inspecting the certificates for the service, the SANs show stackrox again:

openssl x509 -in cert.pem -text -noout
...
            X509v3 Subject Alternative Name: 
                DNS:central.stackrox, DNS:central.stackrox.svc
...

Is there a way to deploy the stackrox central into an arbitrary namespace?

Auto-generated internal image registry on a cluster causes central to use the image registry service IP

Hi team,

we currently face an issue in our lab environment where we have

  • an OCP platform A hosting the central and scanner
  • an OCP platform B hosting a secured cluster with sensor, scanner (db), admission controller, collectors

When sensor from platform B starts sending information to platform A, it autogenerates several entries under "Platform Configuration" -> "Integrations" -> "Generic Docker Registry"


Now this causes the central to produce several error logs:

sensor/service/connection: 2022/07/26 19:08:45.687299 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: i/o timeout
sensor/service/connection: 2022/07/26 19:08:46.742447 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: connect: no route to host
sensor/service/connection: 2022/07/26 19:18:26.630443 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: i/o timeout
sensor/service/connection: 2022/07/26 19:18:27.734492 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: connect: no route to host
pkg/images/enricher: 2022/07/26 19:18:43.754329 enricher_impl.go:248: Info: Getting metadata for image image-registry.openshift-image-registry.svc:5000/mcs-lifecycle-check/openshift-hello@sha256:19b819016cd1726e8cf519e3b34069baf055ae815d8a4e5b91ab80090487b809
pkg/images/enricher: 2022/07/26 19:18:44.808807 enricher_impl.go:602: Error: Error fetching image signatures for image "image-registry.openshift-image-registry.svc:5000/mcs-lifecycle-check/openshift-hello@sha256:19b819016cd1726e8cf519e3b34069baf055ae815d8a4e5b91ab80090487b809": Get "https://image-registry.openshift-image-registry.svc:5000/v2/": http: non-successful response (status=401 body="{\"errors\":[{\"code\":\"UNAUTHORIZED\",\"message\":\"authentication required\",\"detail\":null}]}\n")

Now I have two theories and wanted to clarify here.

  1. the central tries to actually reach out to the image-registry on platform B:

sensor/service/connection: 2022/07/26 19:18:27.734492 worker_queue.go:59: Error: Error handling sensor message: error processing message from sensor error: reaching out for TLS check to 172.30.161.169:5000: dial tcp 172.30.161.169:5000: connect: no route to host

On Platform A:

balpert@omega:~/rox-debug/sensor$ oc status
In project default on server https://api.t001.otc.mcs-paas.dev:6443

svc/openshift - kubernetes.default.svc.cluster.local
svc/kubernetes - 172.30.0.1:443 -> 6443

balpert@omega:~/rox-debug/sensor$ oc get service -n openshift-image-registry image-registry
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
image-registry   ClusterIP   172.30.231.213   <none>        5000/TCP   2y263d

On platform B

balpert@omega:~/rox-debug/sensor$ oc status
In project default on server https://api.t007.otc.mcs-paas.dev:6443

svc/openshift - kubernetes.default.svc.cluster.local
svc/kubernetes - 172.30.0.1:443 -> 6443

balpert@omega:~/rox-debug/sensor$ oc get svc -n openshift-image-registry image-registry
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
image-registry   ClusterIP   172.30.161.169   <none>        5000/TCP   2y261d

But I have verified that I can reach the image-registry from an example pod on platform B in namespace stackrox:

balpert@omega:~/rox-debug/sensor$ oc rsh -n stackrox example
sh-4.4$ curl https://image-registry.openshift-image-registry.svc.cluster.local:5000 -kI
HTTP/2 200 
cache-control: no-cache
date: Tue, 26 Jul 2022 19:49:01 GMT
  2. the central is reaching out to the image registry on platform A instead.
    Now what bothers me as well is the second part of the logs above:
pkg/images/enricher: 2022/07/26 19:18:44.808807 enricher_impl.go:602: Error: Error fetching image signatures for image "image-registry.openshift-image-registry.svc:5000/mcs-lifecycle-check/openshift-hello@sha256:19b819016cd1726e8cf519e3b34069baf055ae815d8a4e5b91ab80090487b809": Get "https://image-registry.openshift-image-registry.svc:5000/v2/": http: non-successful response (status=401 body="{\"errors\":[{\"code\":\"UNAUTHORIZED\",\"message\":\"authentication required\",\"detail\":null}]}\n")

This makes me believe that central is trying to resolve image-registry.openshift-image-registry.svc but receives the service IP from platform A. I don't see any evidence for this, apart from the fact that there is no autogenerated image registry for platform A (or it is not shown under "Integrations" -> "Docker Registry").

Hopefully someone can clarify how I am supposed to set up the secured cluster to actually scan the image registry on platform B.

Best regards

Scanner localdev error tar unexpected EOF

Running scanner localdev against some test images results in the following error.

ERRO[0014] error reading "079bc5e75545bf45253ab44ce73fbd51d96fa52ee799031e60b65a82e89df662/layer.tar": EOF

After doing some digging: if I increase the tarutils maxLazyReaderBufferSize to something large enough to avoid going to disk, the tar is read successfully. I suspect there might be an issue with the disk-backed buffer.

PSP not needed on OpenShift

Helm charts 70.0.0 create PSP resources when installing on OpenShift, where PSPs are not available/needed. Suggestion: omit PSPs when an OpenShift installation is detected.

Policy image exclusion

When creating a policy in ACS (v3.71.0), is it possible to exclude images by registry or repo?

It would appear (from the "policy scope" page) that I can only do this by selecting (one or many) individual images, which are then referencing specific tags?

[screenshot]

As shown here, I think I'm limited to the specific image versions in the list:

[screenshot]

When I select the images, I cannot remove the tags to make the exclusion more generic, i.e., by image repository or at the registry level.

For example, perhaps I want a policy "don't allow root user" to be applied to everything except my image examplereg.com/rootimage and I don't want this to be limited to just the current version, because future versions will need the same exemption.

Or, I might want a policy based on a specific CVSS rating threshold to apply to all images in my dev registry devexample.com/*, but I want a policy with a different CVSS rating threshold applied to my Pre-Production registry preprodexample.com/*.

Have I missed where I can do this, or is this functionality missing from ACS/Stackrox?

eBPF Probe error on Digital Ocean Kubernetes cluster

Hello, I have installed Stackrox on a DO kubernetes cluster. The collector pods are bouncing between Running and CrashLoopBackOff due to the below error.

[I 20220826 205933 CollectorConfig.cpp:149] User configured collection-method=ebpf
[I 20220826 205933 CollectorConfig.cpp:206] Afterglow is enabled
[I 20220826 205933 collector.cpp:302] Module version: 2.0.1
[I 20220826 205934 collector.cpp:329] Attempting to download eBPF probe - Candidate kernel versions:
[I 20220826 205934 collector.cpp:331] 5.10.0-0.bpo.15-amd64
[I 20220826 205934 GetKernelObject.cpp:180] Local storage does not contain collector-ebpf-5.10.0-0.bpo.15-amd64.o
[I 20220826 205934 FileDownloader.cpp:316] Fail to download /module/collector-ebpf.o.gz - Failed writing body (0 != 10)
[I 20220826 205934 FileDownloader.cpp:318] HTTP Request failed with error code '404' - HTTP Body Response: not found

[I 20220826 205935 FileDownloader.cpp:316] Fail to download /module/collector-ebpf.o.gz - Failed writing body (0 != 10)
[I 20220826 205935 FileDownloader.cpp:318] HTTP Request failed with error code '404' - HTTP Body Response: not found

..........




[W 20220826 210003 FileDownloader.cpp:332] Failed to download /module/collector-ebpf.o.gz
[W 20220826 210003 GetKernelObject.cpp:183] Unable to download kernel object collector-ebpf-5.10.0-0.bpo.15-amd64.o to /module/collector-ebpf.o.gz
[W 20220826 210003 collector.cpp:343] Error getting kernel object: collector-ebpf-5.10.0-0.bpo.15-amd64.o
[I 20220826 210003 collector.cpp:215] gRPC server=sensor.stackrox.svc:443
[I 20220826 210003 collector.cpp:357] Attempting to connect to GRPC server
[E 20220826 210003 collector.cpp:359] Unable to connect to the GRPC server.
[F 20220826 210003 collector.cpp:368] No suitable kernel object downloaded

How can I troubleshoot?

Scanner integration trivy

Today there is support for a number of registries and scanners.
Personally, I use trivy to scan my images in my CI/CD environment, and to stay consistent with the vulnerabilities already reported to my developers I would like to use trivy as an image scanner.

Trivy can be run in client-server mode (https://www.youtube.com/watch?v=tNQ-VlahtYM), and with a simple API request you can get information about the CVEs in your container.

Vulnerability scan doesn't include dependency libs (OpenSSL)

I was playing around with multiple vulnerability scanning tools in Kubernetes. While doing this, I noticed that Stackrox is not flagging dependency libraries from (OS?) packages.

Example:
We've a pod running based on Alpine alpine:v3.14. During installation the following command has been executed:
apk add --update --no-cache openssl

This will install OpenSSL version:
OpenSSL 1.1.1n 15 Mar 2022 (Library: OpenSSL 1.1.1l 24 Aug 2021)

So the cli package OpenSSL contains the latest patched version but the dependency libraries (libcrypto.so and libssl.so) are one minor version behind.

It looks like Stackrox only looks at the main package and not at the dependency libraries to determine whether there are active vulnerabilities, whereas Sysdig, for example, does find vulnerabilities for both OpenSSL 1.1.1n and 1.1.1l.

CVE-2022-0778 is present in 1.1.1l; Stackrox doesn't flag this one, Sysdig does.

More information about (OpenSSL) main vs library versions and why they are not always in line:

Here is some proof that libssl is actually on a previous version:
apk list | grep libssl libssl1.1-1.1.1l-r0 x86_64 {openssl} (OpenSSL) [installed] libssl1.1-1.1.1n-r0 x86_64 {openssl} (OpenSSL) [upgradable from: libssl1.1-1.1.1l-r0]

Some question about the product

Hello everyone,

My team is using this product on an Openshift infrastructure and it works great !

But I have some questions about this:

  • Which components or functions create temp files, and what is the average size of the created files (is there also an auto-clean function for these files)?
  • Is there a way to get access logs (like Apache access logs)?
  • (A more Red Hat-specific question) is there a way to monitor new releases of the product's images (on Red Hat registries), e.g. via an RSS feed or mail notifications?

Thank you,

~question

Sensor pod crashed

Hello,
I have been using Stackrox on k8s for approximately 1-2 weeks. Today the Sensor pod crashed (CrashLoopBackOff). The logs give me the following. Please help.

No certificates found in /usr/local/share/ca-certificates
No certificates found in /etc/pki/injected-ca-trust
main: 2022/05/10 11:38:35.261465 main.go:28: Info: Running StackRox Version: 3.69.x-569-g769804636a
kubernetes/sensor: 2022/05/10 11:38:35.265931 sensor.go:73: Info: Loaded Helm cluster configuration with fingerprint "fb12e0e60b6db042d0d52966999774256e7eb3c88aea8bc1af00694346927bae"
kubernetes/sensor: 2022/05/10 11:38:35.296520 sensor.go:91: Info: Determined deployment identification: {
"systemNamespaceId": "3582c397-0ae3-11ea-8402-067ff35c4130",
"defaultNamespaceId": "36b49dc0-0ae3-11ea-8402-067ff35c4130",
"appNamespace": "stackrox",
"appNamespaceId": "21cc6be0-3b8f-45d7-a6d8-a40360aa0db8",
"appServiceaccountId": "fb6f0e86-3634-48f4-9437-f2e8a864496b",

Condition to evaluate central deployments' readiness

Currently, stackrox has a couple of possible condition types, statuses and reasons we can use to check whether Central is healthy. These are defined in:

const (
    ConditionInitialized ConditionType = "Initialized"
    ConditionDeployed ConditionType = "Deployed"
    ConditionReleaseFailed ConditionType = "ReleaseFailed"
    ConditionIrreconcilable ConditionType = "Irreconcilable"
    StatusTrue ConditionStatus = "True"
    StatusFalse ConditionStatus = "False"
    StatusUnknown ConditionStatus = "Unknown"
    ReasonInstallSuccessful ConditionReason = "InstallSuccessful"
    ReasonUpgradeSuccessful ConditionReason = "UpgradeSuccessful"
    ReasonUninstallSuccessful ConditionReason = "UninstallSuccessful"
    ReasonInstallError ConditionReason = "InstallError"
    ReasonUpgradeError ConditionReason = "UpgradeError"
    ReasonReconcileError ConditionReason = "ReconcileError"
    ReasonUninstallError ConditionReason = "UninstallError"
)

We use GitOps through ArgoCD and implemented this custom health check for the Central CRD:
[screenshot of the custom health check]

The problem is that the InstallSuccessful ConditionReason is set before the deployments managed by Central are healthy. [screenshot]

Ideally we'd want the Central resource to only show as healthy after its deployments (and other resources it manages) are healthy as well. I would propose to extend the existing conditions with some sort of "Ready" ConditionReason for when all the other resources managed by Central are healthy.
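
A minimal sketch of the readiness check this would need, assuming the operator can list the Deployments it manages for a Central; the names and wiring are hypothetical:

package status

import appsv1 "k8s.io/api/apps/v1"

// deploymentsReady reports whether every deployment managed by Central has at
// least the desired number of available replicas. The result could then drive
// a new "Ready" condition alongside the existing InstallSuccessful reason.
func deploymentsReady(deps []appsv1.Deployment) bool {
    if len(deps) == 0 {
        return false
    }
    for _, d := range deps {
        want := int32(1)
        if d.Spec.Replicas != nil {
            want = *d.Spec.Replicas
        }
        if d.Status.AvailableReplicas < want {
            return false
        }
    }
    return true
}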

clair endpoint for integration

Which clair endpoint is supposed to be used for the integration with stackrox?

I have a clair instance running, but using a bare hostname or https://hostname in the Endpoint configuration just gives a 404.

PVC waiting for first consumer to be created before binding

The stackrox/stackrox-central-services:70.1.0 Helm chart is creating a PVC with a helm.sh/hook: pre-install,pre-upgrade. On clusters that have a StorageClass configured with WaitForFirstConsumer as volume binding mode the hook will never finish because the PV that underpins the PVC will only be created once a Deployment tries to mount it.

One example where this is happening is when using the default StorageClass on an Azure AKS cluster.

My proposal to fix this would be to remove the helm hook annotation on the PVC. Let me know if that sounds OK and I'll prepare a PR to that effect.

sensor - detector.go:607: Error: Error looking up destination entity details while running network flow policy

Hey all - I have a simple deployment of OpenShift with Stackrox and I see this in the Sensor logs:

common/detector: 2022/09/23 19:22:25.767675 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767722 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767578 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767678 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767719 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767741 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767746 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767752 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767769 detector.go:602: Error: Error looking up source entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767812 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy
common/detector: 2022/09/23 19:22:25.767975 detector.go:607: Error: Error looking up destination entity details while running network flow policy: Deployment with ID: "6114804e-b8e4-46a6-90d8-ca68b0f4e0b4" not found while trying to run network flow policy

It doesn't seem to be impacting the overall use of the sensor, but I am curious whether it is missing metrics.

Multi-Arch Image Support

First of all, congratulations on open-sourcing the project. I'm deploying Stackrox at home to play around and get used to the interface, in addition to increasing the security posture of my home-based k3s cluster. Upon installation of the platform, it would appear that there is no way for the collector daemonset to run on my arm64 nodes. From a cursory look on quay.io/stackrox-io it would appear that the images that are being built are not multi-arch images. I realize that other architectures aren't necessarily popular or provide major wins in terms of business value, however for some admins the only way to experience the software first hand is to install it on a handful of Raspberry Pis running in their basement :)

In the absence of multi-arch images, I also looked at the helm chart for secured-cluster-services and noticed that there was no way to set a nodeSelector for the collector daemonset, so there's no real way for me to prevent the collector pods from going into ImagePullBackOff when they inevitably schedule on my arm64 nodes. (This is more of a workaround for this issue, but could very well be an issue in its own right for the helm chart repo.)

Is there any interest in supporting multi-arch images moving forward? I know shoehorning multi-arch into the build process isn't always the easiest ask in the world.

thanks!

Root web url

Is there any way to change the root URL prefix for stackrox to /stackrox or something like that?

Cannot use OpenShift OAuth with OKD 4.8

Hi,

I am trying to set up Stackrox with OpenShift OAuth as an auth provider.

When I try to add the provider, I get the following error:

unable to create an auth provider instance: unable to create backend for provider id xxxx-xxx-xxxx-xxxx: failed to create dex openshiftConnector for OpenShift's OAuth Server: failed to query OpenShift endpoint: Get "https://openshift.default.svc/.well-known/oauth-authorization-server": Service Unavailable

I am not sure where StackRox gets this endpoint from; I guess it is hardcoded?

The central pod logs show nothing regarding this task.

What can I provide to solve the issue?

OKD version: 4.8
StackRox: 3.71 installed via Helm

Regards

Azure ACR integration node managed identity (NMI) support

Just like AWS has kube2iam and similar solutions (https://docs.openshift.com/acs/3.69/integration/integrate-with-image-registries.html#use-assumerole-with-ecr), Azure has AAD Pod Identity.

Instead of using a service account with an annotation, as you do in AWS, you label your pod.
Here is an example implementation of how this was solved in aquasecurity/fanal (aquasecurity/fanal#371), and indirectly in trivy.

You can find more information here:

https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-identity#use-pod-managed-identities
