
kubearmor-client's Introduction


KubeArmor is a cloud-native runtime security enforcement system that restricts the behavior (such as process execution, file access, and networking operations) of pods, containers, and nodes (VMs) at the system level.

KubeArmor leverages Linux security modules (LSMs) such as AppArmor, SELinux, or BPF-LSM to enforce the user-specified policies. KubeArmor generates rich alerts/telemetry events with container/pod/namespace identities by leveraging eBPF.

πŸ’ͺ Harden Infrastructure
⛓️ Protect critical paths such as cert bundles
πŸ“‹ MITRE, STIGs, CIS based rules
πŸ›… Restrict access to raw DB table
πŸ’ Least Permissive Access
πŸš₯ Process Whitelisting
πŸš₯ Network Whitelisting
πŸŽ›οΈ Control access to sensitive assets
πŸ”­ Application Behavior
🧬 Process execs, File System accesses
🧭 Service binds, Ingress, Egress connections
πŸ”¬ Sensitive system call profiling
❄️ Deployment Models
☸️ Kubernetes Deployment
πŸ‹ Containerized Deployment
πŸ’» VM/Bare-Metal Deployment

Architecture Overview

KubeArmor High Level Design

Documentation πŸ““

Contributors πŸ‘₯

Biweekly Meeting

Notice/Credits 🀝

  • KubeArmor uses Tracee's system call utility functions.

CNCF

KubeArmor is a Sandbox Project of the Cloud Native Computing Foundation.

ROADMAP

The KubeArmor roadmap is tracked via KubeArmor Projects.

kubearmor-client's People

Contributors

achrefbensaad, aishwarya25252, ankurk99, aryan-sharma11, daemon1024, delusionaloptimist, essietom, kranurag7, lekaf974, nam-jaehyun, nyrahul, prateeknandle, primalpimmy, rajasahil, renovate[bot], rksharma95, rootxrishabh, seswarrajan, sheharyaar, slayer321, stefin9898, therealsibasishbehera, tico88612, vishalrajofficial, vishnusomank, vyom-yadav, wazir-ahmed, xiao-jay, yasin-cs-ko-ak, zhy76


kubearmor-client's Issues

`karmor recommend` bugs/enhancements

  • Missing Policies in Report.
We don't have the cert-access policy in the report, but its YAML is present in the out directory.


  • Use Policy Names instead of relative paths in Table writer. The directory path can be printed as part of other details prior to the Table thereby reducing clutter.
  • Read /etc/os-release and provide additional preconditions options for distros and rules.
  • Improve the message for when there's no discovery engine. It should simply mention that there is no runtime-based recommendation instead of printing the debug error.
  • Potential speed improvement? The slowness is understandable on the first run, but subsequent calls still take a while to process and report.

cc @nyrahul @vishnusomank

segfault while using karmor recommend

karmor version: 0.9.5

INFO[0045] dumped image to tar                           tar=/tmp/karmor2162624332/wbdoJOYh.tar
created policy out/kubernetes-dashboard-kubernetes-dashboard/kubernetesui-dashboard-v2.6.1-password-protect.yaml ...
INFO[0046] pulling image                                 image="quay.io/cilium/operator-generic:v1.11.3@sha256:5b81db7a32cb7e2d00bb3cf332277ec2b3be239d9e94a8d979915f4e6648c787"
quay.io/cilium/operator-generic@sha256:5b81db7a32cb7e2d00bb3cf332277ec2b3be239d9e94a8d979915f4e6648c787: Pulling from cilium/operator-generic
bc877eec10d7: Pull complete 
78ea17f4e2e5: Pull complete 
508c65bb69fc: Pull complete 
34887728791f: Pull complete 
Digest: sha256:5b81db7a32cb7e2d00bb3cf332277ec2b3be239d9e94a8d979915f4e6648c787
Status: Downloaded newer image for quay.io/cilium/operator-generic@sha256:5b81db7a32cb7e2d00bb3cf332277ec2b3be239d9e94a8d979915f4e6648c787
INFO[0055] dumped image to tar                           tar=/tmp/karmor1431945542/JspJTTYI.tar
panic: interface conversion: interface {} is nil, not []interface {}

goroutine 1 [running]:
github.com/kubearmor/kubearmor-client/recommend.(*ImageInfo).readManifest(0xc00090c210, {0xc000834000?, 0xc000212b00?})
	/home/runner/work/kubearmor-client/kubearmor-client/recommend/imageHandler.go:280 +0x838
github.com/kubearmor/kubearmor-client/recommend.(*ImageInfo).getImageInfo(0xc00090c210)
	/home/runner/work/kubearmor-client/kubearmor-client/recommend/imageHandler.go:387 +0x22d
github.com/kubearmor/kubearmor-client/recommend.getImageDetails({0xc0004554b0, 0xb}, {0xc000455490, 0xf}, 0xc000161ad0, {0xc000d79b90, 0x6f})
	/home/runner/work/kubearmor-client/kubearmor-client/recommend/imageHandler.go:407 +0x1cd
github.com/kubearmor/kubearmor-client/recommend.imageHandler({0xc0004554b0, 0xb}, {0xc000455490, 0xf}, 0x4172d3?, {0xc000d79b90, 0x6f})
	/home/runner/work/kubearmor-client/kubearmor-client/recommend/imageHandler.go:423 +0x1ae
github.com/kubearmor/kubearmor-client/recommend.handleDeployment({{0xc000455490, 0xf}, {0xc0004554b0, 0xb}, 0xc000161ad0, {0xc00040ffb0, 0x1, 0x1}})
	/home/runner/work/kubearmor-client/kubearmor-client/recommend/recommend.go:153 +0x1a6
github.com/kubearmor/kubearmor-client/recommend.Recommend(0xc0005a7400, {{0x3071230, 0x0, 0x0}, {0x3071230, 0x0, 0x0}, {0x0, 0x0, 0x0}, ...})
	/home/runner/work/kubearmor-client/kubearmor-client/recommend/recommend.go:136 +0x3fc
github.com/kubearmor/kubearmor-client/cmd.glob..func11(0x301c380?, {0x1de561f?, 0x0?, 0x0?})
	/home/runner/work/kubearmor-client/kubearmor-client/cmd/recommend.go:19 +0x58
github.com/spf13/cobra.(*Command).execute(0x301c380, {0x3071230, 0x0, 0x0})
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:856 +0x67c
github.com/spf13/cobra.(*Command).ExecuteC(0x301c600)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:974 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:902
github.com/kubearmor/kubearmor-client/cmd.Execute()
	/home/runner/work/kubearmor-client/kubearmor-client/cmd/root.go:49 +0x25
main.main()
	/home/runner/work/kubearmor-client/kubearmor-client/main.go:10 +0x17

kArmor port-forward

kArmor log currently looks for KubeArmor at 32767 to start streaming telemetry, but if the service is not port-forwarded it fails and shows the following message:

We should have built-in functionality that mimics kubectl port-forward: discover the KubeArmor service automatically and then forward it.

Observability support for kArmor

  • support different build options (kubearmor, cilium, discovery)
    • show build options in the karmor version
  • Handle karmor observe #54
    • Support different filters
  • Handle karmor policy discovery (with filters)

kubearmor default posture as an input to karmor install

Currently, KubeArmor supports a default posture of block. This posture is configurable through command-line parameters, but these parameters are not supported by karmor.

Requirement to support:

karmor install --defaultposture audit/block

Prettify and Update Install Subcommand πŸ€–

The current karmor install command looks bleak; we need to jazz up the installation experience with some πŸš€ πŸ€– 😸 emojis and perhaps animations as well.

We currently just issue the Kubernetes API calls and leave it to the user to check the status of the installation. We should wait for the various components to come up and exit only once KubeArmor is running. This wait also gives us some room to play animations for a better experience πŸ˜„

Work Items

  • Wait for KubeArmor to start running before exiting the install subcommand
  • Jazz Up the Experience

check BPF-LSM for enforcement in karmor probe

CentOS 8.5 (kernel 4.18) includes BPF-LSM as an enforcer and supports both observability and enforcement.

Running karmor probe without KubeArmor installed:

Host:
	Observability/Audit: Supported (Kernel Version 4.18.0)
	Enforcement: Partial (Supported LSMs: capability,yama,selinux,bpf) 
	To have full enforcement support, apparmor must be supported

Expected
Enforcement: Full

We should also check whether bpf is available as an LSM for enforcement.

karmor summary showing incomplete data when used with --agg flag

karmor summary shows the serviceaccount token access from the knoxAutoPolicy binary, but when used with the --agg flag that data is skipped.

karmor summary -n explorer      

  Pod Name        knoxautopolicy-8587dfd464-mrz6b  
  Namespace Name  explorer                         
  Cluster Name    default                          
  Container Name  knoxautopolicy                   
  Labels          container=knoxautopolicy         

File Data
+-----------------+---------------------------------------------------------------------------------+-------+------------------------------+--------+
|   SRC PROCESS   |                              DESTINATION FILE PATH                              | COUNT |      LAST UPDATED TIME       | STATUS |
+-----------------+---------------------------------------------------------------------------------+-------+------------------------------+--------+
| /knoxAutoPolicy | /accuknox-obs.db                                                                | 34    | Thu Oct  6 06:24:01 UTC 2022 | Allow  |
| /knoxAutoPolicy | /run/secrets/kubernetes.io/serviceaccount/..2022_10_06_06_05_10.034039894/token | 10    | Thu Oct  6 06:23:41 UTC 2022 | Allow  |
| /knoxAutoPolicy | /accuknox.db                                                                    | 17    | Thu Oct  6 06:24:01 UTC 2022 | Allow  |
+-----------------+---------------------------------------------------------------------------------+-------+------------------------------+--------+


Ingress connections
+----------+-----------------+------------+------+-----------+--------+
| PROTOCOL |     COMMAND     | POD/SVC/IP | PORT | NAMESPACE | LABELS |
+----------+-----------------+------------+------+-----------+--------+
| TCPv6    | /knoxAutoPolicy | 127.0.0.1  | 9089 |           |        |
+----------+-----------------+------------+------+-----------+--------+


Egress connections
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
| PROTOCOL |     COMMAND     |   POD/SVC/IP   | PORT | NAMESPACE |                 LABELS                  |
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
| TCP      | /knoxAutoPolicy | svc/kubernetes | 443  | default   | component=apiserver,provider=kubernetes |
+----------+-----------------+----------------+------+-----------+-----------------------------------------+

karmor summary -n explorer --agg

  Pod Name        knoxautopolicy-8587dfd464-mrz6b  
  Namespace Name  explorer                         
  Cluster Name    default                          
  Container Name  knoxautopolicy                   
  Labels          container=knoxautopolicy         

File Data
+-----------------+-----------------------+-------+------------------------------+--------+
|   SRC PROCESS   | DESTINATION FILE PATH | COUNT |      LAST UPDATED TIME       | STATUS |
+-----------------+-----------------------+-------+------------------------------+--------+
| /knoxAutoPolicy |                       | 61    | Thu Oct  6 06:24:01 UTC 2022 | Allow  |
+-----------------+-----------------------+-------+------------------------------+--------+


Ingress connections
+----------+-----------------+------------+------+-----------+--------+
| PROTOCOL |     COMMAND     | POD/SVC/IP | PORT | NAMESPACE | LABELS |
+----------+-----------------+------------+------+-----------+--------+
| TCPv6    | /knoxAutoPolicy | 127.0.0.1  | 9089 |           |        |
+----------+-----------------+------------+------+-----------+--------+


Egress connections
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
| PROTOCOL |     COMMAND     |   POD/SVC/IP   | PORT | NAMESPACE |                 LABELS                  |
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
| TCP      | /knoxAutoPolicy | svc/kubernetes | 443  | default   | component=apiserver,provider=kubernetes |
+----------+-----------------+----------------+------+-----------+-----------------------------------------+

According to the help text, it should aggregate based on the destination file/folder paths.

karmor summary -h               
Discovery engine keeps the telemetry information from the policy enforcement engines and the karmor connects to it to provide this as observability data

Usage:
  karmor summary [flags]

Flags:
      --agg                Aggregate destination files/folder path
karmor version   
karmor version 0.9.9 linux/amd64 BuildDate=2022-09-29T06:37:07Z
current version is the latest
kubearmor image (running) version kubearmor/kubearmor:stable

Auto terminate Log Watcher once disconnected from KubeArmor

We currently don't terminate our log watchers once we get an EOF from KubeArmor; a SIGKILL or other relevant signal must be sent manually to stop the process. The watcher should exit automatically and terminate the process once EOF is received.



handling policy-templates

Prepare rules.yaml based on policy-templates repo.

  • prepare sample metadata.yaml
  • prepare release of policy-templates
  • karmor recommend --update
  • karmor recommend check whether latest policy-templates are available.

kubearmor policy simulator

What is the problem statement and use-case for this?

Kubearmor supports policy enforcement and this enforcement is guided by two factors:

  1. The policy itself
  2. Kubearmor configuration (such as defaultPosture, auditOnlyMode etc)

It is possible to put up a document explaining all the combinations: if X is the policy and Y is the configuration and we get an action (exec/fopen/network-connect etc), how will the policy react? However, using a document for this purpose seems inefficient, and it can very easily get complicated since every configuration option has multiple values (e.g. defaultPosture could be Audit/Block/Allow). As an example, check the discussion on this PR. It would be better to simulate the event/action result given the inputs. Later on, we can host a web page that depicts this in a more user-friendly way.

Example cli

karmor simulate --config kubearmor.cfg --policy policy.yaml --action "exec:/bin/sleep"
karmor simulate --config kubearmor.cfg --policy policy.yaml --action "exec:/bin/bash->/bin/sleep" ... specifying sleep spawned as a child process of bash
Sample policy.yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-group-1-proc-path-block
  namespace: multiubuntu
spec:
  selector:
    matchLabels:
      group: group-1
  process:
    matchPaths:
    - path: /bin/sleep
  action:
    Block

Sample run

Assuming the command used is:

karmor simulate --config kubearmor.cfg --policy policy.yaml --action "exec:/bin/sleep"

Expected Output:

Action: Block

Telemetry Event:
== Alert ==
Cluster Name: unknown
Host Name: unknown
Namespace Name: unknown
Pod Name: unknown
Container ID: unknown
Container Name: unknown
Labels: unknown
Policy: policy.yaml
Severity: 1
Type: MatchedPolicy
Source: /bin/sleep
Operation: Process
Resource: /bin/sleep
Data: syscall=SYS_EXECVE
Action: Block
Result: Permission denied

Requirements:

  • support multiple policies
  • support configuration file for specifying kubearmor cfg
  • support process based rules
  • file based rules
  • network based rules
  • show the output action
  • show the telemetry event that would be generated
  • possible to specify multiple actions
  • Support at least the following actions
    • process action: exec
    • file action: fopen
    • network action: socket, connect, accept

CC: @nam-jaehyun (was his idea)

`karmor install` doesn't install latest CRDs

When installing KubeArmor using karmor install, the latest CRDs are not installed.
This currently causes newly added policy rules, such as network protocol: raw, to be unsupported.

`karmor summary` not showing pod info

➜  ~ karmor summary
Error: rpc error: code = Unimplemented desc = unknown service v1.observability.Observability

When I run karmor summary command that is the output.

Also port-forward is working fine.

➜  ~ kubectl port-forward -n explorer service/knoxautopolicy --address 0.0.0.0 --address :: 9089:9089 &                           
[1] 25901
➜  ~ Forwarding from 0.0.0.0:9089 -> 9089
Forwarding from [::]:9089 -> 9089
Handling connection for 9089

But I still get this error.

Add option to view list of applied policies

Add ./karmor list-policy <containername>/<podname>/host to view the list of currently applied policies.
Currently we back up the policy files in /opt/kubearmor/policies; they can be used to retrieve the applied policies.

policy recommendations/reporting

Aim

  1. identify possible security gaps
  2. recommend policies based on
    1. container image
    2. k8s deployment manifest
    3. runtime data
  3. keep performance impact in mind
  4. Extending the recommendations to non-docker registries

Based on container image

karmor report --image "homeassistant:latest" --yamldir "policies" --output report.pdf
    ... yamldir will contain the set of recommended policies
  1. Audit /sbin/ ... reason
  2. disable write to /boot folder on the host
  3. Block access to following folders recursively:
        /usr/share/ca-certificates
        /etc/ssl/
        recursive: true

Block process execution

/usr/sbin/update-ca-certificates

namespace
deployment/workload
Application
1. Description: audit access to /sbin/
Reason: sbin contains maintenance tools .... (tooltip)
...
3.

Based on k8s manifest

karmor report --k8s-manifest deployment.yaml
  1. based on container images ... recommend policies
  2. based on volume mount points ...
  3. k8s secrets ... audit k8s secrets

Based on SBOM

Based on runtime behavior

karmor report -n namespace -l "app=wordpress"
  1. based on container images ... recommend policies
  2. based on volume mount points ...
  3. k8s secrets ... we see these bins accessing these k8s-secrets ...
  4. Processes -> 80% of bins never used ... 10% are risky .. audit policy recommended

Based on kube-bench/trivy

future

  1. mapping it to compliance, mitre TTPs
  2. STIG hardening

Output

json => html/markdown/stdout => pdf
Look at the output of https://github.com/aquasecurity/trivy

design

[How do we want to structure the final report?]

[common way to report an output]
Reporting API

[image scanner]
[k8s scanner]
[runtime scanner]

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Rate-Limited

These updates are currently rate-limited.

  • fix(deps): update github.com/kubearmor/kubearmor/pkg/kubearmorcontroller digest to 322dd5a
  • fix(deps): update github.com/kubearmor/kubearmor/pkg/kubearmoroperator digest to 322dd5a
  • fix(deps): update github.com/kubearmor/kubearmor/protobuf digest to 322dd5a
  • fix(deps): update github.com/kubearmor/kvmservice/src/types digest to 54a4afe
  • fix(deps): update golang.org/x/exp digest to 7f521ea
  • chore(deps): update ossf/scorecard-action action to v2.3.3
  • chore(deps): update dependency go to v1.22.4
  • chore(deps): update github/codeql-action action to v3.25.10
  • chore(deps): update helm/kind-action action to v1.10.0
  • fix(deps): update module github.com/charmbracelet/bubbletea to v0.26.4
  • fix(deps): update module github.com/fatih/color to v1.17.0
  • fix(deps): update module github.com/rs/zerolog to v1.33.0
  • fix(deps): update module golang.org/x/mod to v0.18.0
  • fix(deps): update module golang.org/x/sync to v0.7.0
  • fix(deps): update module golang.org/x/sys to v0.21.0
  • fix(deps): update module google.golang.org/grpc to v1.64.0
  • fix(deps): update module google.golang.org/protobuf to v1.34.2
  • fix(deps): update module helm.sh/helm/v3 to v3.15.2
  • chore(deps): update actions/checkout action
  • chore(deps): update actions/upload-artifact action to v4
  • chore(deps): update goreleaser/goreleaser-action action to v6
  • fix(deps): update module github.com/docker/docker to v26
  • πŸ” Create all rate-limited PRs at once πŸ”

Open

These updates have all been created already.

Detected dependencies

github-actions
.github/workflows/broken-link-check.yml
  • celinekurpershoek/link-checker v1.0.2
.github/workflows/ci-ginkgo-test.yml
  • actions/checkout v2
  • actions/setup-go v5
  • helm/kind-action v1.9.0
.github/workflows/ci-go.yml
  • actions/checkout v2
  • actions/setup-go v5
  • actions/checkout v2
  • actions/setup-go v5
  • actions/checkout v2
  • actions/setup-go v5
  • actions/checkout v2
  • morphy2k/revive-action v2
  • actions/checkout v2
  • actions/setup-go v5
.github/workflows/codeql-analysis.yml
  • actions/checkout v2
  • github/codeql-action v3
  • actions/setup-go v5
  • github/codeql-action v3
  • github/codeql-action v3
.github/workflows/release.yml
  • actions/checkout v2
  • actions/setup-go v5
  • goreleaser/goreleaser-action v2
.github/workflows/scorecard.yml
  • actions/checkout v4.1.1@b4ffde65f46336ab88eb53be808477a3936bae11
  • ossf/scorecard-action v2.3.1@0864cf19026789058feabb7e87baa5f140aac736
  • actions/upload-artifact v3@97a0fba1372883ab732affbe8f94b823f91727db
  • github/codeql-action v3.24.9@1b1aada464948af03b950897e5eb522f92603cc2
gomod
go.mod
  • go 1.21.0
  • go 1.21.9
  • github.com/blang/semver v3.5.1+incompatible
  • github.com/cilium/cilium v1.14.5
  • github.com/clarketm/json v1.17.1
  • github.com/docker/docker v25.0.5+incompatible
  • github.com/fatih/color v1.16.0
  • github.com/json-iterator/go v1.1.12
  • github.com/kubearmor/KubeArmor/protobuf v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
  • github.com/mholt/archiver/v3 v3.5.1
  • github.com/moby/term v0.5.0
  • github.com/olekukonko/tablewriter v0.0.5
  • github.com/rhysd/go-github-selfupdate v1.2.3
  • github.com/rs/zerolog v1.29.1
  • github.com/sirupsen/logrus v1.9.3
  • github.com/spf13/cobra v1.8.0
  • golang.org/x/exp v0.0.0-20240222234643-814bf88cf225@814bf88cf225
  • golang.org/x/mod v0.16.0
  • golang.org/x/sync v0.6.0
  • golang.org/x/sys v0.18.0
  • google.golang.org/grpc v1.62.1
  • google.golang.org/protobuf v1.33.0
  • sigs.k8s.io/yaml v1.4.0
  • github.com/accuknox/auto-policy-discovery/src v0.0.0-20230912162532-0b5b73425c5a@0b5b73425c5a
  • github.com/charmbracelet/bubbles v0.17.1
  • github.com/charmbracelet/bubbletea v0.25.0
  • github.com/charmbracelet/lipgloss v0.9.1
  • github.com/deckarep/golang-set/v2 v2.6.0
  • github.com/evertras/bubble-table v0.15.6
  • github.com/google/go-cmp v0.6.0
  • github.com/google/go-github v17.0.0+incompatible
  • github.com/kubearmor/KVMService/src/types v0.0.0-20220714130113-b0eba8c9ff34@b0eba8c9ff34
  • github.com/kubearmor/KubeArmor/KubeArmor v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
  • github.com/kubearmor/KubeArmor/deployments v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
  • github.com/kubearmor/KubeArmor/pkg/KubeArmorController v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
  • github.com/kubearmor/KubeArmor/pkg/KubeArmorOperator v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
  • github.com/onsi/ginkgo/v2 v2.14.0
  • github.com/onsi/gomega v1.30.0
  • helm.sh/helm/v3 v3.14.3
  • k8s.io/api v0.29.2
  • k8s.io/apiextensions-apiserver v0.29.2
  • k8s.io/apimachinery v0.29.2
  • k8s.io/cli-runtime v0.29.0
  • k8s.io/client-go v0.29.2
html
recommend/report/html/header.html
  • jquery 3.7.1


Integration Tests

kArmor only has basic CI checks and unit tests covering the logs module. We should have CI for the individual modules to verify they work as expected, e.g. using install to install KubeArmor on a sample cluster successfully, and the other modules accordingly.

Some of our modules are already exercised in the KubeArmor Ginkgo tests, but those only test released versions of kArmor and are not run on each update.
This would also give us more confidence while reviewing/merging pull requests.

Specific scenarios to consider:

  • test karmor recommend for a namespace that does not have any deployments
  • test karmor profile

Allow karmor to verify the existence of namespace before watching for alerts

karmor log is starting to watch for alerts even if the namespace doesn't exist in the cluster.

$ kubectl get ns
NAME              STATUS   AGE
default           Active   9d
kube-node-lease   Active   9d
kube-public       Active   9d
kube-system       Active   9d
wordpress-mysql   Active   6m9s

$ karmor log --namespace wordpress-demo
gRPC server: localhost:32767
Created a gRPC client (localhost:32767)
Checked the liveness of the gRPC server
Started to watch alerts
  • In the example above, there is no namespace named wordpress-demo, but karmor still watches for alerts.

Expected Behaviour

  • karmor should verify the namespace first, and only watch for alerts if the namespace exists.

Check for node OS type before installing kubearmor resources

Problem statement

karmor does not currently check whether all the nodes in a cluster are Ubuntu-based.
Under the current implementation, karmor will deploy cert-manager and the annotation controller even when the cluster nodes run CentOS or other non-Ubuntu OSes.

Users with non-Ubuntu (SELinux enforcer) clusters would not understand why they need to deploy cert-manager (including multiple k8s resources such as services, pods, roles, and a service account) and the admission controller (including multiple k8s resources such as pods, services, a configmap, and a service account), which have no actual use in their clusters while consuming computing resources.

Suggestion:

Before installing the needed resources, karmor should check whether the cluster enforcer is AppArmor (via OS type: debian, ubuntu, ...). If not, karmor should skip the annotation controller installation.

credit: @nam-jaehyun

More details: kubearmor/KubeArmor#671 (comment)

SELinux configuration

Currently we default to an AppArmor-based deployment in any environment.
We should autodetect the underlying LSM on each node and install based on that.

karmor sysdump collects profile only for one node

When karmor sysdump is run, it collects profile data for only one node.
Also, when the KubeArmor pod on the first node is not active, karmor sysdump throws an error and returns.


Expected Behavior
karmor sysdump to collect profile information from all nodes.

option names for karmor

karmor supports similar options in multiple commands, however the option names are different for different commands.

  • option for system|network, cilium|kubearmor ... currently karmor discover uses --policy cilium|kubearmor while karmor insight uses --source system|network ... Ideally we should have a consistent option name across these commands, such as --class application|network
  • in karmor insight the supported option is --rule string ... However, there is no example of how to use this option.
  • better to use --grpc in place of --gRPC
  • karmor insight uses the --type field to specify ingress/egress rule filtering. This should be called --ruletype. Btw, --rule and --type seem to overlap.
❯ karmor discover --help
Discover applicable policies

Usage:
  karmor discover [flags]

Flags:
  -c, --clustername string   Filter by Clustername
  -f, --format string        Format: json or yaml (default "json")
  -s, --fromsource string    Filter by policy FromSource
      --gRPC string          gRPC server information
  -h, --help                 help for discover
  -l, --labels string        Filter by policy Label
  -n, --namespace string     Filter by Namespace
  -p, --policy string        Type of policies to be discovered: cilium or kubearmor (default "kubearmor")
❯ karmor insight --help
Policy insight from discovery engine

Usage:
  karmor insight [flags]

Flags:
      --clustername string     Filter according to the Cluster name
      --containername string   Filter according to the Container name
      --fromsource string      Filter according to the source path
      --gRPC string            gRPC server information
  -h, --help                   help for insight
      --labels string          Labels for resources
  -n, --namespace string       Namespace for resources
      --rule string            NW packet Rule
      --source string          The DB for insight : system|network|all (default "all")
      --type string            NW packet type : ingress|egress

discovery-engine installation as part of karmor install

Discovery-engine is an observability and automated security posture identification engine that works on top of KubeArmor visibility.

Currently, the installation of discovery-engine has to be handled separately. It would be good to install discovery-engine as part of karmor install.

Note that the deployment of the discovery-engine has to happen using the existing deployment yaml of the discovery-engine without having to rewrite deployment config in kubearmor-client.

kArmor probe utility

Utility to analyse configuration and supported KubeArmor features in the current environment

List of things to probe into

  • Host Security Polices

  • Pod Security Policies

  • Audit Mode

  • Enforcement Mode

    • AppArmor
    • SELinux
    • BPF-LSM
  • Need a probe-check to see if the kernel-headers are present in the host (ref, Issue module kheaders not found in modules.dep Unable to find kernel headers.)

  • Check which pods are being handled by KubeArmor

  • Check which policies are applied to which pods!

extending filtering options for karmor

Following filtering options are needed with karmor:

  • --since=1h
  • --namespace=default
  • --log=hostlog/containerlog
  • --operation=process/file/network
  • --limit=n ... where n is a positive integer
  • use regex for filtering the data wherever strings are applicable
  • using label filters (-l | --selector): E.g. karmor log --logFilter all --json --selector "app: checkoutservice,name=xyz" --selector "app: emailservice" ... Check if we can use the regex filters as well.

Sample:

karmor log --namespace "explorer\|default"
karmor log --namespace "expl.*"

Syntax should be similar to k8s kubectl syntax where ever applicable.

Label filter

karmor log --logFilter all --json --selector "app: checkoutservice,name=xyz" --selector "app: emailservice"
if multiple --selector | -l options are present they should be considered as or clause.

Invalid apply of ksp - problem with using selector labels

Bug Report

Selector labels construct in the policy tells the kubearmor which pods the corresponding policy has to be applied on.

Consider wordpress-mysql example:

kubectl apply -f policy.yaml

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: ksp-wordpress-block-process
  namespace: wordpress-mysql
spec:
  severity: 3
  selector:
    matchLabels:
      app: wordpress
      env: production
  process:
    matchPaths:
    - path: /usr/bin/apt
    - path: /usr/bin/apt-get
  action: Block

Note that the wordpress deployment does not have the label env: production, yet the policy is shown as applied by karmor probe.

Armored Up pods : 
+-----------------+---------------------------------+-----------------------------+
|    NAMESPACE    |              NAME               |           POLICY            |
+-----------------+---------------------------------+-----------------------------+
| wordpress-mysql | mysql-58cdf6ccf-z9hpc           |                             |
+                 +---------------------------------+-----------------------------+
|                 | wordpress-bf95888cb-56x8j       | ksp-wordpress-block-process |
+-----------------+---------------------------------+-----------------------------+

To Reproduce

  1. apply the policy.yaml
  2. check karmor probe

Expected behavior

karmor probe should not show the policy as applied to the wordpress pods.

Remove extra spaces during karmor install

When we run karmor install, the output has extra spaces in some places.

$ karmor install 
πŸ˜„  Auto Detected Environment : generic                                                    
πŸ”₯  CRD kubearmorpolicies.security.kubearmor.com                                           
πŸ”₯  CRD kubearmorhostpolicies.security.kubearmor.com                                       
πŸ’«  Service Account                                                                        
βš™οΈ   Cluster Role Bindings                                                                 
πŸ›‘   KubeArmor Relay Service                                                               
πŸ›°   KubeArmor Relay Deployment                                                            
πŸ›‘   KubeArmor DaemonSetkubearmor/kubearmor:stable-gRPC=32767 -logPath=/tmp/kubearmor.log -enableKubeArmorHostPolicy  
🧐  KubeArmor Policy Manager Service                                                       
πŸ€–  KubeArmor Policy Manager Deployment                                                    
πŸ˜ƒ  KubeArmor Host Policy Manager Service                                                  
πŸ›‘   KubeArmor Host Policy Manager Deployment                                              
πŸ›‘   KubeArmor Annotation Controller TLS certificates                                      
πŸš€  KubeArmor Annotation Controller Deployment                                             
πŸš€  KubeArmor Annotation Controller Service                                                
🀩  KubeArmor Annotation Controller Mutation Admission Registration                        
πŸ₯³  Done Installing KubeArmor                                                              
πŸ₯³  Done Checking , ALL Services are running!                                                                                           
⌚️  Execution Time : 1m12.623688116s
  • Remove extra spaces so that it looks more uniform.

PID/HostPID and PPID/HostPPID values getting printed in e-notation

== Alert / 2022-08-19 04:27:34.376499 ==
ClusterName: default
HostName: gke-core-trial-core-trail-pool-c7857dc6-nlgl
NamespaceName: demo-app
PodName: emailservice-764655647c-t84h6
Labels: app=emailservice
ContainerName: server
ContainerID: e997fdf138710f366726a6bf9fda5caff29aa4c11771349ae5860289f3dc11f3
ContainerImage: docker.io/knoxuser/emailservice:latest@sha256:09d9aace64450c47c24d7b99b4c90772cf14910142efc5346d065657b6fd8929
Type: MatchedPolicy
PolicyName: DefaultPosture
Source: /bin/grpc_health_probe -addr=:8080
Resource: /bin/grpc_health_probe
Operation: File
Action: Block
Data: syscall=SYS_OPENAT fd=-100 flags=O_RDONLY|O_CLOEXEC
Enforcer: eBPF Monitor
Result: Permission denied
HostPID: 3.586496e+06
HostPPID: 2.949047e+06
PID: 171927
PPID: 2.949047e+06
ParentProcessName: /usr/bin/containerd-shim-runc-v2
ProcessName: /bin/grpc_health_probe

PID/PPID and HostPID/HostPPID need to be printed in normal integer notation.

kArmor Planning

Meta Issue for Planning/Discussing kArmor Features

Feature Set:-

  • Log Observer
    • Custom GRPC info
    • JSON format
    • Filter alerts/logs
  • Install
    • Auto Detect Environment
    • CRDs
    • Other Resources including Service Accounts, Services, Deployments, Daemonset
    • Verify Installation
    • Add custom flags
      • Version
    • Environments
      • Self Hosted
        • Docker
        • Containerd
      • Microk8s
      • Minikube
      • GCP
      • EKS
  • Uninstall
    • Uninstall based on resource names
    • Verify Uninstallation
  • Status
    • Check Installation
    • Aggregate Errors
    • #9
  • Completion
  • Usage
  • CRUD operations for policies
  • Release
    • Dockerize
    • GoReleaser for binary release
    • Installation script
      • Script runnable through curl
  • CI Workflows
    • Lint, Fmt, Gosec
    • Tagged releases
  • Make CLI Jazzy πŸš€
    • Add Emojis
    • Spinners/Animations
  • Testing
    • GitHub Workflows for checking exit code on various k8s environments
    • Smoke Test KubeArmor?

karmor install does not work with minikube

Using karmor install for installing kubearmor causes it to be stuck in "ContainerCreating" state.

Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    4m46s                 default-scheduler  Successfully assigned kube-system/kubearmor-7ppqr to minikube
  Warning  FailedMount  2m43s                 kubelet            Unable to attach or mount volumes: unmounted volumes=[docker-sock-path], unattached volumes=[os-release-path docker-sock-path docker-storage-path kube-api-access-6244l usr-src-path lib-modules-path sys-fs-bpf-path sys-kernel-debug-path]: timed out waiting for the condition
  Warning  FailedMount  36s (x10 over 4m46s)  kubelet            MountVolume.SetUp failed for volume "docker-sock-path" : hostPath type check failed: /var/docker/docker.sock is not a socket file
  Warning  FailedMount  28s                   kubelet            Unable to attach or mount volumes: unmounted volumes=[docker-sock-path], unattached volumes=[usr-src-path lib-modules-path sys-fs-bpf-path sys-kernel-debug-path os-release-path docker-sock-path docker-storage-path kube-api-access-6244l]: timed out waiting for the condition

If the corresponding YAML for minikube is used instead, the KubeArmor installation completes (however, there is a problem where KubeArmor goes into CrashLoopBackOff after the containers are created; see kubearmor/KubeArmor#514).

Command Enhancements

karmor currently supports commands for both K8s and VM use cases.
Additional commands are needed to enhance them:

  • #70
  • systemd installation for VM

If there are any more suggestions, please add them to the above list.

Add more logging to "karmor recommend -l <some-label>" when Docker is not installed on the client machine

Feature Request

More logging is needed for a client who does not have Docker installed locally while using the flags of karmor recommend.

Right now it throws an error like this:

➜  ~ karmor recommend -l mysql
INFO[0000] pulling image                                 image="kubearmor/kubearmor-relay-server:latest"
FATA[0000] could not pull image                          error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"

Expected

  • As a user, I should be shown which prerequisites are missing; in this scenario it's Docker

Tested this on:

  • Ubuntu VM with k3s
  • Node Version: v1.22.6+k3s1
  • OS: Ubuntu 20.04.4 LTS
  • Kernel: 5.15.0-1017-gcp
  • Runtime: containerd://1.5.9-k3s1

`sysdump` crashes if no KubeArmor running

Running sysdump utility with no KubeArmor running in k8s causes a runtime error.

Possible Solution
We probably need to validate that the pod list actually contains KubeArmor items before dereferencing it in

Name(pods.Items[0].Name).

List of tasks

  • sysdump should work even if kubearmor is not installed
  • sysdump should work in vagrant dev env
  • sysdump should work even if kubearmor is not working (i.e., installed but going into CrashLoopBackOff for some reasons)
  • sysdump should work in systemd mode as well

Hi! If this is the first time you are contributing to KubeArmor, follow these steps:
Write a comment in this issue thread to let other potential contributors know that you are working on this bug (e.g. "Hey all, I would like to work on this issue."), check out the Contributing Guide πŸ”₯✨, and feel free to ask anything related to this issue in this thread or on our Slack channel ✌🏽

handling k8s port-forwarding internally

Currently, when someone uses karmor log, karmor discover, or karmor summary, they are shown an error message asking them to set up port-forwarding manually.

Wondering if we should do port-forwarding internally in kubearmor-client without throwing an error.

Cases to handle:

  • if a port-forward already exists to 32767 (kubearmor-relay) or 9089 (discovery-engine), just proceed as usual
  • if a port-forward does not exist, create a port-forward routine with an unused port (in a range, e.g. 32768-32900)
  • handle it for the kubearmor-relay port
  • handle it for the discovery-engine port

Ref for port-forwarding programmatically.

Runtime serviceaccount recommended policies are incorrectly formed

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: wordpress-wordpress-4-8-apache-block-serviceaccount-runtime
  namespace: wordpress-mysql
spec:
  action: Block
  file:
    matchDirectories:
    - dir: /var/run/secrets/kubernetes.io/serviceaccount/
      recursive: true
    - dir: /run/secrets/kubernetes.io/serviceaccount/
      recursive: true
  message: serviceaccount access blocked
  selector:
    matchLabels:
      app: wordpress
  severity: 1
  tags:
  - KUBERNETES
  - SERVICE ACCOUNT
  - RUNTIME POLICY

Why are there two directories for service account in the policy?
