kubearmor / kubearmor-client
KubeArmor CLI tool aka kArmor :robot:
License: Apache License 2.0
karmor probe
should show whether host visibility for KubeArmor is enabled.
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
(k8s.io/api, k8s.io/apiextensions-apiserver, k8s.io/apimachinery, k8s.io/cli-runtime, k8s.io/client-go)
.github/workflows/broken-link-check.yml
celinekurpershoek/link-checker v1.0.2
.github/workflows/ci-ginkgo-test.yml
actions/checkout v2
actions/setup-go v5
helm/kind-action v1.9.0
.github/workflows/ci-go.yml
actions/checkout v2
actions/setup-go v5
actions/checkout v2
actions/setup-go v5
actions/checkout v2
actions/setup-go v5
actions/checkout v2
morphy2k/revive-action v2
actions/checkout v2
actions/setup-go v5
.github/workflows/codeql-analysis.yml
actions/checkout v2
github/codeql-action v3
actions/setup-go v5
github/codeql-action v3
github/codeql-action v3
.github/workflows/release.yml
actions/checkout v2
actions/setup-go v5
goreleaser/goreleaser-action v2
.github/workflows/scorecard.yml
actions/checkout v4.1.1@b4ffde65f46336ab88eb53be808477a3936bae11
ossf/scorecard-action v2.3.1@0864cf19026789058feabb7e87baa5f140aac736
actions/upload-artifact v3@97a0fba1372883ab732affbe8f94b823f91727db
github/codeql-action v3.24.9@1b1aada464948af03b950897e5eb522f92603cc2
go.mod
go 1.21.0
go 1.21.9
github.com/blang/semver v3.5.1+incompatible
github.com/cilium/cilium v1.14.5
github.com/clarketm/json v1.17.1
github.com/docker/docker v25.0.5+incompatible
github.com/fatih/color v1.16.0
github.com/json-iterator/go v1.1.12
github.com/kubearmor/KubeArmor/protobuf v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
github.com/mholt/archiver/v3 v3.5.1
github.com/moby/term v0.5.0
github.com/olekukonko/tablewriter v0.0.5
github.com/rhysd/go-github-selfupdate v1.2.3
github.com/rs/zerolog v1.29.1
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.8.0
golang.org/x/exp v0.0.0-20240222234643-814bf88cf225@814bf88cf225
golang.org/x/mod v0.16.0
golang.org/x/sync v0.6.0
golang.org/x/sys v0.18.0
google.golang.org/grpc v1.62.1
google.golang.org/protobuf v1.33.0
sigs.k8s.io/yaml v1.4.0
github.com/accuknox/auto-policy-discovery/src v0.0.0-20230912162532-0b5b73425c5a@0b5b73425c5a
github.com/charmbracelet/bubbles v0.17.1
github.com/charmbracelet/bubbletea v0.25.0
github.com/charmbracelet/lipgloss v0.9.1
github.com/deckarep/golang-set/v2 v2.6.0
github.com/evertras/bubble-table v0.15.6
github.com/google/go-cmp v0.6.0
github.com/google/go-github v17.0.0+incompatible
github.com/kubearmor/KVMService/src/types v0.0.0-20220714130113-b0eba8c9ff34@b0eba8c9ff34
github.com/kubearmor/KubeArmor/KubeArmor v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
github.com/kubearmor/KubeArmor/deployments v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
github.com/kubearmor/KubeArmor/pkg/KubeArmorController v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
github.com/kubearmor/KubeArmor/pkg/KubeArmorOperator v0.0.0-20240313131335-9ae900daa38d@9ae900daa38d
github.com/onsi/ginkgo/v2 v2.14.0
github.com/onsi/gomega v1.30.0
helm.sh/helm/v3 v3.14.3
k8s.io/api v0.29.2
k8s.io/apiextensions-apiserver v0.29.2
k8s.io/apimachinery v0.29.2
k8s.io/cli-runtime v0.29.0
k8s.io/client-go v0.29.2
recommend/report/html/header.html
jquery 3.7.1
karmor does not currently check whether all nodes in a cluster are Ubuntu-based.
Under the current implementation, karmor deploys cert-manager and the annotation controller even when the cluster nodes run CentOS or other non-Ubuntu OSes.
Users of non-Ubuntu clusters (where the enforcer is SELinux) would not understand why they need to deploy cert-manager (including multiple k8s resources such as services, pods, roles, and a service account) and the admission controller (including pods, services, a configmap, and a service account), which have no actual use in their clusters while still consuming compute resources.
Before installing the needed resources, karmor should check whether the cluster enforcer is AppArmor (via OS type: Debian, Ubuntu, ...). If not, karmor should skip the annotation controller installation.
credit: @nam-jaehyun
More details: kubearmor/KubeArmor#671 (comment)
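The proposed pre-install check could look roughly like this. The helper below decides, from the nodes' OS-image strings (as exposed in node.Status.NodeInfo.OSImage), whether the cluster looks AppArmor-capable. The function name and the Ubuntu/Debian substring test are illustrative assumptions, not the actual karmor implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// apparmorCapable reports whether every node OS image looks like a
// distro that ships AppArmor (Debian/Ubuntu family). The distro list
// is an illustrative assumption, not an exhaustive check.
func apparmorCapable(osImages []string) bool {
	for _, img := range osImages {
		lower := strings.ToLower(img)
		if !strings.Contains(lower, "ubuntu") && !strings.Contains(lower, "debian") {
			return false
		}
	}
	return len(osImages) > 0
}

func main() {
	// Hypothetical OS images gathered from the cluster's node list.
	nodes := []string{"Ubuntu 20.04.4 LTS", "CentOS Linux 8"}
	if !apparmorCapable(nodes) {
		fmt.Println("skipping annotation controller installation")
	}
}
```

With a mixed cluster like the one above, the sketch would skip the AppArmor-only components instead of installing them unconditionally.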
karmor discover --type network
kubearmor-annotation-manager was added as part of v0.5 release of kubearmor.
CC: @achrefbensaad
CentOS 8.5 (kernel 4.18) provides BPF LSM as an enforcer and supports both observability and enforcement.
Running karmor probe
without KubeArmor installed:
Host:
Observability/Audit: Supported (Kernel Version 4.18.0)
Enforcement: Partial (Supported LSMs: capability,yama,selinux,bpf)
To have full enforcement support, apparmor must be supported
Expected
Enforcement: Full
We should also check if BPF as an LSM is available for enforcement.
== Alert / 2022-08-19 04:27:34.376499 ==
ClusterName: default
HostName: gke-core-trial-core-trail-pool-c7857dc6-nlgl
NamespaceName: demo-app
PodName: emailservice-764655647c-t84h6
Labels: app=emailservice
ContainerName: server
ContainerID: e997fdf138710f366726a6bf9fda5caff29aa4c11771349ae5860289f3dc11f3
ContainerImage: docker.io/knoxuser/emailservice:latest@sha256:09d9aace64450c47c24d7b99b4c90772cf14910142efc5346d065657b6fd8929
Type: MatchedPolicy
PolicyName: DefaultPosture
Source: /bin/grpc_health_probe -addr=:8080
Resource: /bin/grpc_health_probe
Operation: File
Action: Block
Data: syscall=SYS_OPENAT fd=-100 flags=O_RDONLY|O_CLOEXEC
Enforcer: eBPF Monitor
Result: Permission denied
HostPID: 3.586496e+06
HostPPID: 2.949047e+06
PID: 171927
PPID: 2.949047e+06
ParentProcessName: /usr/bin/containerd-shim-runc-v2
ProcessName: /bin/grpc_health_probe
PID/PPID and HostPID/HostPPID need to be printed in normal integer notation.
karmor supports similar options in multiple commands, but the option names differ across commands.
karmor discover uses --policy cilium|kubearmor
while karmor insight uses --source system|network
... Ideally we should have a consistent option name across these commands, such as --class application|network.
For karmor insight the supported option is --rule string
... However, there is no example of how to use this option.
--grpc should be used in place of --gRPC.
karmor insight uses the --type field to specify ingress/egress rule filtering. This should be called --ruletype. Btw, --rule and --type seem to overlap.

❯ karmor discover --help
Discover applicable policies
Usage:
karmor discover [flags]
Flags:
-c, --clustername string Filter by Clustername
-f, --format string Format: json or yaml (default "json")
-s, --fromsource string Filter by policy FromSource
--gRPC string gRPC server information
-h, --help help for discover
-l, --labels string Filter by policy Label
-n, --namespace string Filter by Namespace
-p, --policy string Type of policies to be discovered: cilium or kubearmor (default "kubearmor")
❯ karmor insight --help
Policy insight from discovery engine
Usage:
karmor insight [flags]
Flags:
--clustername string Filter according to the Cluster name
--containername string Filter according to the Container name
--fromsource string Filter according to the source path
--gRPC string gRPC server information
-h, --help help for insight
--labels string Labels for resources
-n, --namespace string Namespace for resources
--rule string NW packet Rule
--source string The DB for insight : system|network|all (default "all")
--type string NW packet type : ingress|egress
karmor currently supports commands for both K8s and VM cases.
We need additional commands to enhance these.
If there are any more suggestions, please add them to the above list.
Ref kubearmor/KubeArmor#485
Follow up kubearmor/KubeArmor#486
The current karmor install
command output looks bleak; we need to jazz up the installation experience with some emojis and perhaps animations as well.
We currently just issue Kubernetes API calls and leave checking the status of the installation to the user. We should wait for the various installations to succeed and only exit once KubeArmor is running. This wait time also gives us some room to play animations for a better experience.
Work Items
install
subcommand

➜  ~ karmor summary
Error: rpc error: code = Unimplemented desc = unknown service v1.observability.Observability
When I run the karmor summary
command, that is the output.
Also, port-forward is working fine.
➜  ~ kubectl port-forward -n explorer service/knoxautopolicy --address 0.0.0.0 --address :: 9089:9089 &
[1] 25901
➜  ~ Forwarding from 0.0.0.0:9089 -> 9089
Forwarding from [::]:9089 -> 9089
Handling connection for 9089
But I still get this error.
The following filtering options are needed with karmor:
--since=1h
--namespace=default
--log=hostlog/containerlog
--operation=process/file/network
--limit=n
... where n is a positive integer
-l | --selector: e.g. karmor log --logFilter all --json --selector "app: checkoutservice,name=xyz" --selector "app: emailservice"
... Check if we can use regex filters as well. Sample:
karmor log --namespace "explorer\|default"
karmor log --namespace "expl.*"
Syntax should be similar to k8s kubectl syntax wherever applicable.
karmor log --logFilter all --json --selector "app: checkoutservice,name=xyz" --selector "app: emailservice"
If multiple --selector | -l
options are present, they should be combined as an or
clause.
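The regex namespace samples above can be sketched with Go's standard regexp package; note that Go's RE2 syntax uses a plain | alternation rather than the sed-style escaped \| in the sample. The matchNamespace helper and its full-string anchoring are assumptions for illustration.

```go
package main

import (
	"fmt"
	"regexp"
)

// matchNamespace reports whether ns matches the user-supplied filter,
// treated as an anchored regular expression (an assumption for this
// sketch; kubectl-style glob matching would be another option).
func matchNamespace(filter, ns string) (bool, error) {
	re, err := regexp.Compile("^(" + filter + ")$")
	if err != nil {
		return false, err
	}
	return re.MatchString(ns), nil
}

func main() {
	for _, ns := range []string{"explorer", "default", "kube-system"} {
		ok, _ := matchNamespace("expl.*|default", ns)
		fmt.Println(ns, ok)
	}
}
```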
Discovery-engine is an observability and automated security posture identification engine that works on top of KubeArmor visibility.
Currently, the installation of discovery-engine has to be handled separately. It would be good to install discovery-engine as part of karmor install
.
Note that the deployment of the discovery-engine has to happen using the existing deployment yaml of the discovery-engine without having to rewrite deployment config in kubearmor-client.
We currently don't terminate our log watchers once we get an EOF from KubeArmor; we have to manually send SIGKILL or another relevant signal to stop the process. We should auto-exit the watcher and terminate the process once we receive the EOF.
Hi, if this is the first time you are contributing to KubeArmor, follow these steps:
Write a comment in this issue thread to let other possible contributors know that you are working on this bug, e.g. "Hey all, I would like to work on this issue."
Check out the Contributing Guide, and feel free to ask anything related to this issue in this thread or on our Slack channel.
The recent cilium dependency introduced in #45 is causing issues cross-compiling kArmor for the Windows platform.
Error log can be found at https://github.com/kubearmor/kubearmor-client/runs/5549174379?check_suite_focus=true
To reproduce the issue run:
env GOOS=windows CGO_ENABLED=0 go build -o karmor
Current Ginkgo test for karmor recommend
is only checking for the creation of policy files.
Run karmor as part of the cluster to download the installation script for each VM configured in kvmsoperator.
This can be run n times to download the script.
The script takes the configured VM name as an argument and writes the resulting script to the specified file.
/etc/os-release
and provide additional precondition options for distros and rules.

apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: wordpress-wordpress-4-8-apache-block-serviceaccount-runtime
namespace: wordpress-mysql
spec:
action: Block
file:
matchDirectories:
- dir: /var/run/secrets/kubernetes.io/serviceaccount/
recursive: true
- dir: /run/secrets/kubernetes.io/serviceaccount/
recursive: true
message: serviceaccount access blocked
selector:
matchLabels:
app: wordpress
severity: 1
tags:
- KUBERNETES
- SERVICE ACCOUNT
- RUNTIME POLICY
Why are there two directories for service account in the policy?
karmor version: 0.9.5
INFO[0045] dumped image to tar tar=/tmp/karmor2162624332/wbdoJOYh.tar
created policy out/kubernetes-dashboard-kubernetes-dashboard/kubernetesui-dashboard-v2.6.1-password-protect.yaml ...
INFO[0046] pulling image image="quay.io/cilium/operator-generic:v1.11.3@sha256:5b81db7a32cb7e2d00bb3cf332277ec2b3be239d9e94a8d979915f4e6648c787"
quay.io/cilium/operator-generic@sha256:5b81db7a32cb7e2d00bb3cf332277ec2b3be239d9e94a8d979915f4e6648c787: Pulling from cilium/operator-generic
bc877eec10d7: Pull complete
78ea17f4e2e5: Pull complete
508c65bb69fc: Pull complete
34887728791f: Pull complete
Digest: sha256:5b81db7a32cb7e2d00bb3cf332277ec2b3be239d9e94a8d979915f4e6648c787
Status: Downloaded newer image for quay.io/cilium/operator-generic@sha256:5b81db7a32cb7e2d00bb3cf332277ec2b3be239d9e94a8d979915f4e6648c787
INFO[0055] dumped image to tar tar=/tmp/karmor1431945542/JspJTTYI.tar
panic: interface conversion: interface {} is nil, not []interface {}
goroutine 1 [running]:
github.com/kubearmor/kubearmor-client/recommend.(*ImageInfo).readManifest(0xc00090c210, {0xc000834000?, 0xc000212b00?})
/home/runner/work/kubearmor-client/kubearmor-client/recommend/imageHandler.go:280 +0x838
github.com/kubearmor/kubearmor-client/recommend.(*ImageInfo).getImageInfo(0xc00090c210)
/home/runner/work/kubearmor-client/kubearmor-client/recommend/imageHandler.go:387 +0x22d
github.com/kubearmor/kubearmor-client/recommend.getImageDetails({0xc0004554b0, 0xb}, {0xc000455490, 0xf}, 0xc000161ad0, {0xc000d79b90, 0x6f})
/home/runner/work/kubearmor-client/kubearmor-client/recommend/imageHandler.go:407 +0x1cd
github.com/kubearmor/kubearmor-client/recommend.imageHandler({0xc0004554b0, 0xb}, {0xc000455490, 0xf}, 0x4172d3?, {0xc000d79b90, 0x6f})
/home/runner/work/kubearmor-client/kubearmor-client/recommend/imageHandler.go:423 +0x1ae
github.com/kubearmor/kubearmor-client/recommend.handleDeployment({{0xc000455490, 0xf}, {0xc0004554b0, 0xb}, 0xc000161ad0, {0xc00040ffb0, 0x1, 0x1}})
/home/runner/work/kubearmor-client/kubearmor-client/recommend/recommend.go:153 +0x1a6
github.com/kubearmor/kubearmor-client/recommend.Recommend(0xc0005a7400, {{0x3071230, 0x0, 0x0}, {0x3071230, 0x0, 0x0}, {0x0, 0x0, 0x0}, ...})
/home/runner/work/kubearmor-client/kubearmor-client/recommend/recommend.go:136 +0x3fc
github.com/kubearmor/kubearmor-client/cmd.glob..func11(0x301c380?, {0x1de561f?, 0x0?, 0x0?})
/home/runner/work/kubearmor-client/kubearmor-client/cmd/recommend.go:19 +0x58
github.com/spf13/cobra.(*Command).execute(0x301c380, {0x3071230, 0x0, 0x0})
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:856 +0x67c
github.com/spf13/cobra.(*Command).ExecuteC(0x301c600)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:974 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:902
github.com/kubearmor/kubearmor-client/cmd.Execute()
/home/runner/work/kubearmor-client/kubearmor-client/cmd/root.go:49 +0x25
main.main()
/home/runner/work/kubearmor-client/kubearmor-client/main.go:10 +0x17
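The panic is an unchecked interface conversion of a nil value to []interface{}; the comma-ok assertion form avoids it. The Layers key below is a hypothetical manifest field for illustration, not necessarily the one at imageHandler.go:280.

```go
package main

import "fmt"

// layersFrom safely extracts a []interface{} field from a decoded
// manifest map. The unchecked form m["Layers"].([]interface{})
// panics when the key is missing or nil, matching the trace above.
func layersFrom(m map[string]interface{}) ([]interface{}, error) {
	v, ok := m["Layers"].([]interface{}) // comma-ok: no panic on nil
	if !ok {
		return nil, fmt.Errorf("manifest has no Layers array")
	}
	return v, nil
}

func main() {
	_, err := layersFrom(map[string]interface{}{}) // missing field
	fmt.Println(err) // prints an error instead of panicking
}
```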
Currently we default to an AppArmor-based deployment in any environment.
We should also autodetect the underlying LSM on the node and install based on that.
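On Linux, the active LSMs are listed comma-separated in /sys/kernel/security/lsm (the same list the probe output above shows). A minimal detection sketch; the preference order is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pickEnforcer chooses an enforcer from the comma-separated contents
// of /sys/kernel/security/lsm. The preference order (bpf, then
// apparmor, then selinux) is an assumption for this sketch.
func pickEnforcer(lsmList string) string {
	active := map[string]bool{}
	for _, l := range strings.Split(strings.TrimSpace(lsmList), ",") {
		active[l] = true
	}
	for _, candidate := range []string{"bpf", "apparmor", "selinux"} {
		if active[candidate] {
			return candidate
		}
	}
	return "none"
}

func main() {
	data, err := os.ReadFile("/sys/kernel/security/lsm")
	if err != nil {
		fmt.Println("could not detect LSMs:", err)
		return
	}
	fmt.Println("enforcer:", pickEnforcer(string(data)))
}
```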
Dump List:
Utility to analyse configuration and supported KubeArmor features in the current environment
List of things to probe into
Host Security Polices
Pod Security Policies
Audit Mode
Enforcement Mode
Need a probe-check to see if the kernel headers are present on the host (ref: issue "module kheaders not found in modules.dep / Unable to find kernel headers").
Check which pods are being handled by KubeArmor
Check which policies are applied to which pods!
karmor report --image "homeassistant:latest" --yamldir "policies" --output report.pdf
... yamldir will contain the set of recommended policies
/usr/share/ca-certificates
/etc/ssl/
recursive: true
Block process execution
/usr/sbin/update-ca-certificates
namespace
deployment/workload
Application
1. Description: audit access to /sbin/
Reason: sbin contains maintenance tools .... (tooltip)
...
3.
karmor report --k8s-manifest deployment.yaml
karmor report -n namespace -l "app=wordpress"
future
json => html/markdown/stdout => pdf
Look at the output of https://github.com/aquasecurity/trivy
[How do we want to structure the final report?]
[common way to report an output]
Reporting API
[image scanner]
[k8s scanner]
[runtime scanner]
Meta Issue for Planning/Discussing kArmor Features
Feature Set:-
kArmor only has basic CI checks and unit tests, covering the logs
module. We should have CI for individual modules that verifies they work as expected, e.g. leveraging install
to install KubeArmor on a sample cluster successfully, and other modules accordingly.
We already test some of our modules in the KubeArmor Ginkgo tests, but those only exercise the released version of kArmor and are not run on each update.
Also, this would give us more confidence while reviewing/merging pull requests.
Specific scenarios to consider:
karmor recommend for a namespace that does not have any deployments
karmor profile
karmor log is starting to watch for alerts even if the namespace doesn't exist in the cluster.
$ kubectl get ns
NAME STATUS AGE
default Active 9d
kube-node-lease Active 9d
kube-public Active 9d
kube-system Active 9d
wordpress-mysql Active 6m9s
$ karmor log --namespace wordpress-demo
gRPC server: localhost:32767
Created a gRPC client (localhost:32767)
Checked the liveness of the gRPC server
Started to watch alerts
The namespace wordpress-demo
does not exist, but karmor
is looking for alerts in this case. karmor
should check the namespaces first; only after verifying that the namespace exists should it look for alerts.

Go 1.18 introduced a new debug/buildinfo
package, which should eliminate the need to manually embed version info using build flags.
Let's use it for our versioning information
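A minimal sketch of version reporting via the embedded build info (runtime/debug.ReadBuildInfo, available since Go 1.18); the fallback string is an assumption:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// version derives the module version from the build info that the Go
// toolchain embeds automatically, replacing -ldflags "-X ..." hacks.
func version() string {
	bi, ok := debug.ReadBuildInfo()
	if !ok || bi.Main.Version == "" {
		return "unknown" // fallback when no build info is embedded
	}
	return bi.Main.Version
}

func main() {
	fmt.Println("karmor version", version())
}
```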
Currently, karmor sysdump
works only for k8s deployments. It could also be made to work on systemd deployments:
collect logs from journalctl
and push them to the archive.

karmor currently installs KubeArmor in the kube-system namespace by default.
There should be an option to use a different namespace.
When we run karmor install
there are extra spaces at some places in the output.
$ karmor install
Auto Detected Environment : generic
CRD kubearmorpolicies.security.kubearmor.com
CRD kubearmorhostpolicies.security.kubearmor.com
Service Account
Cluster Role Bindings
KubeArmor Relay Service
KubeArmor Relay Deployment
KubeArmor DaemonSet kubearmor/kubearmor:stable -gRPC=32767 -logPath=/tmp/kubearmor.log -enableKubeArmorHostPolicy
KubeArmor Policy Manager Service
KubeArmor Policy Manager Deployment
KubeArmor Host Policy Manager Service
KubeArmor Host Policy Manager Deployment
KubeArmor Annotation Controller TLS certificates
KubeArmor Annotation Controller Deployment
KubeArmor Annotation Controller Service
KubeArmor Annotation Controller Mutation Admission Registration
Done Installing KubeArmor
Done Checking , ALL Services are running!
Execution Time : 1m12.623688116s
Currently, when someone uses karmor log
, karmor discover
, or karmor summary
, they get an error message telling them to do port-forwarding.
Wondering if we should do port-forwarding internally in kubearmor-client without throwing an error.
Cases to handle:
The selector labels construct in a policy tells KubeArmor which pods the corresponding policy has to be applied to.
Consider wordpress-mysql example:
kubectl apply -f policy.yaml
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-wordpress-block-process
namespace: wordpress-mysql
spec:
severity: 3
selector:
matchLabels:
app: wordpress
env: production
process:
matchPaths:
- path: /usr/bin/apt
- path: /usr/bin/apt-get
action: Block
Note that the wordpress deployment does not have the label env: production
but still the policy is applied as shown by karmor probe
.
Armored Up pods :
+-----------------+---------------------------------+-----------------------------+
| NAMESPACE | NAME | POLICY |
+-----------------+---------------------------------+-----------------------------+
| wordpress-mysql | mysql-58cdf6ccf-z9hpc | |
+ +---------------------------------+-----------------------------+
| | wordpress-bf95888cb-56x8j | ksp-wordpress-block-process |
+-----------------+---------------------------------+-----------------------------+
To Reproduce
karmor probe
Expected behavior
karmor probe
should not show policy applied in the context of wordpress pods.
KubeArmor supports policy enforcement, and this enforcement is guided by two factors: the policy itself and the KubeArmor configuration (defaultPosture
, auditOnlyMode
, etc).
It is possible to put up a document explaining all the combinations, as in: if X is the policy, Y is the configuration, and we get an action (exec/fopen/network-connect etc), how will the policy react? However, using a document for this purpose seems inefficient. It can also very easily get complicated, since every configuration option has multiple values (e.g. defaultPosture
could be Audit/Block/Allow). As an example, check the discussion on this PR. It would be better if we could simulate the event/action result given the inputs. Later on, we could host a web page that depicts this in a more user-friendly way.
karmor simulate --config kubearmor.cfg --policy policy.yaml --action "exec:/bin/sleep"
karmor simulate --config kubearmor.cfg --policy policy.yaml --action "exec:/bin/bash->/bin/sleep" ... specifying sleep spawned as a child process of bash
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
name: ksp-group-1-proc-path-block
namespace: multiubuntu
spec:
selector:
matchLabels:
group: group-1
process:
matchPaths:
- path: /bin/sleep
action: Block
Assuming the command used is:
karmor simulate --config kubearmor.cfg --policy policy.yaml --action "exec:/bin/sleep"
Expected Output:
Action: Block
Telemetry Event:
== Alert ==
Cluster Name: unknown
Host Name: unknown
Namespace Name: unknown
Pod Name: unknown
Container ID: unknown
Container Name: unknown
Labels: unknown
Policy: policy.yaml
Severity: 1
Type: MatchedPolicy
Source: /bin/sleep
Operation: Process
Resource: /bin/sleep
Data: syscall=SYS_EXECVE
Action: Block
Result: Permission denied
CC: @nam-jaehyun (was his idea)
karmor summary
shows serviceaccount token access from the knoxAutoPolicy
binary, but when used with the --agg
flag this data is skipped.
karmor summary -n explorer
Pod Name knoxautopolicy-8587dfd464-mrz6b
Namespace Name explorer
Cluster Name default
Container Name knoxautopolicy
Labels container=knoxautopolicy
File Data
+-----------------+---------------------------------------------------------------------------------+-------+------------------------------+--------+
| SRC PROCESS | DESTINATION FILE PATH | COUNT | LAST UPDATED TIME | STATUS |
+-----------------+---------------------------------------------------------------------------------+-------+------------------------------+--------+
| /knoxAutoPolicy | /accuknox-obs.db | 34 | Thu Oct 6 06:24:01 UTC 2022 | Allow |
| /knoxAutoPolicy | /run/secrets/kubernetes.io/serviceaccount/..2022_10_06_06_05_10.034039894/token | 10 | Thu Oct 6 06:23:41 UTC 2022 | Allow |
| /knoxAutoPolicy | /accuknox.db | 17 | Thu Oct 6 06:24:01 UTC 2022 | Allow |
+-----------------+---------------------------------------------------------------------------------+-------+------------------------------+--------+
Ingress connections
+----------+-----------------+------------+------+-----------+--------+
| PROTOCOL | COMMAND | POD/SVC/IP | PORT | NAMESPACE | LABELS |
+----------+-----------------+------------+------+-----------+--------+
| TCPv6 | /knoxAutoPolicy | 127.0.0.1 | 9089 | | |
+----------+-----------------+------------+------+-----------+--------+
Egress connections
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
| PROTOCOL | COMMAND | POD/SVC/IP | PORT | NAMESPACE | LABELS |
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
| TCP | /knoxAutoPolicy | svc/kubernetes | 443 | default | component=apiserver,provider=kubernetes |
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
karmor summary -n explorer --agg
Pod Name knoxautopolicy-8587dfd464-mrz6b
Namespace Name explorer
Cluster Name default
Container Name knoxautopolicy
Labels container=knoxautopolicy
File Data
+-----------------+-----------------------+-------+------------------------------+--------+
| SRC PROCESS | DESTINATION FILE PATH | COUNT | LAST UPDATED TIME | STATUS |
+-----------------+-----------------------+-------+------------------------------+--------+
| /knoxAutoPolicy | | 61 | Thu Oct 6 06:24:01 UTC 2022 | Allow |
+-----------------+-----------------------+-------+------------------------------+--------+
Ingress connections
+----------+-----------------+------------+------+-----------+--------+
| PROTOCOL | COMMAND | POD/SVC/IP | PORT | NAMESPACE | LABELS |
+----------+-----------------+------------+------+-----------+--------+
| TCPv6 | /knoxAutoPolicy | 127.0.0.1 | 9089 | | |
+----------+-----------------+------------+------+-----------+--------+
Egress connections
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
| PROTOCOL | COMMAND | POD/SVC/IP | PORT | NAMESPACE | LABELS |
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
| TCP | /knoxAutoPolicy | svc/kubernetes | 443 | default | component=apiserver,provider=kubernetes |
+----------+-----------------+----------------+------+-----------+-----------------------------------------+
According to the help it should aggregate based on the destination files/folder
karmor summary -h
Discovery engine keeps the telemetry information from the policy enforcement engines and the karmor connects to it to provide this as observability data
Usage:
karmor summary [flags]
Flags:
--agg Aggregate destination files/folder path
karmor version
karmor version 0.9.9 linux/amd64 BuildDate=2022-09-29T06:37:07Z
current version is the latest
kubearmor image (running) version kubearmor/kubearmor:stable
kArmor log currently looks for KubeArmor at port 32767 to start streaming telemetry, but if the service is not port-forwarded it fails and shows the following message.
We should have built-in functionality that mimics kubectl port-forward, discovering the KubeArmor service automatically and then forwarding it.
Running sysdump utility with no KubeArmor running in k8s causes a runtime error.
Possible Solution
We probably need to validate that there are items in the KubeArmor context before we use it.
kubearmor-client/sysdump/sysdump.go
Line 226 in f90ced4
Currently, KubeArmor supports a default posture of block. This posture is configurable through command-line parameters, but these parameters are not supported by karmor.
Requirement to support:
karmor install --defaultposture audit/block
Using karmor install
for installing kubearmor causes it to be stuck in "ContainerCreating" state.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m46s default-scheduler Successfully assigned kube-system/kubearmor-7ppqr to minikube
Warning FailedMount 2m43s kubelet Unable to attach or mount volumes: unmounted volumes=[docker-sock-path], unattached volumes=[os-release-path docker-sock-path docker-storage-path kube-api-access-6244l usr-src-path lib-modules-path sys-fs-bpf-path sys-kernel-debug-path]: timed out waiting for the condition
Warning FailedMount 36s (x10 over 4m46s) kubelet MountVolume.SetUp failed for volume "docker-sock-path" : hostPath type check failed: /var/docker/docker.sock is not a socket file
Warning FailedMount 28s kubelet Unable to attach or mount volumes: unmounted volumes=[docker-sock-path], unattached volumes=[usr-src-path lib-modules-path sys-fs-bpf-path sys-kernel-debug-path os-release-path docker-sock-path docker-storage-path kube-api-access-6244l]: timed out waiting for the condition
If the corresponding yaml for minikube is used to install karmor then kubearmor installation is completed (however there is problem with kubearmor wherein it goes in CrashLoopBackOff after containers are created. See kubearmor/KubeArmor#514)
When installing KubeArmor using karmor install
, the latest CRDs are not installed.
This currently leads to newly added policy rules, like network protocol: raw
, being unsupported.
go-lint is failing because it cannot complete in time; the default timeout of 1m is too low. Please check this yaml to update the timeout.
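Assuming the linter is golangci-lint (whose run timeout defaults to 1m), the timeout can be raised in the repository's .golangci.yml; a minimal sketch:

```yaml
# .golangci.yml — raise the lint timeout from the 1m default
run:
  timeout: 5m
```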
We need better logging for the case where a client doesn't have Docker installed locally while using the karmor recommend
flags.
Right now it throws an error like this:
➜  ~ karmor recommend -l mysql
INFO[0000] pulling image image="kubearmor/kubearmor-relay-server:latest"
FATA[0000] could not pull image error="Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Expected
Tested this on:
v1.22.6+k3s1
Ubuntu 20.04.4 LTS
5.15.0-1017-gcp
containerd://1.5.9-k3s1
Prepare rules.yaml based on the policy-templates repo.
karmor recommend --update
karmor recommend should check whether the latest policy-templates are available.

Add ./karmor list-policy <containername>/<podname>/host
to view the list of currently applied policies.
Currently we back up the policy files in /opt/kubearmor/policies
; they can be used to retrieve the applied policies.
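A sketch of the proposed freshness check for karmor recommend --update: fetch the latest policy-templates release from the GitHub API (GET /repos/kubearmor/policy-templates/releases/latest) and compare its tag_name against the locally cached version. The latestTag helper and localVersion variable are illustrative assumptions; a sample payload stands in for the live call here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// latestTag extracts tag_name from a GitHub "latest release" API
// response body.
func latestTag(body []byte) (string, error) {
	var rel struct {
		TagName string `json:"tag_name"`
	}
	if err := json.Unmarshal(body, &rel); err != nil {
		return "", err
	}
	return rel.TagName, nil
}

func main() {
	// Sample payload standing in for a live API response.
	sample := []byte(`{"tag_name":"v0.2.0"}`)
	tag, err := latestTag(sample)
	if err != nil {
		fmt.Println("could not check policy-templates:", err)
		return
	}
	localVersion := "v0.1.0" // hypothetical cached templates version
	if tag != localVersion {
		fmt.Println("newer policy-templates available:", tag)
	}
}
```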