alcideio / rbac-tool
Rapid7 | insightCloudSec | Kubernetes RBAC Power Toys - Visualize, Analyze, Generate & Query
License: Apache License 2.0
Hello,
for my setup I use multiple kubeconfig files, e.g.
kubectl --kubeconfig test-context.yaml get ns
or kubectl --kubeconfig dev-context.yaml get ns
, each defining its own set of contexts. There may be aliases set up, such as kubectltest
or kubectldev
, to speed things up, as I use different contexts on a regular basis. The reason for not putting them into a single default kubeconfig is that the clusters get regenerated sometimes, and it is easier for me to download the current config from Rancher after an update than to try to merge them into a single file (my default homedir kubeconfig has configs for e.g. my local k8s context/cluster etc.).
I am trying to use rbac-tool
and cannot combine it with a specific kubeconfig. There is a --cluster-context
CLI switch, but it works within the current (default) contexts, and I want my specific config.
If I use kubectl --kubeconfig some.yaml rbac-tool viz
it says flags cannot be placed before plugin name: --kubeconfig
.
What am I doing wrong, and how can I make it work?
What would you like to be added:
JSON output, preferably in the following structure:
{
"User": "User",
"authorizedFor":
{
"objectName":"objectName",
"objectType":"objectType",
"Permission":"Permisson"
}
}
Why is this needed:
So it can be used in other systems to reflect permissions of users.
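A minimal sketch of emitting the proposed structure (the field names are this request's proposal, not an existing rbac-tool schema):

```python
import json

def permissions_to_json(user, object_name, object_type, permission):
    """Build the proposed JSON structure for a single authorization entry."""
    return json.dumps({
        "User": user,
        "authorizedFor": {
            "objectName": object_name,
            "objectType": object_type,
            "Permission": permission,
        },
    }, indent=2)

doc = permissions_to_json("alice", "pods", "Resource", "get")
parsed = json.loads(doc)
print(parsed["authorizedFor"]["Permission"])  # get
```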
@gadinaor The screenshot was handcrafted; however, I've worked on this feature a bit today (from the master branch, currently tagged at 1.2.1).
Unrelated: I noticed something a bit odd. The results of the lookup command differ between 1.2.0 and 1.2.1. Version 1.2.0 labels ClusterRoles as Roles when they are used in a namespaced scope (i.e. with a RoleBinding).
Is the change from 1.2.0 to 1.2.1 intentional, or a regression?
What would you like to be added:
Add a flag to generate RBAC for only namespaced or only cluster-scoped resources,
e.g. rbac-tool show --scope=cluster
or rbac-tool show --scope=namespace
Why is this needed:
To be able to grant all possible rights in a specific namespace while preventing usage of those resources in other namespaces.
This would allow more granular usage of the generated roles.
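The requested filter could look roughly like this; the namespaced flag mirrors what Kubernetes API discovery reports, but the resource records here are illustrative:

```python
# Each discovered API resource carries a "namespaced" flag; the proposed
# --scope switch would simply filter on it before generating rules.
RESOURCES = [
    {"kind": "Pod", "namespaced": True},
    {"kind": "ConfigMap", "namespaced": True},
    {"kind": "Node", "namespaced": False},
    {"kind": "ClusterRole", "namespaced": False},
]

def filter_by_scope(resources, scope):
    if scope == "namespace":
        return [r for r in resources if r["namespaced"]]
    if scope == "cluster":
        return [r for r in resources if not r["namespaced"]]
    raise ValueError(f"unknown scope: {scope}")

print([r["kind"] for r in filter_by_scope(RESOURCES, "cluster")])  # ['Node', 'ClusterRole']
```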
What happened:
I'm trying to run kubectl rbac-tool analyze
, but it is failing on all the rules that have allowedTo
in them (which is all of the default ones).
What you expected to happen:
I expect it to analyze without erroring.
How to reproduce it (as minimally and precisely as possible):
$ kubectl rbac-tool analyze
E0126 12:46:45.944039 30783 analysis.go:316] Failed to evaluate rule 'Secret Readers' - no such key: allowedTo
E0126 12:46:45.947001 30783 analysis.go:316] Failed to evaluate rule 'Workload Creators & Editors' - no such key: allowedTo
E0126 12:46:45.949413 30783 analysis.go:316] Failed to evaluate rule 'Identify Privileges Escalators - via impersonate' - no such key: allowedTo
E0126 12:46:45.952296 30783 analysis.go:316] Failed to evaluate rule 'Identify Privileges Escalators - via bind or escalate' - no such key: allowedTo
E0126 12:46:45.957127 30783 analysis.go:316] Failed to evaluate rule 'Storage & Data - Manipulate Cluster Shared Resources' - no such key: allowedTo
E0126 12:46:45.960619 30783 analysis.go:316] Failed to evaluate rule 'Networking - Manipulate Networking and Network Access related resources' - no such key: allowedTo
E0126 12:46:45.963584 30783 analysis.go:316] Failed to evaluate rule 'Installing or Modifying Admission Controllers' - no such key: allowedTo
E0126 12:46:45.967046 30783 analysis.go:316] Failed to evaluate rule 'Installing or Modifying Cluster Extensions (CRDs)' - no such key: allowedTo
E0126 12:46:45.973053 30783 analysis.go:316] Failed to evaluate rule 'Open Policy Agent (OPA) GateKeeper Administration' - no such key: allowedTo
AnalysisConfigInfo:
Description: Rapid7 InsightCloudSec default RBAC analysis rules
Name: InsightCloudSec
Uuid: 9371719c-1031-468c-91ed-576fdc9e9f59
CreatedOn: "2022-01-26T12:46:45+09:00"
Findings: []
Stats:
ExclusionCount: 0
RuleCount: 9
Anything else we need to know?:
I also tried it on a vanilla 1.22 cluster and it worked, so I think this might be related to the cluster version. I'm not aware of a change to the RBAC model between those versions that would cause this, but of course I might have missed something.
Environment:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.20) exceeds the supported minor version skew of +/-1
What would you like to be added:
Why is this needed:
Reduce over-permissive RBAC policies (star syndrome).
What happened:
I don't have access to PSPs. I ran viz with showpsp=false and it still failed with an error about my lack of access to PSPs.
What you expected to happen:
viz should work as normal in the above scenario.
How to reproduce it (as minimally and precisely as possible):
Use viz when your user doesn't have PSP access.
Anything else we need to know?:
PR with fix here #51
Environment:
kubectl version: 1.23.0 client, 1.21.10 server
What would you like to be added:
Add regex based lookup of *roles attached to subject (user/group/serviceaccount), by specifying the subject, and showing in a table the attached roles/clusterroles
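A sketch of the requested regex lookup; the flattened (subject, role) records are made up for illustration:

```python
import re

# Hypothetical (subject, attached role) pairs as a lookup table might hold them.
BINDINGS = [
    ("User:alice", "ClusterRole/cluster-admin"),
    ("ServiceAccount:kube-system/default", "Role/kube-system/kube-proxy"),
    ("Group:system:masters", "ClusterRole/cluster-admin"),
]

def lookup(pattern):
    """Return the roles attached to every subject matching the regex."""
    rx = re.compile(pattern)
    return [(subj, role) for subj, role in BINDINGS if rx.search(subj)]

print(lookup(r"^ServiceAccount:"))
```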
What happened:
rbac-tool fails to build for modern k8s versions because PodSecurityPolicy has been deprecated and removed from the k8s library.
What you expected to happen:
rbac-tool compiles when using a supported k8s library version.
How to reproduce it (as minimally and precisely as possible):
Upgrade the k8s client to v1.28 and try to build.
Anything else we need to know?:
Environment:
kubectl version:
In the command below, on a large cluster there can be many ServiceAccounts named default with different permissions, and the current policy-rules
command does not show which namespace each of these service accounts belongs to. Here is an example:
rbac-tool policy-rules -e default
TYPE | SUBJECT | VERBS | NAMESPACE | API GROUP | KIND | NAMES | NONRESOURCEURI | ORIGINATED FROM
-----------------+-------------------------------------------------+--------+-------------+---------------------+---------------------------------------+-----------------------------------------------------------------+----------------+---------------------------------------------------------------------------------
Group | system:bootstrappers:kubeadm:default-node-token | create | * | certificates.k8s.io | certificatesigningrequests | | | ClusterRoles>>system:node-bootstrapper
Group | system:bootstrappers:kubeadm:default-node-token | create | * | certificates.k8s.io | certificatesigningrequests/nodeclient | | | ClusterRoles>>system:certificates.k8s.io:certificatesigningrequests:nodeclient
Group | system:bootstrappers:kubeadm:default-node-token | get | * | certificates.k8s.io | certificatesigningrequests | | | ClusterRoles>>system:node-bootstrapper
Group | system:bootstrappers:kubeadm:default-node-token | get | * | core | nodes | | | ClusterRoles>>kubeadm:get-nodes
Group | system:bootstrappers:kubeadm:default-node-token | get | kube-system | core | configmaps | kube-proxy | | Roles>>kube-system/kube-proxy
Group | system:bootstrappers:kubeadm:default-node-token | get | kube-system | core | configmaps | kubeadm-config | | Roles>>kube-system/kubeadm:nodes-kubeadm-config
| | | | | | | |
Group | system:bootstrappers:kubeadm:default-node-token | get | kube-system | core | configmaps | kubelet-config | | Roles>>kube-system/kubeadm:kubelet-config
| | | | | | | |
Group | system:bootstrappers:kubeadm:default-node-token | list | * | certificates.k8s.io | certificatesigningrequests | | | ClusterRoles>>system:node-bootstrapper
Group | system:bootstrappers:kubeadm:default-node-token | watch | * | certificates.k8s.io | certificatesigningrequests | | | ClusterRoles>>system:node-bootstrapper
ServiceAccount | default | * | * | | | | * | ClusterRoles>>cluster-admin
ServiceAccount | default | * | * | * | * | | | ClusterRoles>>cluster-admin
ServiceAccount | default | create | olm | core | configmaps | 5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155 | | Roles>>olm/5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155
| | | | | | | |
ServiceAccount | default | get | olm | core | configmaps | 5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155 | | Roles>>olm/5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155
| | | | | | | |
ServiceAccount | default | update | olm | core | configmaps | 5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155 | | Roles>>olm/5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155
| | | | | | | |
There are two default
service accounts in two different namespaces, and the user interface does not display this information. This is confusing.
The implemented feature will display the SUBJECT
in the following format: "namespace:serviceAccountName".
Here is an example for the same cluster:
rbac-tool policy-rules -e default
TYPE | SUBJECT | VERBS | NAMESPACE | API GROUP | KIND | NAMES | NONRESOURCEURI | ORIGINATED FROM
-----------------+-------------------------------------------------+--------+-------------+---------------------+---------------------------------------+-----------------------------------------------------------------+----------------+---------------------------------------------------------------------------------
Group | system:bootstrappers:kubeadm:default-node-token | create | * | certificates.k8s.io | certificatesigningrequests | | | ClusterRoles>>system:node-bootstrapper
Group | system:bootstrappers:kubeadm:default-node-token | create | * | certificates.k8s.io | certificatesigningrequests/nodeclient | | | ClusterRoles>>system:certificates.k8s.io:certificatesigningrequests:nodeclient
Group | system:bootstrappers:kubeadm:default-node-token | get | * | certificates.k8s.io | certificatesigningrequests | | | ClusterRoles>>system:node-bootstrapper
Group | system:bootstrappers:kubeadm:default-node-token | get | * | core | nodes | | | ClusterRoles>>kubeadm:get-nodes
Group | system:bootstrappers:kubeadm:default-node-token | get | kube-system | core | configmaps | kube-proxy | | Roles>>kube-system/kube-proxy
Group | system:bootstrappers:kubeadm:default-node-token | get | kube-system | core | configmaps | kubeadm-config | | Roles>>kube-system/kubeadm:nodes-kubeadm-config
| | | | | | | |
Group | system:bootstrappers:kubeadm:default-node-token | get | kube-system | core | configmaps | kubelet-config | | Roles>>kube-system/kubeadm:kubelet-config
| | | | | | | |
Group | system:bootstrappers:kubeadm:default-node-token | list | * | certificates.k8s.io | certificatesigningrequests | | | ClusterRoles>>system:node-bootstrapper
Group | system:bootstrappers:kubeadm:default-node-token | watch | * | certificates.k8s.io | certificatesigningrequests | | | ClusterRoles>>system:node-bootstrapper
ServiceAccount | monitoring:default | * | * | | | | * | ClusterRoles>>cluster-admin
ServiceAccount | monitoring:default | * | * | * | * | | | ClusterRoles>>cluster-admin
ServiceAccount | olm:default | create | olm | core | configmaps | 5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155 | | Roles>>olm/5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155
| | | | | | | |
ServiceAccount | olm:default | get | olm | core | configmaps | 5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155 | | Roles>>olm/5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155
| | | | | | | |
ServiceAccount | olm:default | update | olm | core | configmaps | 5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155 | | Roles>>olm/5e5932a6bfa63515cdf4466e9d3d1442f14b290645ba0ee54de32b5c67d5155
| | | | | | | |
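The disambiguation itself is just a display change; a possible sketch (not the actual implementation):

```python
def display_subject(kind, name, namespace=None):
    """Render SUBJECT as 'namespace:name' for ServiceAccounts, plain name otherwise."""
    if kind == "ServiceAccount" and namespace:
        return f"{namespace}:{name}"
    return name

print(display_subject("ServiceAccount", "default", "olm"))  # olm:default
print(display_subject("Group", "system:bootstrappers:kubeadm:default-node-token"))
```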
What happened:
I tried to redirect the output to a file with the following command, but the result was displayed on the screen and test.yaml was empty.
kubectl rbac-tool gen --deny-resources=secrets. --allowed-verbs=get,list,watch > test.yaml
However, the following command works fine.
kubectl rbac-tool gen --deny-resources=secrets. --allowed-verbs=get,list,watch 2> test.yaml
I thought that writing the result to stderr might be a bug, but there might be circumstances I'm not aware of. Why is it written to stderr?
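The observed behavior can be reproduced in a few lines: `>` captures only file descriptor 1 (stdout), while `2>` captures descriptor 2 (stderr), which is evidently where gen currently writes its output:

```python
import io
import sys
from contextlib import redirect_stdout, redirect_stderr

out, err = io.StringIO(), io.StringIO()
with redirect_stdout(out), redirect_stderr(err):
    print("kind: ClusterRole")                 # fd 1: what '>' would capture
    print("I0101 log line", file=sys.stderr)   # fd 2: what '2>' would capture

print(repr(out.getvalue()))  # 'kind: ClusterRole\n'
print(repr(err.getvalue()))  # 'I0101 log line\n'
```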
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
kubectl version:
What happened:
I installed rbac-tool
using both krew
and a binary download; both failed with the same error:
➜ ~ rbac-tool who-can
[1] 86368 killed rbac-tool who-can
➜ ~ kubectl rbac-tool who-can
[1] 86841 killed kubectl rbac-tool who-can
➜ ~ kubectl rbac-tool --help
[1] 86883 killed kubectl rbac-tool --help
➜ ~ kubectl rbac-tool --help
[1] 86922 killed kubectl rbac-tool --help
➜ ~ rbac-tool who-can
[1] 86936 killed rbac-tool who-can
What you expected to happen:
rbac-tool
should run successfully.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
kubectl version:
➜ ~ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:58:09Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
krew
and binary download.
What would you like to be added:
Make it possible to write the logs to stderr.
Why is this needed:
The logs are not separated from the output data. So to make the following command work, some filtering is required:
kubectl rbac-tool viz --outformat dot --outfile - | dot -Tpng >foo.png
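The request amounts to the conventional split of diagnostics to stderr and payload to stdout, so the pipe stays clean; a sketch (not rbac-tool's actual logging setup):

```python
import logging
import sys

# Route log records to stderr so stdout carries only the graph payload.
logging.basicConfig(stream=sys.stderr, format="%(levelname)s %(message)s")

def emit(payload):
    logging.warning("generating graph")  # diagnostic: must not pollute the pipe
    sys.stdout.write(payload)            # data: flows cleanly into `dot -Tpng`
    return payload

emit("digraph rbac {}\n")
```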
What happened:
Using the install-via-curl option, it fails to validate the checksum for rbac-tool_v1.1.1_linux_amd64:
$ curl https://raw.githubusercontent.com/alcideio/rbac-tool/master/download.sh | bash
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9331 100 9331 0 0 64798 0 --:--:-- --:--:-- --:--:-- 64798
alcideio/rbac-tool info checking GitHub for latest tag
alcideio/rbac-tool info found version: 1.1.1 for v1.1.1/linux/amd64
alcideio/rbac-tool err hash_sha256_verify checksum for '/tmp/tmp.rSW6XMKhUl/rbac-tool_v1.1.1_linux_amd64' did not verify 6916b6f609b027ccd7d6573a40f62492a84bc7445592805d6d3fc838f3e34dc4
ecdc8b365b8f9bb4303d194e777a9e7fdf3376158e3a2fb78cf7425007118a1d vs 6916b6f609b027ccd7d6573a40f62492a84bc7445592805d6d3fc838f3e34dc4
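The failing verification can be reproduced by hand; checking a release artifact presumably boils down to comparing a computed SHA-256 digest against the published one (the file below is a throwaway stand-in for the real binary):

```python
import hashlib

def sha256_of(path):
    """Compute the hex SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected):
    got = sha256_of(path)
    if got != expected:
        raise ValueError(f"checksum for {path!r} did not verify: {got} vs {expected}")
    return True

# Demonstrate against a throwaway file rather than the real release binary:
with open("demo.bin", "wb") as f:
    f.write(b"hello")
print(verify("demo.bin", sha256_of("demo.bin")))  # True
```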
What you expected to happen:
The binary should match the checksum
How to reproduce it (as minimally and precisely as possible):
See above
Anything else we need to know?:
Probably not.
Environment:
kubectl version:
What happened:
When running rbac-tool viz ... rules are not shown
What happened:
I installed
[trutledge@localhost viscrash]$ rbac-tool version
Version: 0.10.0
Commit: 35e5db8
[trutledge@localhost viscrash]$
and ran
rbac-tool vis --cluster-context MYCLUSTER
And got
[trutledge@localhost viscrash]$ rbac-tool vis --cluster-context --redact--
[alcide-rbactool] Namespaces included '*'
[alcide-rbactool] Namespaces excluded 'kube-system'
[alcide-rbactool] Connecting to cluster '--redact--'
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1261885]
goroutine 1 [running]:
github.com/alcideio/rbac-tool/pkg/visualize.(*RbacViz).newRoleAndRulesNodePair(0xc000139c80, 0xc0003c24e0, 0xc00046c510, 0x9, 0xc00051eae0, 0x19, 0xc00046c5b0, 0x4, 0xc00051eb00, 0x13, ...)
/home/runner/work/rbac-tool/rbac-tool/pkg/visualize/rbacviz.go:302 +0x1f5
github.com/alcideio/rbac-tool/pkg/visualize.(*RbacViz).renderGraph(0xc000139c80, 0xc0002ca600)
/home/runner/work/rbac-tool/rbac-tool/pkg/visualize/rbacviz.go:204 +0x425
github.com/alcideio/rbac-tool/pkg/visualize.CreateRBACGraph(0xc0002ca600, 0x2a, 0xc00013dd30)
/home/runner/work/rbac-tool/rbac-tool/pkg/visualize/rbacviz.go:38 +0xef
github.com/alcideio/rbac-tool/cmd.NewCommandVisualize.func1(0xc000318b00, 0xc0001ef540, 0x0, 0x2, 0x0, 0x0)
/home/runner/work/rbac-tool/rbac-tool/cmd/visualize_cmd.go:66 +0x1da
github.com/spf13/cobra.(*Command).execute(0xc000318b00, 0xc0001ef500, 0x2, 0x2, 0xc000318b00, 0xc0001ef500)
/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:840 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0xc000318000, 0xc000072750, 0xc00013df50, 0x40576f)
/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:945 +0x317
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:885
main.main()
/home/runner/work/rbac-tool/rbac-tool/main.go:61 +0x2b
[trutledge@localhost viscrash]$
What you expected to happen:
Not crashing.
How to reproduce it (as minimally and precisely as possible):
Unsure.
Anything else we need to know?:
The nil comes from rbac-tool/src/rbac-tool/pkg/visualize/rbacviz.go:
360 if rulesText == "" {
361 return nil
362 }
I don't have enough context to share beyond that.
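Translated out of Go, the failure mode is a helper that returns nil for empty input while the caller dereferences the result unconditionally; a sketch of the missing guard (not the actual fix in rbacviz.go):

```python
def new_rules_node(rules_text):
    """Mirrors the snippet above: returns None when there is nothing to render."""
    if rules_text == "":
        return None
    return {"label": rules_text}

def render(rules_text):
    node = new_rules_node(rules_text)
    if node is None:   # the guard whose absence leads to the nil dereference
        return "skipped"
    return node["label"]

print(render(""))          # skipped
print(render("get pods"))  # get pods
```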
Environment:
kubectl version:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.6", GitCommit:"7015f71e75f670eb9e7ebd4b5749639d42e20079", GitTreeState:"clean", BuildDate:"2019-11-13T11:11:50Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
On-premises install.
What happened:
$ kubectl krew install rbac-tool
Updated the local copy of plugin index.
Installing plugin: rbac-tool
Installed plugin: rbac-tool
\
| Use this plugin:
| kubectl rbac-tool
| Documentation:
| https://github.com/alcideio/rbac-tool
/
WARNING: You installed plugin "rbac-tool" from the krew-index plugin repository.
These plugins are not audited for security by the Krew maintainers.
Run them at your own risk.
bash-5.0$ kubectl rbac-tool
E0816 17:50:50.905695 32074 run.go:120] "command failed" err="unknown command \"rbac-tool\" for \"kubectl\""
What you expected to happen:
Expected kubectl rbac-tool
to run.
How to reproduce it (as minimally and precisely as possible):
Follow the steps I followed.
Anything else we need to know?:
Environment:
macOS Monterey 12.2
kubectl version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
[alcide-rbactool] Namespaces included '*'
[alcide-rbactool] Namespaces excluded 'kube-system'
[alcide-rbactool] Connecting to cluster ''
[alcide-rbactool] Generating Graph and Saving as 'rbac.html'
What would you like to be added:
rbac-tool viz
should add the option to read RBAC resources from files and generate a graph.
Why is this needed:
Useful for evaluating and exploring RBAC policies during the development stage.
What happened:
When running "show" against a 1.23 cluster, I've noticed some RBAC rules are duplicated.
Namely
"autoscaling" and "policy"
I took a look at why this is happening and found that essentially the same groups with different versions get iterated over.
What you expected to happen:
Groups with different versions should get merged.
How to reproduce it (as minimally and precisely as possible):
Run against a cluster that has multiple versions of the same resources.
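A sketch of the proposed merge, deduplicating discovered resources on (group, kind) rather than (group, version, kind); the records are illustrative:

```python
# Discovered resources as API discovery might return them, with the
# "autoscaling" and "policy" groups appearing once per served version.
DISCOVERED = [
    {"group": "autoscaling", "version": "v1", "kind": "HorizontalPodAutoscaler"},
    {"group": "autoscaling", "version": "v2", "kind": "HorizontalPodAutoscaler"},
    {"group": "policy", "version": "v1", "kind": "PodDisruptionBudget"},
    {"group": "policy", "version": "v1beta1", "kind": "PodDisruptionBudget"},
]

def merge_versions(resources):
    """Keep one entry per (group, kind), so per-version duplicates collapse."""
    seen = {}
    for r in resources:
        seen.setdefault((r["group"], r["kind"]), r)
    return list(seen.values())

print(len(merge_versions(DISCOVERED)))  # 2
```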
Anything else we need to know?:
Environment:
kubectl version: 1.23.13
What happened:
The generated page seems to contain data; however, the data is not visualized. Only the legend can be seen.
What you expected to happen:
Visualized rbac controls.
How to reproduce it (as minimally and precisely as possible):
./bin/rbac-tool visualize
Anything else we need to know?:
No.
Environment:
kubectl version:
What would you like to be added:
Add the option to include subresources like pods/exec
in generated RBAC files.
Why is this needed:
Sometimes you want to give people more granular permissions on certain things, and having a complete list of all available subresources in your RBAC so you can easily do so would be nice.
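In Kubernetes RBAC, pods/exec is expressed with the standard resource/subresource form in a PolicyRule; a sketch of the rule shape (the helper is illustrative, not rbac-tool's generator):

```python
# A PolicyRule granting exec on pods - "resource/subresource" is how
# Kubernetes RBAC names subresources.
rule = {
    "apiGroups": [""],
    "resources": ["pods/exec"],
    "verbs": ["create"],
}

def grants(rule, resource, verb):
    """Check whether a single rule allows a verb on a resource."""
    return resource in rule["resources"] and verb in rule["verbs"]

print(grants(rule, "pods/exec", "create"))  # True
print(grants(rule, "pods", "get"))          # False
```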
What happened:
I'm getting a segmentation fault on kubectl rbac-tool who-can create clusterrolebinding
What you expected to happen:
Print out who can create clusterrolebinding.
How to reproduce it (as minimally and precisely as possible):
not sure
Anything else we need to know?:
unexpected fault address 0x0
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x80 addr=0x0 pc=0x4631bf]
goroutine 1 [running]:
runtime.throw({0x1535804?, 0x30?})
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/panic.go:1047 +0x5d fp=0xc000510938 sp=0xc000510908 pc=0x435afd
runtime.sigpanic()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/signal_unix.go:842 +0x2c5 fp=0xc000510988 sp=0xc000510938 pc=0x44b505
aeshashbody()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1366 +0x39f fp=0xc000510990 sp=0xc000510988 pc=0x4631bf
runtime.mapiternext(0xc0004f47c0)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/map.go:936 +0x2eb fp=0xc000510a00 sp=0xc000510990 pc=0x40fe2b
runtime.mapiterinit(0x1?, 0x7?, 0x1?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/map.go:863 +0x236 fp=0xc000510a20 sp=0xc000510a00 pc=0x40faf6
reflect.mapiterinit(0x146cd00?, 0xc0001283c0?, 0x4dfdc7?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/map.go:1375 +0x19 fp=0xc000510a48 sp=0xc000510a20 pc=0x45ff99
github.com/modern-go/reflect2.(*UnsafeMapType).UnsafeIterate(...)
/home/runner/pkg/mod/github.com/modern-go/[email protected]/unsafe_map.go:112
github.com/json-iterator/go.(*sortKeysMapEncoder).Encode(0xc000432060, 0xc0000104f0, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_map.go:291 +0x236 fp=0xc000510bb8 sp=0xc000510a48 pc=0x7c37b6
github.com/json-iterator/go.(*placeholderEncoder).Encode(0x13a5e00?, 0x1767501?, 0xc00008b338?)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:332 +0x22 fp=0xc000510be0 sp=0xc000510bb8 pc=0x7bc3c2
github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc0004324e0, 0x12bd69b?, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:110 +0x56 fp=0xc000510c58 sp=0xc000510be0 pc=0x7d0ff6
github.com/json-iterator/go.(*structEncoder).Encode(0xc000432540, 0x900?, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:158 +0x765 fp=0xc000510d40 sp=0xc000510c58 pc=0x7d1a05
github.com/json-iterator/go.(*OptionalEncoder).Encode(0xc00008b320?, 0xc000130960?, 0xc000510dd0?)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_optional.go:70 +0xb0 fp=0xc000510d90 sp=0xc000510d40 pc=0x7c8b90
github.com/json-iterator/go.(*placeholderEncoder).Encode(0x13a5e00?, 0xc0004f4601?, 0xc00008b338?)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:332 +0x22 fp=0xc000510db8 sp=0xc000510d90 pc=0x7bc3c2
github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc0004e0360, 0x12ed259?, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:110 +0x56 fp=0xc000510e30 sp=0xc000510db8 pc=0x7d0ff6
github.com/json-iterator/go.(*structEncoder).Encode(0xc0004e0420, 0xc0001306c0?, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:158 +0x765 fp=0xc000510f18 sp=0xc000510e30 pc=0x7d1a05
github.com/json-iterator/go.(*placeholderEncoder).Encode(0x13a5e00?, 0x7d0801?, 0xc00008b338?)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:332 +0x22 fp=0xc000510f40 sp=0xc000510f18 pc=0x7bc3c2
github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc0004e06c0, 0x12bd603?, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:110 +0x56 fp=0xc000510fb8 sp=0xc000510f40 pc=0x7d0ff6
github.com/json-iterator/go.(*structEncoder).Encode(0xc0004e0720, 0x135e5e0?, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:158 +0x765 fp=0xc0005110a0 sp=0xc000510fb8 pc=0x7d1a05
github.com/json-iterator/go.(*sliceEncoder).Encode(0xc0003768d0, 0xc0000df448, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_slice.go:38 +0x2e4 fp=0xc000511158 sp=0xc0005110a0 pc=0x7c9644
github.com/json-iterator/go.(*structFieldEncoder).Encode(0xc0004e14d0, 0x12c3059?, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:110 +0x56 fp=0xc0005111d0 sp=0xc000511158 pc=0x7d0ff6
github.com/json-iterator/go.(*structEncoder).Encode(0xc0004e1620, 0x0?, 0xc00008b320)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_struct_encoder.go:158 +0x765 fp=0xc0005112b8 sp=0xc0005111d0 pc=0x7d1a05
github.com/json-iterator/go.(*OptionalEncoder).Encode(0xc000202f00?, 0x0?, 0x0?)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect_optional.go:70 +0xb0 fp=0xc000511308 sp=0xc0005112b8 pc=0x7c8b90
github.com/json-iterator/go.(*onePtrEncoder).Encode(0xc0003ce670, 0xc0000df3f0, 0xc0004e08d0?)
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:219 +0x82 fp=0xc000511340 sp=0xc000511308 pc=0x7bb982
github.com/json-iterator/go.(*Stream).WriteVal(0xc00008b320, {0x14098c0, 0xc0000df3f0})
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/reflect.go:98 +0x166 fp=0xc0005113b0 sp=0xc000511340 pc=0x7baca6
github.com/json-iterator/go.(*frozenConfig).Marshal(0xc000202f00, {0x14098c0, 0xc0000df3f0})
/home/runner/pkg/mod/github.com/json-iterator/[email protected]/config.go:299 +0xc9 fp=0xc000511448 sp=0xc0005113b0 pc=0x7b1f29
k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0x12a470f?, {0x175b5a0?, 0xc0000df3f0?}, {0x1752e20, 0xc00009fe30})
/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/serializer/json/json.go:305 +0x6d fp=0xc0005114e0 sp=0xc000511448 pc=0xbe9c6d
k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc0003a2aa0, {0x175b5a0, 0xc0000df3f0}, {0x1752e20, 0xc00009fe30})
/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/serializer/json/json.go:300 +0xfc fp=0xc000511540 sp=0xc0005114e0 pc=0xbe9b9c
k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc000381400, {0x175b550?, 0xc00008b260}, {0x1752e20, 0xc00009fe30})
/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/serializer/versioning/versioning.go:244 +0x946 fp=0xc0005118c8 sp=0xc000511540 pc=0xbf7b86
k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc000381400, {0x175b550, 0xc00008b260}, {0x1752e20, 0xc00009fe30})
/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/serializer/versioning/versioning.go:184 +0x106 fp=0xc000511928 sp=0xc0005118c8 pc=0xbf71e6
k8s.io/apimachinery/pkg/runtime.Encode({0x7fb15533bad8, 0xc000381400}, {0x175b550, 0xc00008b260})
/home/runner/pkg/mod/k8s.io/[email protected]/pkg/runtime/codec.go:50 +0x64 fp=0xc000511968 sp=0xc000511928 pc=0x80f164
k8s.io/client-go/tools/clientcmd.Write(...)
/home/runner/pkg/mod/k8s.io/[email protected]/tools/clientcmd/loader.go:469
k8s.io/client-go/tools/clientcmd.WriteToFile({{0x0, 0x0}, {0x0, 0x0}, {0x0, 0xc000543c20}, 0xc000543c50, 0xc000543c80, 0xc000543cb0, {0xc0005500b0, ...}, ...}, ...)
/home/runner/pkg/mod/k8s.io/[email protected]/tools/clientcmd/loader.go:422 +0xa8 fp=0xc0005119e0 sp=0xc000511968 pc=0x1019aa8
k8s.io/client-go/tools/clientcmd.ModifyConfig({0x1769560, 0xc0003a3720}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0xc000542ea0}, 0xc000542ed0, 0xc000542f00, ...}, ...)
/home/runner/pkg/mod/k8s.io/[email protected]/tools/clientcmd/config.go:291 +0xcf8 fp=0xc000512108 sp=0xc0005119e0 pc=0x1015c78
k8s.io/client-go/tools/clientcmd.(*persister).Persist(0xc0004de240, 0xc000542210)
/home/runner/pkg/mod/k8s.io/[email protected]/tools/clientcmd/config.go:374 +0x11a fp=0xc0005121f8 sp=0xc000512108 pc=0x101661a
k8s.io/client-go/plugin/pkg/client/auth/oidc.(*oidcAuthProvider).idToken(0xc00012ab10)
/home/runner/pkg/mod/k8s.io/[email protected]/plugin/pkg/client/auth/oidc/oidc.go:282 +0x966 fp=0xc0005123f8 sp=0xc0005121f8 pc=0xfe7666
k8s.io/client-go/plugin/pkg/client/auth/oidc.(*roundTripper).RoundTrip(0xc000182b10, 0xc00054c400)
/home/runner/pkg/mod/k8s.io/[email protected]/plugin/pkg/client/auth/oidc/oidc.go:200 +0x67 fp=0xc000512500 sp=0xc0005123f8 pc=0xfe69a7
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0000544e0, 0xc00054c300)
/home/runner/pkg/mod/k8s.io/[email protected]/transport/round_trippers.go:159 +0x350 fp=0xc0005125f8 sp=0xc000512500 pc=0xf52b90
net/http.send(0xc00054c200, {0x1755600, 0xc0000544e0}, {0x14d7960?, 0x4c0301?, 0x21de500?})
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:251 +0x5f7 fp=0xc0005127f0 sp=0xc0005125f8 pc=0x731f77
net/http.(*Client).send(0xc0004f8000, 0xc00054c200, {0x0?, 0xc000512898?, 0x21de500?})
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:175 +0x9b fp=0xc000512868 sp=0xc0005127f0 pc=0x7317fb
net/http.(*Client).do(0xc0004f8000, 0xc00054c200)
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:715 +0x8fc fp=0xc000512a58 sp=0xc000512868 pc=0x733b7c
net/http.(*Client).Do(...)
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:581
k8s.io/client-go/rest.(*Request).request(0xc0001484b0, {0x1767c50, 0xc00004c320}, 0x1?)
/home/runner/pkg/mod/k8s.io/[email protected]/rest/request.go:881 +0x51e fp=0xc000512c48 sp=0xc000512a58 pc=0xf7147e
k8s.io/client-go/rest.(*Request).Do(0x153570a?, {0x1767c50?, 0xc00004c320?})
/home/runner/pkg/mod/k8s.io/[email protected]/rest/request.go:954 +0xc7 fp=0xc000512cf8 sp=0xc000512c48 pc=0xf72087
k8s.io/client-go/discovery.(*DiscoveryClient).ServerGroups(0xc000054540)
/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:159 +0xae fp=0xc000512fd8 sp=0xc000512cf8 pc=0xf76a2e
k8s.io/client-go/discovery.ServerPreferredResources({0x176e1a0, 0xc000054540})
/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:287 +0x42 fp=0xc0005137a8 sp=0xc000512fd8 pc=0xf77da2
k8s.io/client-go/discovery.(*DiscoveryClient).ServerPreferredResources.func1()
/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:387 +0x25 fp=0xc0005137c8 sp=0xc0005137a8 pc=0xf78f65
k8s.io/client-go/discovery.withRetries(0x2, 0xc0005137f0)
/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:450 +0x72 fp=0xc0005137e0 sp=0xc0005137c8 pc=0xf797b2
k8s.io/client-go/discovery.(*DiscoveryClient).ServerPreferredResources(0xc0003a3770?)
/home/runner/pkg/mod/k8s.io/[email protected]/discovery/discovery_client.go:386 +0x3a fp=0xc000513810 sp=0xc0005137e0 pc=0xf78efa
github.com/alcideio/rbac-tool/pkg/kube.NewClient({0x0, 0x0})
/home/runner/work/rbac-tool/rbac-tool/pkg/kube/client.go:60 +0x1a5 fp=0xc0005138e0 sp=0xc000513810 pc=0x101e225
github.com/alcideio/rbac-tool/cmd.NewCommandWhoCan.func1(0xc0004cf600?, {0xc0004de2a0?, 0x2?, 0x2?})
/home/runner/work/rbac-tool/rbac-tool/cmd/whocan_cmd.go:122 +0x1fc fp=0xc000513da8 sp=0xc0005138e0 pc=0x129685c
github.com/spf13/cobra.(*Command).execute(0xc0004cf600, {0xc0004de260, 0x2, 0x2})
/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:842 +0x67c fp=0xc000513e80 sp=0xc000513da8 pc=0x11966dc
github.com/spf13/cobra.(*Command).ExecuteC(0xc0004ce000)
/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:950 +0x39d fp=0xc000513f38 sp=0xc000513e80 pc=0x1196cbd
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main()
/home/runner/work/rbac-tool/rbac-tool/main.go:65 +0x1e fp=0xc000513f80 sp=0xc000513f38 pc=0x1297bbe
runtime.main()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:250 +0x212 fp=0xc000513fe0 sp=0xc000513f80 pc=0x438352
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000513fe8 sp=0xc000513fe0 pc=0x465c81
goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000068fb0 sp=0xc000068f90 pc=0x438716
runtime.goparkunlock(...)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:369
runtime.forcegchelper()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:302 +0xad fp=0xc000068fe0 sp=0xc000068fb0 pc=0x4385ad
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000068fe8 sp=0xc000068fe0 pc=0x465c81
created by runtime.init.6
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:290 +0x25
goroutine 3 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000069790 sp=0xc000069770 pc=0x438716
runtime.goparkunlock(...)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:369
runtime.bgsweep(0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgcsweep.go:297 +0xd7 fp=0xc0000697c8 sp=0xc000069790 pc=0x424e37
runtime.gcenable.func1()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:178 +0x26 fp=0xc0000697e0 sp=0xc0000697c8 pc=0x419a86
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000697e8 sp=0xc0000697e0 pc=0x465c81
created by runtime.gcenable
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:178 +0x6b
goroutine 4 [GC scavenge wait]:
runtime.gopark(0xc000088000?, 0x1750558?, 0x0?, 0x0?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000069f70 sp=0xc000069f50 pc=0x438716
runtime.goparkunlock(...)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:369
runtime.(*scavengerState).park(0x21de720)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgcscavenge.go:389 +0x53 fp=0xc000069fa0 sp=0xc000069f70 pc=0x422e93
runtime.bgscavenge(0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgcscavenge.go:622 +0x65 fp=0xc000069fc8 sp=0xc000069fa0 pc=0x423485
runtime.gcenable.func2()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:179 +0x26 fp=0xc000069fe0 sp=0xc000069fc8 pc=0x419a26
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000069fe8 sp=0xc000069fe0 pc=0x465c81
created by runtime.gcenable
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:179 +0xaa
goroutine 5 [finalizer wait]:
runtime.gopark(0x438a97?, 0x49?, 0xe8?, 0xda?, 0xc000068770?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000068628 sp=0xc000068608 pc=0x438716
runtime.goparkunlock(...)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:369
runtime.runfinq()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mfinal.go:180 +0x10f fp=0xc0000687e0 sp=0xc000068628 pc=0x418b8f
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000687e8 sp=0xc0000687e0 pc=0x465c81
created by runtime.createfing
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mfinal.go:157 +0x45
goroutine 6 [chan receive]:
runtime.gopark(0xc00006a6d8?, 0x43e57b?, 0x20?, 0xa7?, 0x454245?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00006a6c8 sp=0xc00006a6a8 pc=0x438716
runtime.chanrecv(0xc000180000, 0xc00006a7a0, 0x1)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/chan.go:583 +0x49b fp=0xc00006a758 sp=0xc00006a6c8 pc=0x406cdb
runtime.chanrecv2(0x12a05f200?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/chan.go:447 +0x18 fp=0xc00006a780 sp=0xc00006a758 pc=0x406818
k8s.io/klog.(*loggingT).flushDaemon(0x0?)
/home/runner/pkg/mod/k8s.io/[email protected]/klog.go:1010 +0x6a fp=0xc00006a7c8 sp=0xc00006a780 pc=0x50964a
k8s.io/klog.init.0.func1()
/home/runner/pkg/mod/k8s.io/[email protected]/klog.go:411 +0x26 fp=0xc00006a7e0 sp=0xc00006a7c8 pc=0x507326
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006a7e8 sp=0xc00006a7e0 pc=0x465c81
created by k8s.io/klog.init.0
/home/runner/pkg/mod/k8s.io/[email protected]/klog.go:411 +0xef
goroutine 7 [chan receive]:
runtime.gopark(0x1b17c9725b8?, 0x0?, 0x20?, 0xaf?, 0x454245?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00006aec8 sp=0xc00006aea8 pc=0x438716
runtime.chanrecv(0xc000114000, 0xc00006afa0, 0x1)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/chan.go:583 +0x49b fp=0xc00006af58 sp=0xc00006aec8 pc=0x406cdb
runtime.chanrecv2(0x12a05f200?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/chan.go:447 +0x18 fp=0xc00006af80 sp=0xc00006af58 pc=0x406818
k8s.io/klog/v2.(*loggingT).flushDaemon(0x0?)
/home/runner/pkg/mod/k8s.io/klog/[email protected]/klog.go:1131 +0x6a fp=0xc00006afc8 sp=0xc00006af80 pc=0x6279ea
k8s.io/klog/v2.init.0.func1()
/home/runner/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0x26 fp=0xc00006afe0 sp=0xc00006afc8 pc=0x625646
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006afe8 sp=0xc00006afe0 pc=0x465c81
created by k8s.io/klog/v2.init.0
/home/runner/pkg/mod/k8s.io/klog/[email protected]/klog.go:416 +0xef
goroutine 8 [GC worker (idle)]:
runtime.gopark(0x5d8781874f?, 0x0?, 0x0?, 0x0?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00006b750 sp=0xc00006b730 pc=0x438716
runtime.gcBgMarkWorker()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1235 +0xf1 fp=0xc00006b7e0 sp=0xc00006b750 pc=0x41bbd1
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00006b7e8 sp=0xc00006b7e0 pc=0x465c81
created by runtime.gcBgMarkStartWorkers
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1159 +0x25
goroutine 17 [GC worker (idle)]:
runtime.gopark(0x5d8784fecc?, 0x0?, 0x0?, 0x0?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000064750 sp=0xc000064730 pc=0x438716
runtime.gcBgMarkWorker()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1235 +0xf1 fp=0xc0000647e0 sp=0xc000064750 pc=0x41bbd1
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000647e8 sp=0xc0000647e0 pc=0x465c81
created by runtime.gcBgMarkStartWorkers
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1159 +0x25
goroutine 33 [GC worker (idle)]:
runtime.gopark(0x5d86fd0988?, 0x0?, 0x0?, 0x0?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00019a750 sp=0xc00019a730 pc=0x438716
runtime.gcBgMarkWorker()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1235 +0xf1 fp=0xc00019a7e0 sp=0xc00019a750 pc=0x41bbd1
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00019a7e8 sp=0xc00019a7e0 pc=0x465c81
created by runtime.gcBgMarkStartWorkers
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1159 +0x25
goroutine 34 [GC worker (idle)]:
runtime.gopark(0x5d87845fec?, 0x0?, 0x0?, 0x0?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc00019af50 sp=0xc00019af30 pc=0x438716
runtime.gcBgMarkWorker()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1235 +0xf1 fp=0xc00019afe0 sp=0xc00019af50 pc=0x41bbd1
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00019afe8 sp=0xc00019afe0 pc=0x465c81
created by runtime.gcBgMarkStartWorkers
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/mgc.go:1159 +0x25
goroutine 9 [select]:
runtime.gopark(0xc000064fa0?, 0x3?, 0x0?, 0x0?, 0xc000064f82?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000064e08 sp=0xc000064de8 pc=0x438716
runtime.selectgo(0xc000064fa0, 0xc000064f7c, 0x0?, 0x0, 0x0?, 0x1)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/select.go:328 +0x7bc fp=0xc000064f48 sp=0xc000064e08 pc=0x447a9c
net/http.setRequestCancel.func4()
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:397 +0x8b fp=0xc000064fe0 sp=0xc000064f48 pc=0x732e2b
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000064fe8 sp=0xc000064fe0 pc=0x465c81
created by net/http.setRequestCancel
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/client.go:396 +0x44a
goroutine 21 [IO wait]:
runtime.gopark(0x1d21?, 0xb?, 0x0?, 0x0?, 0x3?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/proc.go:363 +0xd6 fp=0xc000079618 sp=0xc0000795f8 pc=0x438716
runtime.netpollblock(0x4b2f85?, 0xa?, 0x0?)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/netpoll.go:526 +0xf7 fp=0xc000079650 sp=0xc000079618 pc=0x4312d7
internal/poll.runtime_pollWait(0x7fb1554a5ef8, 0x72)
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/netpoll.go:305 +0x89 fp=0xc000079670 sp=0xc000079650 pc=0x4608e9
internal/poll.(*pollDesc).wait(0xc00011ca00?, 0xc000018a00?, 0x0)
/opt/hostedtoolcache/go/1.19.9/x64/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc000079698 sp=0xc000079670 pc=0x4cd0b2
internal/poll.(*pollDesc).waitRead(...)
/opt/hostedtoolcache/go/1.19.9/x64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00011ca00, {0xc000018a00, 0x2500, 0x2500})
/opt/hostedtoolcache/go/1.19.9/x64/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc000079718 sp=0xc000079698 pc=0x4ce41a
net.(*netFD).Read(0xc00011ca00, {0xc000018a00?, 0xc0002bb280?, 0xc0000191df?})
/opt/hostedtoolcache/go/1.19.9/x64/src/net/fd_posix.go:55 +0x29 fp=0xc000079760 sp=0xc000079718 pc=0x5e5b29
net.(*conn).Read(0xc00011a0a0, {0xc000018a00?, 0x4b5?, 0xc0002bb280?})
/opt/hostedtoolcache/go/1.19.9/x64/src/net/net.go:183 +0x45 fp=0xc0000797a8 sp=0xc000079760 pc=0x5f3905
crypto/tls.(*atLeastReader).Read(0xc00063ff38, {0xc000018a00?, 0x0?, 0x479008?})
/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:787 +0x3d fp=0xc0000797f0 sp=0xc0000797a8 pc=0x6df53d
bytes.(*Buffer).ReadFrom(0xc000536978, {0x1752f20, 0xc00063ff38})
/opt/hostedtoolcache/go/1.19.9/x64/src/bytes/buffer.go:202 +0x98 fp=0xc000079848 sp=0xc0000797f0 pc=0x4794d8
crypto/tls.(*Conn).readFromUntil(0xc000536700, {0x1755820?, 0xc00011a0a0}, 0x1d26?)
/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:809 +0xe5 fp=0xc000079888 sp=0xc000079848 pc=0x6df725
crypto/tls.(*Conn).readRecordOrCCS(0xc000536700, 0x0)
/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:616 +0x116 fp=0xc000079c10 sp=0xc000079888 pc=0x6dcb76
crypto/tls.(*Conn).readRecord(...)
/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:582
crypto/tls.(*Conn).Read(0xc000536700, {0xc000666000, 0x1000, 0x744380?})
/opt/hostedtoolcache/go/1.19.9/x64/src/crypto/tls/conn.go:1315 +0x16f fp=0xc000079c80 sp=0xc000079c10 pc=0x6e2aef
bufio.(*Reader).Read(0xc000323920, {0xc0000faf20, 0x9, 0x7527c5?})
/opt/hostedtoolcache/go/1.19.9/x64/src/bufio/bufio.go:237 +0x1bb fp=0xc000079cb8 sp=0xc000079c80 pc=0x4fccfb
io.ReadAtLeast({0x1752dc0, 0xc000323920}, {0xc0000faf20, 0x9, 0x9}, 0x9)
/opt/hostedtoolcache/go/1.19.9/x64/src/io/io.go:332 +0x9a fp=0xc000079d00 sp=0xc000079cb8 pc=0x471afa
io.ReadFull(...)
/opt/hostedtoolcache/go/1.19.9/x64/src/io/io.go:351
net/http.http2readFrameHeader({0xc0000faf20?, 0x9?, 0xc000542030?}, {0x1752dc0?, 0xc000323920?})
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:1565 +0x6e fp=0xc000079d50 sp=0xc000079d00 pc=0x73c32e
net/http.(*http2Framer).ReadFrame(0xc0000faee0)
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:1829 +0x95 fp=0xc000079e00 sp=0xc000079d50 pc=0x73cb95
net/http.(*http2clientConnReadLoop).run(0xc000079f98)
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:8874 +0x130 fp=0xc000079f60 sp=0xc000079e00 pc=0x74f670
net/http.(*http2ClientConn).readLoop(0xc000538000)
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:8770 +0x6f fp=0xc000079fc8 sp=0xc000079f60 pc=0x74eb8f
net/http.(*http2Transport).newClientConn.func1()
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:7477 +0x26 fp=0xc000079fe0 sp=0xc000079fc8 pc=0x747866
runtime.goexit()
/opt/hostedtoolcache/go/1.19.9/x64/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000079fe8 sp=0xc000079fe0 pc=0x465c81
created by net/http.(*http2Transport).newClientConn
/opt/hostedtoolcache/go/1.19.9/x64/src/net/http/h2_bundle.go:7477 +0xaaa
It also creates a config.lock file which is not removed after the seg-fault.
The config file contains only one cluster, and regular access to the cluster via kubectl
works without any noticeable issues.
I actually have no clue yet where to start debugging.
Environment:
- Kubernetes version (use `kubectl version`): Client: 1.27, Server: 1.23

What happened:
See https://imgur.com/a/TpcIyRx
The sa/c-sa exists in the namespace as per this ..
kubectl get sa,roles,rolebindings -n staranto
NAME SECRETS AGE
serviceaccount/builder 2 5d17h
serviceaccount/c-sa 2 14m
serviceaccount/default 2 5d17h
serviceaccount/deployer 2 5d17h
NAME AGE
role.rbac.authorization.k8s.io/role-core 15h
role.rbac.authorization.k8s.io/role-privileged 7m5s
NAME AGE
rolebinding.rbac.authorization.k8s.io/admin 5d17h
rolebinding.rbac.authorization.k8s.io/c-sa-core-rolebinding 13m
rolebinding.rbac.authorization.k8s.io/c-sa-privileged-rolebinding 7m5s
rolebinding.rbac.authorization.k8s.io/system:deployers 5d17h
rolebinding.rbac.authorization.k8s.io/system:image-builders 5d17h
rolebinding.rbac.authorization.k8s.io/system:image-pullers 5d17h
What you expected to happen:
I expect the c-sa subject to be rendered in the namespace and not flagged as missing.
How to reproduce it (as minimally and precisely as possible):
`rbac-tool viz --outformat dot --outfile rbac.dot --include-subjects c-sa`
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):
Client Version: v1.18.3
Server Version: v1.17.1+912792b
- Cloud provider or configuration:
OpenShift 4.4.9
- Install tools:
rbac-tool version
Version: 0.9.0
Commit: 3b08e35c143a8b7ecf3a43303bca1c7dfe19c837
- Others:
dot -V
dot - graphviz version 2.43.0 (0)
The first 3 rules could be collapsed into 1 rule:
TYPE | SUBJECT | VERBS | NAMESPACE | API GROUP | KIND | NAMES | NONRESOURCEURI | ORIGINATED FROM
+----------------+---------------+-------+-------------+-----------+---------+-------------+----------------+--------------------------------+
ServiceAccount | the-test-user | get | policyrules | core | * | | | Roles>>policyrules/some-rules
ServiceAccount | the-test-user | get | policyrules | core | * | | | Roles>>policyrules/more-rules
ServiceAccount | the-test-user | get | policyrules | core | secrets | some-secret | | Roles>>policyrules/some-rules
ServiceAccount | the-test-user | get | policyrules | core | secrets | | | Roles>>policyrules/more-rules
ServiceAccount | the-test-user | list | policyrules | core | secrets | some-secret | | Roles>>policyrules/some-rules
ServiceAccount | the-test-user | watch | policyrules | core | secrets | some-secret | | Roles>>policyrules/some-rules
Why is this needed:
Having that functionality would reduce the number of rules one needs to review, while still describing the same actual, effective policy.
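The collapsing being asked for is a subsumption check over PolicyRules. The sketch below is a hedged illustration, not rbac-tool's actual implementation; the rules are plain dicts rather than Kubernetes API types:

```python
# Hedged sketch -- NOT rbac-tool's implementation. A rule is redundant when
# another rule in the set already grants everything it grants.

def covers(a, b):
    """True if rule `a` grants at least everything rule `b` grants."""
    def wider(xa, xb):
        return "*" in xa or set(xb) <= set(xa)
    # An empty resourceNames list means "all names", so only an
    # unrestricted `a` can cover an unrestricted `b`.
    names_ok = a["resourceNames"] == [] or (
        b["resourceNames"] != []
        and set(b["resourceNames"]) <= set(a["resourceNames"])
    )
    return (wider(a["verbs"], b["verbs"])
            and wider(a["apiGroups"], b["apiGroups"])
            and wider(a["resources"], b["resources"])
            and names_ok)

def collapse(rules):
    """Drop every rule fully covered by another rule (keep the first of ties)."""
    return [
        r for i, r in enumerate(rules)
        if not any(
            j != i and covers(rules[j], r)
            and (not covers(r, rules[j]) or j < i)
            for j in range(len(rules))
        )
    ]
```

With the table above modeled this way, the `get` on `*` rule covers both `get` rows on `secrets`, so the first three rows collapse into one while the `list` and `watch` rows survive.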
Is there a way to generate a policy with something like --allowed-objects? I'd like to create a role with just 1 resource instead of putting a list of things to deny? For example it seems like if I only want a policy with 1 allowed resource, I would have to feed in a list of every other resource to deny.
Ex -
rbac-tool gen --allowed-resources=pods. --allowed-verbs=get,list
rbac-tool gen --allowed-resources=pods.,services --allowed-verbs=get,list
instead of...
rbac-tool gen --deny-resources=secrets.,services.,serviceaccount.,pvc.,pv.,...(on and on) --allowed-verbs=get,list
When you generate a role for the core API, for example, nodes are included. This will lead to problems, as nodes, for example, are cluster-wide resources and won't work in a namespaced role.
What would you like to be added:
Add reasons and detailed information for ExclusionCount, if possible.
Why is this needed:
I get the ExclusionCount info in Stats, but have no idea why or what it refers to.
For the why: is it because I'm missing some permission? If yes, which permission in detail?
For the what: what exactly is being excluded?
What would you like to be added:
For each rule violations, provide the list of resources (Pod, Deployment, Job,...) that use that service account.
Why is this needed:
It enables users to see actual risks associated with a rule violation and not only the configuration based violation.
It also helps users to prioritize which rule/issue they'd like to attend first.
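The mapping being requested is essentially a join between a violating ServiceAccount and workload specs. A hedged sketch over simplified workload dicts (the field name follows the Kubernetes pod spec, but these are not client-go types):

```python
# Hedged sketch: list workloads (Pods, Deployments, Jobs, ...) whose pod
# spec uses a given ServiceAccount. Workloads are simplified dicts here,
# not real Kubernetes objects.

def workloads_using(workloads, namespace, sa_name):
    return [
        (w["kind"], w["name"])
        for w in workloads
        if w["namespace"] == namespace
        # An unset serviceAccountName means the "default" ServiceAccount.
        and w.get("serviceAccountName", "default") == sa_name
    ]
```

Attaching such a list to each rule violation would show whether a risky role is actually exercised by running workloads.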
rbac-tool/cmd/visualize_cmd.go
Line 79 in 2833fda
Hello folks, to my understanding, the --exclude-namespaces option excludes namespaces from the visualization. However, the output of the visualize --help command says that it's a "Comma-delimited list of namespaces to include in the visualization (default "kube-system")". If it's indeed a mistake, I would be happy to submit a PR to fix it :)
Not sure if this is a question or an enhancement request, but I was a bit surprised to see that the rbac-tool lookup output doesn't show the corresponding [Cluster]RoleBindings associating the given ServiceAccount with the outputted [Cluster]Roles. I've looked at rbac-tool lookup --help, but didn't see anything relevant. Is this not possible currently?
My use case is that I already know which [Cluster]Roles the ServiceAccount is associated with, but I don't know from which [Cluster]RoleBindings, if that makes sense.
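The reverse lookup being described is a simple join over bindings. A hedged sketch, not rbac-tool internals, using simplified binding dicts:

```python
# Hedged sketch -- not rbac-tool internals. Given simplified binding
# objects, find which [Cluster]RoleBindings tie a ServiceAccount to a
# given role.

def bindings_granting(bindings, role_name, sa_namespace, sa_name):
    hits = []
    for b in bindings:
        if b["roleRef"] != role_name:
            continue
        for s in b["subjects"]:
            if (s["kind"] == "ServiceAccount"
                    and s["namespace"] == sa_namespace
                    and s["name"] == sa_name):
                hits.append(b["name"])
    return hits
```

An extra column with this binding name in the lookup output would answer the "from which binding?" question directly.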
What would you like to be added:
It would be nice to add subresource support to the RBAC generation functionality.
Why is this needed:
It would make the generated rules useful =)
Right now I have to rewrite them manually after generation.
What would you like to be added:
When performing rbac-tool policy-rules {serviceAccount}, I would like to have 2 columns at the end, where for each action it shows from which (cluster)role it gets the right to do it. For example:
Why is this needed:
If you want to manage (specifically remove for my case) an action that a SA can perform on a resource, it would be neat to see from which (cluster)roles this service account gets its rights.
The utility throws a segmentation fault on a MacBook Pro (darwin/amd64).
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9446 100 9446 0 0 33843 0 --:--:-- --:--:-- --:--:-- 34727
alcideio/rbac-tool info checking GitHub for latest tag
alcideio/rbac-tool info found version: 1.13.0 for v1.13.0/darwin/amd64
alcideio/rbac-tool info installed ./bin/rbac-tool
❯ ./bin/rbac-tool version
[1] 20107 segmentation fault ./bin/rbac-tool version
❯ ./bin/rbac-tool help
[1] 20243 segmentation fault ./bin/rbac-tool help
❯ ./bin/rbac-tool
[1] 20310 segmentation fault ./bin/rbac-tool
What happened:
Currently some resources, like events.events.k8s.io or nodes.metrics.k8s.io, don't show up in the "show" output, despite existing in the cluster.
What you expected to happen:
The "show" output includes those API groups and resources.
How to reproduce it (as minimally and precisely as possible):
Have a plain upstream cluster for events.events.k8s.io; use the metrics-server for metrics.k8s.io.
Anything else we need to know?:
n/a
Environment:
- Kubernetes version (use `kubectl version`):

What would you like to be added:
@gadinaor @austinpray-mixpanel, could ARM releases be made available for the latest version? It should be a quick GoReleaser config change.
Per the k8s RBAC documentation, there are special cases.
The following cases need to be covered:
Reference: https://www.impidio.com/blog/kubernetes-rbac-security-pitfalls
What would you like to be added:
Add flags to customize:
Why is this needed:
For the rbac-tool gen and rbac-tool show commands, it would be useful for automation to be able to customize the object metadata during role generation.
For example:
# Generate a ClusterRole with all the available permissions for core and apps api groups
rbac-tool show \
--for-groups=,apps \
--scope namespace \
--name foo \
--namespace bar \
--annotations argocd.argoproj.io/sync-wave=2,rbac.authorization.kubernetes.io/autoupdate=true
With these flags it would be possible to generate fully functional roles without having to make modifications to the YAML after running the tool.
What happened:
[130] % rbac-tool viz --outformat dot
[alcide-rbactool] Namespaces included '*'
[alcide-rbactool] Namespaces excluded 'kube-system'
[alcide-rbactool] Connecting to cluster ''
[alcide-rbactool] Generating Graph and Saving as 'rbac.html'
[0] % head -2 rbac.html
digraph {
subgraph cluster_s296 {
What you expected to happen:
I expect the dot output file to be named .dot :-D
Looks like it's connected to issue 8.
How to reproduce it (as minimally and precisely as possible):
rbac-tool viz --outformat dot
Anything else we need to know?:
Dot itself renders the file fine; it looks like just a file naming error.
It would also be nice to see the version used, at least in -h.
Environment:
- Kubernetes version (use `kubectl version`): irrelevant, happens with 1.15.11 and also 1.18.x

What would you like to be added:
I'd like `rbac-tool analyze` to warn about (Cluster)RoleBindings for accounts that don't exist, or no longer exist, in the cluster.
Why is this needed:
Unnecessary permissions are a security risk and should be audited.
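The proposed check amounts to cross-referencing binding subjects against the ServiceAccounts that actually exist. A hedged sketch over simplified objects (not rbac-tool's analyze engine):

```python
# Hedged sketch of the proposed check: flag bindings whose ServiceAccount
# subjects no longer exist. `existing` is a set of (namespace, name) pairs
# that would normally be listed from the cluster.

def dangling_bindings(bindings, existing):
    stale = []
    for b in bindings:
        for s in b["subjects"]:
            if (s["kind"] == "ServiceAccount"
                    and (s["namespace"], s["name"]) not in existing):
                stale.append(b["name"])
                break  # one missing subject is enough to flag the binding
    return stale
```

User and Group subjects are skipped here on purpose, since they have no in-cluster object to check existence against.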
What happened:
Running the following command within a k8s container fails:
$ rbac-tool who-can create mysqlinstances.database.orange.com
[...]
Failed to run program - memory budget exceeded (6:24)
| { .Verb in [Verb, "*"] and
| .......................^
In htop, I see 6 processes with VIRT at 1.3 GB prior to the crash.
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
$ rbac-tool who-can create mysqlinstances.database.orange.com -v 9
[...]
I0301 11:09:54.444305 1881 subject_permissions.go:72] {Kind:ServiceAccount APIGroup: Name:deployer [...]
Failed to run program - memory budget exceeded (6:24)
| { .Verb in [Verb, "*"] and
| .......................^
Environment:
- Kubernetes version (use `kubectl version`):

I tried to install the plugin today and I got this error message on my M1 Mac:
kubectl krew install rbac-tool
Updated the local copy of plugin index.
Installing plugin: rbac-tool
W0719 13:57:13.752477 51960 install.go:164] failed to install plugin "rbac-tool": plugin "rbac-tool" does not offer installation for this platform
F0719 13:57:13.752552 51960 root.go:79] failed to install some plugins: [rbac-tool]: plugin "rbac-tool" does not offer installation for this platform
It would be great if the rbac tool would support the M1 Mac Arm64 platform.
What would you like to be added:
It would be great if rbac-tool could be installed via brew.
Why is this needed:
This would be great, because brew makes it possible to install the shell autocompletion automatically.