sbstp / kubie

A more powerful alternative to kubectx and kubens
Home Page: https://blog.sbstp.ca/introducing-kubie/
License: zlib License
This is similar to how kubectl completion has a __start_kubectl and a _complete_alias. Doing this would allow kctx [tab], kns [tab], etc.
Delete a context. There should also be a garbage collection cycle after deleting the context: if there are no more contexts referencing the user and cluster, they should be removed, and if the file containing the context is now empty, it should be deleted as well.
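The garbage-collection check described above can be sketched as follows. This is a minimal, std-only sketch; the struct and function names are hypothetical, not kubie's actual types.

```rust
// Hypothetical minimal model: each kubeconfig context references a user and a
// cluster by name.
struct Context {
    name: String,
    user: String,
    cluster: String,
}

// After deleting `deleted`, report whether its user and its cluster are now
// orphaned (referenced by no remaining context). A GC pass would then remove
// those entries, and delete the containing file if it becomes empty.
fn orphans(remaining: &[Context], deleted: &Context) -> (bool, bool) {
    let user_live = remaining.iter().any(|c| c.user == deleted.user);
    let cluster_live = remaining.iter().any(|c| c.cluster == deleted.cluster);
    (!user_live, !cluster_live)
}
```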
I think bash autocomplete is great, but for someone like me using zsh it's difficult to remember yet another set of commands and flags :)
Do you think zsh autocompletion can be added as well?
I've recently migrated my zsh config to use zim. Unfortunately, this broke kubie because it expects to find certain files in known locations:
$ kubie ctx
/Users/c10l/.zshrc:source:102: no such file or directory: /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmpc9w8CC/.zim/zimfw.zsh
/Users/c10l/.zshrc:source:104: no such file or directory: /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmpc9w8CC/.zim/init.zsh
/Users/c10l/.zlogin:source:7: no such file or directory: /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmpc9w8CC/.zim/login_init.zsh
c10l@laptop ~ %
I managed to work around those errors by setting ZIM_HOME=/Users/c10l/.zim, but then something else broke as it tries to find .zimrc under $HOME, which is overridden at that point to the kubie temp dir:
$ kubie ctx
_zimfw_source_zimrc:source:3: no such file or directory: /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmp7RzL0l/.zimrc
Failed to source /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmp7RzL0l/.zimrc
/Users/c10l/.zim/modules/ohmyzsh/plugins/kubectl/kubectl.plugin.zsh:5: read-only file system: /kubectl_completion
Suggestion: allow me to give a list of files and directories to copy to the temp dir when switching contexts. This way I can specify .zimrc and .zim/, which should make this work.
Since we never mutate the actual k8s configs, the last namespace is not remembered when you leave a context and enter it again. Add a state file which remembers the last namespace for each existing context.
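The in-memory shape of such a state file could look like this. The names are hypothetical, and a real implementation would serialize the map to disk; this sketch only shows the remember/recall logic.

```rust
use std::collections::HashMap;

// Hypothetical state: one entry per context, holding the namespace that was
// active when the context was last left.
#[derive(Default)]
struct State {
    namespaces: HashMap<String, String>,
}

impl State {
    // Record the namespace when leaving a context.
    fn remember(&mut self, context: &str, namespace: &str) {
        self.namespaces
            .insert(context.to_string(), namespace.to_string());
    }

    // Look the namespace up when re-entering; fall back to "default".
    fn recall(&self, context: &str) -> &str {
        self.namespaces
            .get(context)
            .map(String::as_str)
            .unwrap_or("default")
    }
}
```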
I've received reports of the bash prompt being weird inside of a kubie shell in OS X, currently investigating the issue.
Instead of using the default user PS1, it uses a weird PS1 that shows bash's version.
It looks like: [context|namespace] bash-3.2$
I'd like to collect more info about this because I don't own a Mac. If anyone experiences this problem, please let me know.
kubie version: 0.8.1
os: debian 10
openssl:
$ dpkg -l | grep libssl
ii libssl1.1:amd64 1.1.1d-0+deb10u2 amd64 Secure Sockets Layer toolkit - shared libraries
error:
$ kubie --help
kubie: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory
When using OIDC with refresh tokens, you need to restart the session to update the token (if the token was updated outside of the session by another client).
To reproduce: use k8s via OIDC in a kubie session and a regular one. The kubie session will break once the token is refreshed.
Hey @sbstp ... What are your thoughts on this idea? I'm happy to submit a PR if you'll accept, figured we could chat about it here first.
Currently, we've got 10 contexts at work, and I've been trying to get used to using kubie exec * to run commands across all of them. However, I'm running into some trouble understanding which resources are coming from which cluster/context.
Example from kubie 0.13.4:
$ kubie exec *-admin default kubectl get scaledobject -o wide --all-namespaces
NAMESPACE NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE
monitoring sqs-queue-foo apps/v1.Deployment foo-app 0 1 aws-sqs-queue True False 9d
No resources found
No resources found
No resources found
No resources found
No resources found
No resources found
NAMESPACE NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE
monitoring sqs-queue-foo apps/v1.Deployment foo-app 0 1 aws-sqs-queue True False 9d
NAMESPACE NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE
monitoring sqs-queue-foo apps/v1.Deployment foo-app 0 1 aws-sqs-queue True False 9d
No resources found
As you can see, it's hard to know which resources are from which cluster/context.
I'd like to propose something like:
$ kubie exec *-admin default kubectl get scaledobject -o wide --all-namespaces
CONTEXT => dev-admin
NAMESPACE NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE
monitoring sqs-queue-foo apps/v1.Deployment foo-app 0 1 aws-sqs-queue True False 9d
CONTEXT => mno-dev-admin
No resources found
CONTEXT => mno-prd-admin
No resources found
CONTEXT => mno-stg-admin
No resources found
CONTEXT => pqr-dev-admin
No resources found
CONTEXT => pqr-prd-admin
No resources found
CONTEXT => pqr-stg-admin
No resources found
CONTEXT => prd-admin
NAMESPACE NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE
monitoring sqs-queue-foo apps/v1.Deployment foo-app 0 1 aws-sqs-queue True False 9d
CONTEXT => stg-admin
NAMESPACE NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE
monitoring sqs-queue-foo apps/v1.Deployment foo-app 0 1 aws-sqs-queue True False 9d
CONTEXT => svc-admin
No resources found
I'm not set on the specific format... and I think having a flag option to disable it would be useful (or perhaps the other way around, though I think a better UX would be to print the context info by default). Let me know what you think about this one.
Still loving Kubie, slowly convincing my co-workers that it's awesome too!
Cheers,
Ty
I expect that when I call kubie ctx my zsh history will be available in the sub-shell; instead I do not have access to any previous commands.
Running kubie 0.9.0 on Arch Linux (5.6.10).
Steps to reproduce:
1.
$ kubie ctx test
$ kubie ns newns
$ kubectl delete ns newns
Now one can't switch to that context anymore:
$ kubie ctx test
Error: 'newns' is not a valid namespace for the context
kubie uses some *nix-specific features like file permissions and signal hooks that don't exist on Windows, so it currently doesn't compile there.
Hopefully those parts can be made optional in certain target configs (e.g. there's no need to set file permissions on Windows, so that code could be put behind a #[cfg(not(windows))] guard) to allow Windows releases to work as well.
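A sketch of the cfg-gating idea, assuming the Unix-only code in question is a chmod on the temporary kubeconfig (the function name is hypothetical, not kubie's actual code): the Unix version compiles only on Unix targets, while other targets get a no-op, so the crate builds everywhere.

```rust
use std::fs::File;

// Unix: restrict the temporary kubeconfig to owner read/write (0600).
#[cfg(unix)]
fn restrict_permissions(file: &File) -> std::io::Result<()> {
    use std::os::unix::fs::PermissionsExt;
    let mut perms = file.metadata()?.permissions();
    perms.set_mode(0o600);
    file.set_permissions(perms)
}

// Non-Unix targets (e.g. Windows): nothing to do, the call compiles away.
#[cfg(not(unix))]
fn restrict_permissions(_file: &File) -> std::io::Result<()> {
    Ok(())
}
```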
I would like to have a 32-bit ARM binary for Raspberry Pi OS
I exported KUBECONFIG as an env var but kubie doesn't use it. It kept saying Error: Not in a kubie shell! when I ran kubie ns. Here are my steps:
export KUBECONFIG=$HOME/.kube/kind
kubie ns
Error above thrown.
Hi there,
New kubie user here. I use iTerm2 and multiple session panes in the same window. Kubie has been a life saver, allowing me to have a pane for each cluster I'm working on and to work on them in parallel using iTerm2's Broadcast Input feature. However, I ran into an issue today where running kubie ns foo in parallel caused all of the kubie sessions to update the state.json file at the same time, which corrupted it.
The error I received later on when I tried to enter a context in a new terminal window was not super helpful in pointing me to what the underlying problem was either:
$ kubie ctx foo
Error: trailing characters at line 1 column 102
I was able to find the state.json file and simply delete it, which allowed kubie to continue working for me again.
I think kubie probably needs a file lock or some other mechanism to prevent parallel invocations from corrupting the state.json file. I also think kubie could provide a little more context around that particular error, perhaps even printing the path of the state.json file it's failing on. (I had originally looked for it at ~/.local/share/kubie/state.json, but on my Mac the state.json was actually in ~/Library/Application Support/kubie/state.json.)
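One possible shape for the locking idea, sketched with only the standard library: a crude cross-process mutex built on O_EXCL. A real implementation would likely prefer flock()-style advisory locks, which the kernel releases automatically if the holder crashes; all names here are illustrative, not kubie's actual code.

```rust
use std::fs::OpenOptions;
use std::path::Path;
use std::{thread, time::Duration};

// Crude cross-process mutex: creating the lock file with create_new (O_EXCL)
// fails while another process holds it, so writers to state.json are
// serialized instead of clobbering each other.
fn with_state_lock<T>(dir: &Path, f: impl FnOnce() -> T) -> T {
    let lock = dir.join("state.lock");
    loop {
        match OpenOptions::new().write(true).create_new(true).open(&lock) {
            Ok(_) => break,                                     // we own the lock
            Err(_) => thread::sleep(Duration::from_millis(10)), // busy: retry
        }
    }
    let result = f();
    let _ = std::fs::remove_file(&lock); // release
    result
}
```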
I've been looking for an excuse to learn Rust, so this may be it... but figured I'd start by raising an issue in case someone else is able/wants to look at it before I can get around to learning enough to figure it out and submit a PR.
Cheers,
Ty
Trying our company's k8s on my M1 at the moment and I would love to use kubie. Does it already have a release binary for aarch64? And can we get the Homebrew bottle updated to easily install it?
Normally when I use kubectl to interact with shared/production clusters, I use credentials for a user with read-only permissions. But the user has the ability to impersonate a user with admin-level permissions. So if I need to make a change, I can run something like:
kubectl --as=admin --as-group=system:masters delete pod ${pod_name}
It's kind of like using sudo to execute commands with higher permissions.
It would be useful for me to be able to run something like this:
kubie ctx --as=admin --as-group=system:masters --as-reason="deleting stuck pod"
That would give me a shell where the temporary kubeconfig file would have:
users:
- name: user-as-admin
  user:
    # Normal credentials here, could be client certs, exec, whatever
    username: foo
    password: password123
    # Impersonation information below
    as: admin
    as-groups:
    - system:masters
    as-user-extra:
      reason:
      - deleting stuck pod
Then I could run a few commands with those credentials, and then exit out of the shell.
This would also be useful for CLI tools like istioctl that do not support passing --as as a command-line flag.
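To illustrate, the proposed flags could map onto the kubeconfig impersonation fields (`as`, `as-groups`, `as-user-extra.reason`) roughly like this. The struct and function are a hypothetical sketch, not kubie's actual types.

```rust
// Hypothetical intermediate representation of the --as / --as-group /
// --as-reason flags before they are written into the temporary kubeconfig.
struct Impersonation {
    user: String,
    groups: Vec<String>,
    reason: Option<String>,
}

fn from_flags(as_user: &str, as_groups: &[&str], as_reason: Option<&str>) -> Impersonation {
    Impersonation {
        user: as_user.to_string(),
        groups: as_groups.iter().map(|g| g.to_string()).collect(),
        reason: as_reason.map(str::to_string),
    }
}
```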
Hi @sbstp ,
If the KUBECONFIG variable becomes invalid, kubie stops functioning:
[13:08:30 ~ 0] >> pierre $ kubie ctx api-prod
[api-prod|monitoring] [13:08:32 ~ 0] >> pierre $ export KUBECONFIG="/dev/null"
Error: EOF while parsing a value
Error: EOF while parsing a value
[|] [13:08:45 ~ 0] >> pierre $
Error: EOF while parsing a value
Error: EOF while parsing a value
[|] [13:08:48 ~ 0] >> pierre $ kubie ctx a
Error: Could not find context a
Error: EOF while parsing a value
Error: EOF while parsing a value
[|] [13:08:52 ~ 1] >> pierre $ kubie ctx api-prod
Error: EOF while parsing a value
Error: EOF while parsing a value
[|] [13:08:57 ~ 0] >> pierre $
We could use a KUBIE_KUBECONFIG variable to avoid this.
What do you think?
Thanks,
Pierre
Would it be possible to use Fish as a subshell?
https://github.com/lotabout/skim
Downside: potential impact on binary size
It's unable to enter a context which is scoped to only one namespace:
kubie ctx gitlab@cluster
Error: Error calling kubectl: Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:klum:gitlab" cannot list resource "namespaces" in API group "" at the cluster scope
where gitlab@cluster has a role like this:
rules:
- apiGroups:
  - ""
  - extensions
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - '*'
$ wc -l .zsh_history
1 .zsh_history
$ tail -6 .zsh_history
: 1626419269:0;echo "Hello, World!"
$ kubie ctx foo
$ kubectl get pods -l app=bar -o wide
$ exit # exit sub-shell
$ tail -6 .zsh_history
: 1626419269:0;echo "Hello, World!"
: 1626418505:0;kubie ctx
: 1626419248:0;tail -6 .zsh_history
It would be a great feature if we were able to pass our command history back to the main shell in order to keep history maintained across sub-shells. Currently, we do not have access to any previous commands after we exit the sub-shell.
kubie enter <k8s-config> [-n <context_name>]
Hello,
Would it be possible to add a homebrew formula when you release versions of this application? Would allow installing and updating to be easy for users.
I tried to run kubie (v0.11.1) on my Alpine-based jumphost and it doesn't work. On my Fedora and Ubuntu based workstations it works well.
Maybe it would be a good idea to statically link all needed libraries, as kubectl, helm and so on are all statically linked.
Here's the output of ldd:
$ ldd bin/kubie-linux-amd64
/lib64/ld-linux-x86-64.so.2 (0x7f7695b61000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7f7695657000)
libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f7695b61000)
libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f7695b61000)
libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f7695b61000)
Error relocating bin/kubie-linux-amd64: __res_init: symbol not found
Error relocating bin/kubie-linux-amd64: __register_atfork: symbol not found
kubie ctx spawns a shell with the current context (or prompts with fzf if you have multiple) and uses the last used namespace automatically. kubie shows the help.
My typical development flow is using kubie ctx <context> (the context is the same most of the time) to get into a shell. So I made myself a short alias for kubie ctx <context> in order to jump into my shell quickly, but then I need another alias for kubie or even kubie ns to switch my namespace.
So my idea: what if just executing kubie spawned a shell with the last context and namespace you used, or showed the help if there is no last context? That would mean you only have to alias kubie to kci, for example.
Run kci -> get into a shell. Run kci ns <whatever> -> new namespace. Run kci ctx <whatever> -> get into another context, and also get into that context the next time you just run kci.
What do you think about that idea?
Hi, I would love to see an auto-update feature in this app.
o/
Hello, I have a few k8s clusters: old clusters on version 1.13.12 and new ones on 1.16.13. When I execute a command with kubectl 1.16, e.g. kubectl describe ingress on a 1.13 cluster, I get an error:
Error from server (NotFound): the server could not find the requested resource
So I must use an older kubectl version, for example 1.14. I would like kubie to set my kubectl binary per cluster; kubie could read the kubectl binary path from the cluster config file.
I have the same problem with helm: on old clusters I use helm2, and on new ones helm3.
My proposal is to add an aliases configuration:
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority-data:
    server:
    aliases:
      kubectl: "~/bin/kubectl-1.14"
      helm: "~/bin/helm2.14.1"
  name:
contexts:
- context:
    cluster:
    user:
    namespace:
  name:
current-context:
users:
- name:
  user:
    client-certificate-data:
    client-key-data:
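The lookup side of this proposal would be simple; a hypothetical sketch of how kubie could resolve a tool name through the cluster's aliases map before spawning the sub-shell, falling back to whatever is on PATH:

```rust
use std::collections::HashMap;

// Resolve a tool name through the cluster's `aliases` map (from the proposed
// config above); when no alias is configured, use the plain name so the
// binary is found on PATH as usual.
fn resolve_binary<'a>(aliases: &'a HashMap<String, String>, tool: &'a str) -> &'a str {
    aliases.get(tool).map(String::as_str).unwrap_or(tool)
}
```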
Support spawning ZSH sub shells.
As mentioned in the recent blog post, zsh support is planned for the future.
I get the error below when using Kubie on Arch Linux. I found that Arch has libssl.so.1.1 but not libssl.so.1.0.0, and there seems to be no way to install that version. Can this be fixed? I really like Kubie. Thanks.
kubie: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory
Looks like kubie doesn't handle copying the certs/keys to the /tmp folder. The error below indicates that. Please see if you can fix this. Thanks.
Great work btw.
[minikube|default] ➜ .kube git:(master) k get ns
Error in configuration:
* unable to read client-cert /tmp/minikube-client.crt for minikube due to open /tmp/minikube-client.crt: no such file or directory
* unable to read client-key /tmp/minikube-client.key for minikube due to open /tmp/minikube-client.key: no such file or directory
* unable to read certificate-authority /tmp/minikube-ca.crt for minikube due to open /tmp/minikube-ca.crt: no such file or directory
Is there a way to set the last selected context as the default for each new terminal opened?
Add the ability to disable the kubie prompt. If the shell already has a prompt it can get quite verbose.
In my environment, I have two files in my KUBECONFIG env var (essentially one local copy and another in version control, organisation-wide) separated by a colon. Both kubie and kubectl can work with contexts that are solely in a single file; however, kubie (0.15.1) throws an error if a context references a user that is present in the other file. In comparison, kubectl (v1.21-beta.0) handles this situation correctly.
For example if you run kubie exec random random echo random
there will be shell output informing the user that the context does not exist.
This would be a nice QoL improvement for debugging scripts that use the kubie exec command.
So first off, may I just say: wow, your tool is awesome and you've put a lot of attention into it. I've already switched over to using it and am trying to get my colleagues on it too. Being able to work with multiple clusters is a must for my daily work, so this has been amazing.
I was curious as to why you are creating a new shell whenever you switch to a new k8s cluster. I admit I haven't had time to fully dig through your code, but it seems like you are using a combination of new kubeconfig files stored in temp directories and environment variables to allow multiple clusters in different terminal windows, which is a really cool idea.
However, I was wondering: why open a new shell? For me the additional shell adds a significant "switch" time to the process, and I'm not clear why, as just starting a new zsh doesn't take as long. What if, instead of opening a new shell with environment variables, kubie still copied and created the new kubeconfig files like it does now, but output those environment variables when it exits? Then an eval call could execute the commands.
For example something like this:
function kubie() {
    eval $(cd /opt/tools/kubie ; kubie $@)
}
When your app exits it outputs export KUBECONFIG=/tmp/path/file, and the eval would trigger and set the values within the current shell; any new cluster switch would then delete the previous files and create new ones.
I realize using the function-eval call is a bit weird, but I've used it for some small tools with great success. I was curious if there were specific reasons why you wanted to spawn a new shell.
Thanks again great tool, just curious to understand better. :)
I think bash autocomplete is great, but for someone like me using fish it's difficult to remember yet another set of commands and flags :)
Do you think fish autocompletion can be added as well?
Please force permissions on the temporary KUBECONFIG to be not group/world readable; this should not follow the system/user umask.
$ umask
0022
$ kubie ctx default
$ echo $KUBECONFIG
/tmp/kubie-config1585937564032423433.yaml
$ ls -la /tmp/kubie-config1585937564032423433.yaml
-rw-r--r-- 1 user group 1109 Apr 3 20:12 /tmp/kubie-config1585937564032423433.yaml
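A sketch of how the permissions could be forced regardless of umask (assuming Unix, and a hypothetical function name): the mode passed to open(2) is filtered by the umask, but a follow-up fchmod via set_permissions is not, so the file ends up owner-only even under a permissive umask.

```rust
use std::fs::{File, OpenOptions, Permissions};
use std::os::unix::fs::{OpenOptionsExt, PermissionsExt};
use std::path::Path;

// Create the temporary kubeconfig with mode 0600 at open time so it is never
// world-readable, then chmod it to exactly 0600. chmod ignores the umask,
// unlike the mode argument to open(2).
fn create_private(path: &Path) -> std::io::Result<File> {
    let file = OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true)
        .mode(0o600) // applied at creation, still masked by umask
        .open(path)?;
    file.set_permissions(Permissions::from_mode(0o600))?; // exact, umask-proof
    Ok(file)
}
```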
The information can be stored in a temporary file.
Hi there,
I was interested in building a plugin for asdf, but I noticed that the Rust binaries are rather awkwardly kept in the git repository itself.
It would be much nicer if instead they were deployed to the GitHub releases section. This would allow me to use GitHub's API to produce a list of version numbers and to install binaries.
☸ kind-kind (default) in ~ took 2m47s
$ k get <TAB> # works
$ kubie ctx foo # new shell
$ k get <TAB> # does not work
$ zsh --version
zsh 5.8 (x86_64-apple-darwin20.1.0)
Any ideas?
I am connecting to some clusters where I don't have the right to switch namespaces, so I expect kubie ns by itself to not work. However, I think that kubie ns NAMESPACE should work without trying to list the namespaces. Right now it fails with:
Error: Error calling kubectl: Error from server (Forbidden): namespaces is forbidden: User "SOME-USER" cannot list resource "namespaces" in API group "" at the cluster scope
i.e. if we do not want to print these out:
$ kubie ctx
Error loading kubeconfig /Users/furkan.turkal/.kube/http-cache: Is a directory (os error 21)
Error loading kubeconfig /Users/furkan.turkal/auth/foo/bar/MY_CONFIG: duplicate field `clusters` at line 1 column 11
Error loading kubeconfig /Users/furkan.turkal/.kube/kubectx: invalid type: string "kubernetes-admin@bar", expected struct KubeConfig at line 1 column 1
Error loading kubeconfig /Users/furkan.turkal/.kube/kubens: Is a directory (os error 21)
Error loading kubeconfig /Users/furkan.turkal/.kube/cache: Is a directory (os error 21)
Selection cancelled.
What about a --quiet flag? 🤔
Since upgrading to kubie 0.15.2, my kubie ctx command fails with:
/var/folders/g3/t0x1k_1d3_g9jqwr4nt67fkw0000gn/T/.tmpsP4WBJ/.zshrc:source:28: too many open files: /etc/zprofile
/var/folders/g3/t0x1k_1d3_g9jqwr4nt67fkw0000gn/T/.tmpsP4WBJ/.zshrc:source:38: too many open files: /etc/zshrc
/var/folders/g3/t0x1k_1d3_g9jqwr4nt67fkw0000gn/T/.tmpsP4WBJ/.zshrc:source:44: too many open files: /var/folders/g3/t0x1k_1d3_g9jqwr4nt67fkw0000gn/T/.tmpsP4WBJ/.zshrc
I tried resetting my entire zsh env: same issue.
I tried on an entirely new Mac: same issue.
Downgrading to 0.15.1 solves the problem.
I'm using macOS 12.0.1 with the default shell.
My Fedora 33 system defaults to /tmp being a tmpfs filesystem, causing:
$ kubie ns default
Error: Failed to write state to '/home/parse/.local/share/kubie/state.json'
Caused by:
0: failed to persist temporary file: Invalid cross-device link (os error 18)
1: Invalid cross-device link (os error 18)
Curiously, I still get an error with TMPDIR=<foo> kubie ...:
$ TMPDIR=/home/parse/tmp kubie ns default
Error: failed to persist temporary file: Invalid cross-device link (os error 18)
Caused by:
Invalid cross-device link (os error 18)
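The usual fix for this error is to create the temporary file in the same directory as the destination, so the final rename never crosses a filesystem boundary. A hedged std-only sketch (not kubie's actual code; names are illustrative):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write state to a sibling temp path on the same filesystem as `dest`, then
// rename over it. Because the rename stays on one filesystem, it cannot fail
// with EXDEV (os error 18), and TMPDIR/tmpfs no longer matters. It is also
// atomic on POSIX, so readers never observe a half-written file.
fn persist_state(dest: &Path, contents: &str) -> std::io::Result<()> {
    let tmp = dest.with_extension("tmp"); // sibling path, same filesystem
    let mut file = fs::File::create(&tmp)?;
    file.write_all(contents.as_bytes())?;
    file.sync_all()?; // flush before the rename makes it visible
    fs::rename(&tmp, dest)
}
```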
Hi guys!
I am using kubie 0.13.3, but I cannot seem to get it working. When I try to switch to a new context, it fails with "no such file or directory".
$ kubie --version
kubie 0.13.3
$ kubie ctx
Error: No such file or directory (os error 2)
I went back to 0.13.2 and it works properly. If you let me know how to increase the log level, I can share more information.
A subshell PS1 config option would allow an extra layer of configuration.