kubie's People

Contributors

ahermant, arseeeen, brandon-at-wrk, c10l, dependabot[bot], edude03, eitanya, enricomarchesin, evilhamsterman, felixonmars, fsommar, gsstuart, hall, herbygillot, hiro-o918, idebeijer, kfkonrad, krdln, matthewhembree, mattrcampbell, miuler, orhun, pxtxs, sbstp, terinjokes, trecloux, tybrown, zebradil

kubie's Issues

Feature request: delete context

Delete a context. There should also be a garbage collection cycle after deleting the context: if there are no more contexts referencing a user or cluster, they should be removed, and if the file containing the context is now empty, it should be deleted as well.
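A minimal sketch of such a garbage-collection pass, with illustrative types and names rather than kubie's actual internals:

```rust
use std::collections::HashSet;

// Illustrative sketch, not kubie's real data model: given the
// contexts that remain after a delete, find the users and clusters
// that nothing references anymore.
struct Context {
    user: String,
    cluster: String,
}

fn orphans(
    contexts: &[Context],
    users: &[String],
    clusters: &[String],
) -> (Vec<String>, Vec<String>) {
    let live_users: HashSet<&str> = contexts.iter().map(|c| c.user.as_str()).collect();
    let live_clusters: HashSet<&str> = contexts.iter().map(|c| c.cluster.as_str()).collect();
    let dead_users = users
        .iter()
        .filter(|u| !live_users.contains(u.as_str()))
        .cloned()
        .collect();
    let dead_clusters = clusters
        .iter()
        .filter(|c| !live_clusters.contains(c.as_str()))
        .cloned()
        .collect();
    (dead_users, dead_clusters)
}
```

Anything returned here could then be pruned from the kubeconfig, and a file left with no contexts, users, or clusters could be removed.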

Feature request: zsh autocomplete

I think bash autocomplete is great, but for someone like me using zsh it's difficult to remember yet another set of commands and flags :)
Do you think zsh autocompletion can be added as well?

Command prompt is broken when using zim on zsh

I've recently migrated my zsh config to use zim. Unfortunately, this broke kubie because it expects to find certain files in known locations:

$ kubie ctx
/Users/c10l/.zshrc:source:102: no such file or directory: /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmpc9w8CC/.zim/zimfw.zsh
/Users/c10l/.zshrc:source:104: no such file or directory: /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmpc9w8CC/.zim/init.zsh
/Users/c10l/.zlogin:source:7: no such file or directory: /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmpc9w8CC/.zim/login_init.zsh
c10l@laptop ~ %

I managed to work around those errors by setting ZIM_HOME=/Users/c10l/.zim but then something else broke as it tries to find .zimrc under $HOME, which is overridden at that point to the kubie temp dir:

$ kubie ctx
_zimfw_source_zimrc:source:3: no such file or directory: /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmp7RzL0l/.zimrc
Failed to source /var/folders/2l/z3r2cq7j3hd79c4091tq42hr0000gn/T/.tmp7RzL0l/.zimrc
/Users/c10l/.zim/modules/ohmyzsh/plugins/kubectl/kubectl.plugin.zsh:5: read-only file system: /kubectl_completion

Suggestion: allow me to give a list of files and directories to copy to the temp dir when switching contexts. This way I can specify .zimrc and .zim/ which should make this work.
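For illustration, such a setting might look like this; the `copy_to_session` key is invented here and does not exist in kubie today:

```yaml
# Hypothetical addition to kubie's settings file -- `copy_to_session`
# is an invented option name, shown only to illustrate the suggestion.
shell:
  copy_to_session:
    - ~/.zimrc
    - ~/.zim/
```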

Feature request: remember last namespace in context

Since we never mutate the actual k8s configs, the last namespace is not remembered when you leave a context and enter it again. Add a state file which remembers the last namespace for each existing context.

[Help needed] Weird prompt on OS X

I've received reports of the bash prompt being weird inside a kubie shell on OS X; I'm currently investigating the issue.

Instead of using the default user PS1 it uses a weird PS1 that shows bash's version.

It looks like: [context|namespace] bash-3.2$

I'd like to collect more info about this because I don't own a Mac. If anyone experiences this problem, please let me know.

kubie: error while loading shared libraries: libssl.so.1.0.0

kubie version: 0.8.1
os: debian 10
openssl:

$ dpkg -l | grep libssl
ii  libssl1.1:amd64                      1.1.1d-0+deb10u2                   amd64        Secure Sockets Layer toolkit - shared libraries

error:

$ kubie --help
kubie: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

OIDC support

When using OIDC with refresh tokens, the session must be restarted to pick up the new token (if the token was refreshed outside of the session by another client).

To reproduce: use a k8s cluster via OIDC both in a kubie session and in a regular shell. The kubie session will break once the token is refreshed.

Feature Suggestion: Print context names when using `kubie exec` with wildcard

Hey @sbstp ... What are your thoughts on this idea? I'm happy to submit a PR if you'll accept, figured we could chat about it here first.

Currently we've got 10 contexts at work, and I've been trying to get used to using kubie exec * to run commands across all of them. However, I'm running into trouble understanding which resources are coming from which cluster/context.

Example from kubie 0.13.4:

$ kubie exec *-admin default kubectl get scaledobject -o wide --all-namespaces
NAMESPACE    NAME            SCALETARGETKIND      SCALETARGETNAME     MIN   MAX   TRIGGERS        AUTHENTICATION   READY   ACTIVE   AGE
monitoring   sqs-queue-foo   apps/v1.Deployment   foo-app             0     1     aws-sqs-queue                    True    False    9d
No resources found
No resources found
No resources found
No resources found
No resources found
No resources found
NAMESPACE    NAME            SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   TRIGGERS        AUTHENTICATION   READY   ACTIVE   AGE
monitoring   sqs-queue-foo   apps/v1.Deployment   foo-app           0     1     aws-sqs-queue                    True    False    9d
NAMESPACE    NAME            SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   TRIGGERS        AUTHENTICATION   READY   ACTIVE   AGE
monitoring   sqs-queue-foo   apps/v1.Deployment   foo-app           0     1     aws-sqs-queue                    True    False    9d
No resources found

As you see, it's hard to know which resources are from which cluster/context.

I'd like to propose something like:

$ kubie exec *-admin default kubectl get scaledobject -o wide --all-namespaces
CONTEXT => dev-admin
NAMESPACE    NAME            SCALETARGETKIND      SCALETARGETNAME     MIN   MAX   TRIGGERS        AUTHENTICATION   READY   ACTIVE   AGE
monitoring   sqs-queue-foo   apps/v1.Deployment   foo-app             0     1     aws-sqs-queue                    True    False    9d

CONTEXT => mno-dev-admin
No resources found

CONTEXT => mno-prd-admin
No resources found

CONTEXT => mno-stg-admin
No resources found

CONTEXT => pqr-dev-admin
No resources found

CONTEXT => pqr-prd-admin
No resources found

CONTEXT => pqr-stg-admin
No resources found

CONTEXT => prd-admin
NAMESPACE    NAME            SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   TRIGGERS        AUTHENTICATION   READY   ACTIVE   AGE
monitoring   sqs-queue-foo   apps/v1.Deployment   foo-app           0     1     aws-sqs-queue                    True    False    9d

CONTEXT => stg-admin
NAMESPACE    NAME            SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   TRIGGERS        AUTHENTICATION   READY   ACTIVE   AGE
monitoring   sqs-queue-foo   apps/v1.Deployment   foo-app           0     1     aws-sqs-queue                    True    False    9d

CONTEXT => svc-admin
No resources found

I'm not set on the specific format... and I think having a flag option to disable it would be useful (or perhaps the other way around, though I think a better UX would be to print the context info by default). Let me know what you think about this one.

Still loving Kubie, slowly convincing my co-workers that it's awesome too!

Cheers,
Ty

Deleting current namespace will leave context unusable

Running kubie 0.9.0 on archlinux (5.6.10)

Steps to reproduce:

$ kubie ctx test
$ kubie ns newns    # I did this in another shell, but not sure if that matters
$ kubectl delete ns newns

Now one can't switch to that context anymore:

$ kubie ctx test
Error: 'newns' is not a valid namespace for the context
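One possible fix, sketched under the assumption that kubie keeps a remembered namespace per context (the function and the fallback behaviour are invented, not current kubie behaviour):

```rust
// Sketch of a possible fix: fall back to "default" when the
// remembered namespace no longer exists in the cluster, instead of
// refusing to enter the context.
fn choose_namespace(remembered: Option<&str>, existing: &[&str]) -> String {
    match remembered {
        Some(ns) if existing.contains(&ns) => ns.to_string(),
        _ => "default".to_string(),
    }
}
```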

Feature Request: Windows support

kubie uses some *nix specific features like file permissions and signal hooks that don't exist on Windows, and so it currently doesn't compile.

Hopefully those parts can be made optional for certain target configs (e.g. there is no need to set file permissions on Windows, so that code could be put behind a `#[cfg(unix)]` guard) to allow Windows releases to work as well.
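As an illustration of the suggested gating (not kubie's actual code), exec(2) is another Unix-only API that this kind of tool uses and that needs a Windows fallback:

```rust
use std::io;
use std::process::Command;

// Illustrative cfg gating, not kubie's real code: exec(2) exists only
// on Unix, so the Windows build falls back to spawn-and-wait.
#[cfg(unix)]
fn spawn_shell(cmd: &mut Command) -> io::Result<()> {
    use std::os::unix::process::CommandExt;
    Err(cmd.exec()) // exec only returns if it failed
}

#[cfg(not(unix))]
fn spawn_shell(cmd: &mut Command) -> io::Result<()> {
    cmd.status().map(|_| ()) // no exec(2) on Windows
}
```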

Kubie not using KUBECONFIG environment variable

I exported KUBECONFIG as an env var, but kubie doesn't use it. It keeps saying Error: Not in a kubie shell! when I run kubie ns. Here are my steps.

  1. export KUBECONFIG=$HOME/.kube/kind
  2. kubie ns

Error above thrown.

Using multiple sessions in parallel can corrupt kubie state.json

Hi there;

New kubie user here. I use iTerm2 with multiple session panes in the same window. Kubie has been a lifesaver, allowing me to have a pane for each cluster I'm working on and to work on them in parallel using iTerm2's Broadcast Input feature. However, I ran into an issue today: running kubie ns foo in parallel caused all of the kubie sessions to update the state.json file at the same time, which corrupted it.

The error I received later on when I tried to enter a context in a new terminal window was not super helpful in pointing me to what the underlying problem was either:

$ kubie ctx foo
Error: trailing characters at line 1 column 102

I was able to find the state.json file and simply delete it, which allowed kubie to continue working for me again.

I think kubie probably needs a file lock or some other mechanism to prevent parallel invocations from corrupting the state.json file. Kubie could also provide a little more context around that particular error, perhaps even printing the path of the state.json file in which it encountered the error. (I had originally looked for it at ~/.local/share/kubie/state.json, but on my Mac the state.json was actually at ~/Library/Application Support/kubie/state.json.)
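As a rough sketch of the locking idea (an assumption about a possible fix, not kubie's design), a lock file created with create_new gives mutual exclusion using only the standard library:

```rust
use std::fs::{self, OpenOptions};
use std::path::Path;

// Crude advisory lock sketch: create_new fails with AlreadyExists if
// another process holds the lock, so only one writer can touch
// state.json at a time.
fn try_lock(lock_path: &Path) -> std::io::Result<fs::File> {
    OpenOptions::new().write(true).create_new(true).open(lock_path)
}

fn unlock(lock_path: &Path) -> std::io::Result<()> {
    fs::remove_file(lock_path)
}
```

A real implementation would probably prefer flock-style advisory locking so that a crashed process cannot leave a stale lock file behind.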

I've been looking for an excuse to learn Rust, so this may be it... but figured I'd start by raising an issue in case someone else is able/wants to look at it before I can get around to learning enough to figure it out and submit a PR.

Cheers,
Ty

No homebrew aarch64 (Apple Silicon) bottle

Trying out our company's k8s on my M1 at the moment and would love to use kubie. Is there already a release binary for aarch64? And can the homebrew bottle be updated so it's easy to install?

Feature request: allow impersonation with temporary context

Normally when I use kubectl to interact with shared/production clusters, I use credentials for a user with read-only permissions. But the user has the ability to impersonate a user with admin-level permissions. So if I need to make a change, I can run something like:

kubectl --as=admin --as-group=system:masters delete pod ${pod_name}

It's kind of like using sudo to execute commands with higher permissions.

It would be useful for me to be able to run something like this:

kubie ctx --as=admin --as-group=system:masters --as-reason="deleting stuck pod"

That would give me a shell where the temporary kubeconfig file would have:

users:
- name: user-as-admin
  user:
    # Normal credentials here, could be client certs, exec, whatever
    username: foo
    password: password123
    # Impersonation information below
    as: admin
    as-groups:
      - system:masters
    as-user-extra:
      reason:
        - deleting stuck pod

Then I could run a few commands with those credentials, and then exit out of the shell.

This would also be useful for CLI tools like istioctl that do not support passing --as as a command-line flag.

Errors after modifying KUBECONFIG variable

Hi @sbstp ,
If the KUBECONFIG variable becomes invalid, kubie stops functioning.

[13:08:30 ~ 0] >> pierre $ kubie ctx api-prod 
[api-prod|monitoring] [13:08:32 ~ 0] >> pierre $ export KUBECONFIG="/dev/null"
Error: EOF while parsing a value
Error: EOF while parsing a value
[|] [13:08:45 ~ 0] >> pierre $ 
Error: EOF while parsing a value
Error: EOF while parsing a value
[|] [13:08:48 ~ 0] >> pierre $ kubie ctx a
Error: Could not find context a
Error: EOF while parsing a value
Error: EOF while parsing a value
[|] [13:08:52 ~ 1] >> pierre $ kubie ctx api-prod
Error: EOF while parsing a value
Error: EOF while parsing a value
[|] [13:08:57 ~ 0] >> pierre $ 

We could use a KUBIE_KUBECONFIG variable to avoid this.
What do you think?

Thanks,
Pierre

Fish support

Would it be possible to use Fish as a subshell?

kubie ctx needs cluster admin role

Kubie is unable to enter a context which is scoped to only one namespace:

kubie ctx gitlab@cluster
Error: Error calling kubectl: Error from server (Forbidden): namespaces is forbidden: User "system:serviceaccount:klum:gitlab" cannot list resource "namespaces" in API group "" at the cluster scope

where gitlab@cluster has role like this:

rules:
- apiGroups:
  - ""
  - extensions
  - apps
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - batch
  resources:
  - jobs
  - cronjobs
  verbs:
  - '*'

Feature Request: Consider maintaining history from sub-shells in the main shell

$ wc -l .zsh_history
   1 .zsh_history

$ tail -6 .zsh_history
: 1626419269:0;echo "Hello, World!"
$ kubie ctx foo
$ kubectl get pods -l app=bar -o wide
$ exit  #  exit sub-shell
$ tail -6 .zsh_history
: 1626419269:0;echo "Hello, World!"
: 1626418505:0;kubie ctx
: 1626419248:0;tail -6 .zsh_history

It would be a great feature if we could pass command history back to the main shell, so that history is maintained across sub-shells. Currently, we do not have access to any commands run in the sub-shell after we exit it.

Homebrew support

Hello,

Would it be possible to add a homebrew formula when you release versions of this application? That would make installing and updating easy for users.

Support Alpine (musl) or link statically

I tried to run kubie (v0.11.1) on my Alpine-based jumphost and it doesn't work.
On my Fedora and Ubuntu based workstations it works well.

Maybe it would be a good idea to link everything statically, since kubectl, helm and so on are all statically linked.
Here's the output of ldd:

$ ldd bin/kubie-linux-amd64 
	/lib64/ld-linux-x86-64.so.2 (0x7f7695b61000)
	libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x7f7695657000)
	libc.so.6 => /lib64/ld-linux-x86-64.so.2 (0x7f7695b61000)
	libpthread.so.0 => /lib64/ld-linux-x86-64.so.2 (0x7f7695b61000)
	libdl.so.2 => /lib64/ld-linux-x86-64.so.2 (0x7f7695b61000)
Error relocating bin/kubie-linux-amd64: __res_init: symbol not found
Error relocating bin/kubie-linux-amd64: __register_atfork: symbol not found

spawn last shell as default command

kubie ctx spawns a shell with the current context (or prompts with fzf if you have multiple) and uses the last used namespace automatically.

Plain kubie shows the help.

My typical development flow is using kubie ctx <context> (the context is the same most of the time) to get into a shell. So I made myself a short alias for kubie ctx <context> in order to jump into my shell quickly, but then I need another alias for kubie, or even kubie ns to switch my namespace.

So my idea: what about having plain kubie spawn a shell with the last context and namespace you used, or show the help if there is no last context? That would mean you only have to alias kubie to kci, for example.
Run kci -> get into a shell. Run kci ns <whatever> -> new namespace. Run kci ctx <whatever> -> get into another context, and also get into that context the next time you just run kci.

What do you think about that idea?

Feature request: change kubectl binary version per cluster

Hello, I have a few k8s clusters: old ones on version 1.13.12 and new ones on 1.16.13. When I run kubectl describe ingress with kubectl 1.16 against a 1.13 cluster, I get this error:
Error from server (NotFound): the server could not find the requested resource

So I must use an older kubectl version, for example 1.14. I would like kubie to set my kubectl binary per cluster; kubie could read the kubectl binary path from the cluster config file.

I have the same problem with helm: on old clusters I use helm2, and on new ones helm3.

My config proposal is to add aliases configuration:

apiVersion: v1
kind: Config
clusters:
  - cluster:
        api-version: v1
        certificate-authority-data: 
        server: 
        aliases: 
            kubectl: "~/bin/kubectl-1.14"
            helm:    "~/bin/helm2.14.1"
    name: 
contexts:
  - context:
        cluster: 
        user: 
        namespace: 
    name: 
current-context: 
users:
  - name: 
    user:
        client-certificate-data: 
        client-key-data: 

ZSH support

Support spawning ZSH sub shells.

As mentioned in the recent blog post, zsh support is planned for the future.

libssl.so.1.0.0: cannot open shared object file

I get the error below when using Kubie on Arch Linux. I found that Arch has libssl.so.1.1 but not libssl.so.1.0.0, and there seems to be no way to install that version. Can this be fixed? I really like Kubie. Thanks.

kubie: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such file or directory

Error when not using inline certs/key

Looks like kubie doesn't handle copying the certs/keys to the /tmp folder. The error below indicates that. Please see if you can fix this. Thanks.

Great work btw.

[minikube|default] ➜  .kube git:(master) k get ns
Error in configuration:
* unable to read client-cert /tmp/minikube-client.crt for minikube due to open /tmp/minikube-client.crt: no such file or directory
* unable to read client-key /tmp/minikube-client.key for minikube due to open /tmp/minikube-client.key: no such file or directory
* unable to read certificate-authority /tmp/minikube-ca.crt for minikube due to open /tmp/minikube-ca.crt: no such file or directory
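A sketch of the missing step, assuming the fix is to copy file-based credentials into the session directory and point the temp kubeconfig at the copies (the function name and flow are illustrative, not kubie's code):

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Illustrative sketch: copy a credential file (client cert, key, or
// CA cert) into the session's temp directory and return the new path
// so the generated kubeconfig can reference the copy.
fn stage_credential(src: &Path, session_dir: &Path) -> std::io::Result<PathBuf> {
    let dest = session_dir.join(src.file_name().unwrap());
    fs::copy(src, &dest)?;
    Ok(dest)
}
```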

kubie cannot find user referenced in separate kubeconfig file

In my environment, I have two files in my KUBECONFIG env var (essentially one local copy and another in version control, organisation-wide) separated by a colon. Both kubie and kubectl can work with contexts that live entirely in a single file; however, kubie (0.15.1) throws an error if a context references a user that is defined in the other file. In comparison, kubectl (v1.21-beta.0) handles this situation correctly.

Eval instead of new shell?

First off, may I just say: wow, your tool is awesome and you've put a lot of attention into it. I've already switched over to using it and am trying to get my colleagues on it too. Being able to work with multiple clusters is a must for my daily work, so this has been amazing.

I was curious as to why you create a new shell whenever you switch to a new K8s cluster. I admit I haven't had time to fully dig through your code, but it seems like you use a combination of new kubeconfig files stored in temp directories and environment variables to allow multiple clusters in different terminal windows, which is a really cool idea.

However, I was wondering why you open a new shell. For me the additional shell adds a significant "switch" time to the process, and I'm not clear why, as just starting a new zsh doesn't take as long. What if, instead of opening a new shell with environment variables, kubie copied and created the new kubeconfig files as it does now, but output those environment variables when it exits, and an eval call executed the resulting commands?

For example something like this:

function kubie() {
  eval $(cd /opt/tools/kubie ; kubie $@)
}

When your app exits it would output export KUBECONFIG=/tmp/path/file, the eval would set the values within the current shell, and any subsequent cluster switch would delete the previous files and create new ones.

I realize the eval-in-a-function trick is a bit weird, but I've used it for some small tools with great success. I was just curious whether there were specific reasons why you wanted to spawn a new shell.

Thanks again great tool, just curious to understand better. :)

Feature request: fish autocompletion

I think bash autocomplete is great, but for someone like me using fish it's difficult to remember yet another set of commands and flags :)
Do you think fish autocompletion can be added as well?

[Security] Temporary KUBECONFIG world readable

Please force permission on temporary KUBECONFIG to be not group/world readable, this should not follow system/user umask.

$ umask
0022
$ kubie ctx default
$ echo $KUBECONFIG
/tmp/kubie-config1585937564032423433.yaml
$ ls -la /tmp/kubie-config1585937564032423433.yaml
-rw-r--r-- 1 user group 1109 Apr  3 20:12 /tmp/kubie-config1585937564032423433.yaml
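A sketch of the requested behaviour (the function name is invented): chmod the file explicitly after creation, since fs::set_permissions applies the exact mode, whereas the mode passed to open(2) is filtered through the process umask:

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::path::Path;

// Sketch: force the temp kubeconfig to 0600 regardless of the user's
// umask. set_permissions (chmod) applies the mode exactly, unlike the
// mode argument to open(2), which the umask can weaken.
fn make_private(path: &Path) -> std::io::Result<()> {
    fs::set_permissions(path, fs::Permissions::from_mode(0o600))
}
```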

Feature Request: Use Github releases

Hi there,

I was interested in building a plugin for asdf, but I noticed that the rust binaries are rather awkwardly kept in the git repository itself.

It would be much nicer if instead they were deployed to the github releases section. This would allow me to use Github's API to produce a list of version numbers and to install binaries.

Allow switching namespaces without trying to list them

I am connecting to some clusters where I don't have the right to switch namespaces, so I expect kubie ns by itself to not work. However, I think that kubie ns NAMESPACE should work without trying to list the namespaces. Right now it fails with:

Error: Error calling kubectl: Error from server (Forbidden): namespaces is forbidden: User "SOME-USER" cannot list resource "namespaces" in API group "" at the cluster scope
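The change being asked for amounts to something like this (illustrative, not kubie's code): only the interactive picker actually needs the namespace list.

```rust
// Illustrative sketch: an explicit `kubie ns NAME` can trust the
// user's choice, so only the interactive picker needs the
// cluster-scope `list namespaces` call that fails with Forbidden.
fn needs_namespace_list(requested: Option<&str>) -> bool {
    requested.is_none()
}
```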

Feature Request: consider add --quiet mode

i.e. if we do not want to print these out:

$ kubie ctx

Error loading kubeconfig /Users/furkan.turkal/.kube/http-cache: Is a directory (os error 21)
Error loading kubeconfig /Users/furkan.turkal/auth/foo/bar/MY_CONFIG: duplicate field `clusters` at line 1 column 11
Error loading kubeconfig /Users/furkan.turkal/.kube/kubectx: invalid type: string "kubernetes-admin@bar", expected struct KubeConfig at line 1 column 1
Error loading kubeconfig /Users/furkan.turkal/.kube/kubens: Is a directory (os error 21)
Error loading kubeconfig /Users/furkan.turkal/.kube/cache: Is a directory (os error 21)
Selection cancelled.

What about:

--quiet? 🤔

shell spawning issue with kubie 0.15.2

Since upgrade to kubie 0.15.2 my kubie ctx command failed with

/var/folders/g3/t0x1k_1d3_g9jqwr4nt67fkw0000gn/T/.tmpsP4WBJ/.zshrc:source:28: too many open files: /etc/zprofile
/var/folders/g3/t0x1k_1d3_g9jqwr4nt67fkw0000gn/T/.tmpsP4WBJ/.zshrc:source:38: too many open files: /etc/zshrc
/var/folders/g3/t0x1k_1d3_g9jqwr4nt67fkw0000gn/T/.tmpsP4WBJ/.zshrc:source:44: too many open files: /var/folders/g3/t0x1k_1d3_g9jqwr4nt67fkw0000gn/T/.tmpsP4WBJ/.zshrc

I tried resetting my entire zsh env: same issue.
I tried on an entirely new Mac: same issue.

Downgrading to 0.15.1 solves the problem.

I'm using macOS 12.0.1 with the default shell.

Invalid cross-device link with `kubie ns`

My Fedora 33 system defaults to /tmp being a tmpfs filesystem, causing:

$ kubie ns default
Error: Failed to write state to '/home/parse/.local/share/kubie/state.json'

Caused by:
    0: failed to persist temporary file: Invalid cross-device link (os error 18)
    1: Invalid cross-device link (os error 18)

Curiously, I still get an error with TMPDIR=<foo> kubie ...:

$ TMPDIR=/home/parse/tmp kubie ns default
Error: failed to persist temporary file: Invalid cross-device link (os error 18)

Caused by:
    Invalid cross-device link (os error 18)
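The usual fix for this EXDEV error is to stage the temporary file in the destination's own directory, so the final rename never crosses a filesystem boundary; a sketch with invented names, not kubie's actual code:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Sketch of the common EXDEV fix: write to a temp file in the
// destination's directory, then rename. rename(2) within one
// filesystem is atomic and cannot fail with cross-device link,
// unlike persisting a file staged on a /tmp tmpfs.
fn save_atomic(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let dir = path.parent().unwrap_or(Path::new("."));
    let tmp = dir.join(".state.json.tmp");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?;
    fs::rename(&tmp, path)
}
```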

Error: No such file or directory

Hi guys!
I am using kubie 0.13.3, but I cannot seem to get it working. When I try to switch to a new context, it fails with "no such file or directory".

$ kubie --version
kubie 0.13.3
$ kubie ctx
Error: No such file or directory (os error 2)

I went back to 0.13.2 and it works properly. If you let me know how to increase the log level, I can share more information.
