vmware-archive / kubecfg
A tool for managing complex enterprise Kubernetes environments as code.
License: Apache License 2.0
For instance, if you have a massive config file in a config map and you change one line in that file, diff shows the whole file as changed.
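A line-wise diff over the string value would localize the change. This is a naive positional sketch (a hypothetical helper, not kubecfg's implementation; a real diff would use an LCS algorithm):

```go
package main

import (
	"fmt"
	"strings"
)

// diffLines compares two multi-line strings position by position and
// returns only the lines that differ, prefixed with -/+, instead of
// reporting the whole value as changed.
func diffLines(a, b string) []string {
	al, bl := strings.Split(a, "\n"), strings.Split(b, "\n")
	n := len(al)
	if len(bl) > n {
		n = len(bl)
	}
	var out []string
	for i := 0; i < n; i++ {
		var av, bv string
		if i < len(al) {
			av = al[i]
		}
		if i < len(bl) {
			bv = bl[i]
		}
		if av != bv {
			out = append(out, "-"+av, "+"+bv)
		}
	}
	return out
}

func main() {
	cfg1 := "scrape_interval: 15s\nevaluation_interval: 15s"
	cfg2 := "scrape_interval: 30s\nevaluation_interval: 15s"
	// Only the changed line is reported, not the whole file.
	fmt.Println(diffLines(cfg1, cfg2))
}
```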
I've been trying to learn jsonnet via its tutorials. Should the bar_menu.6.jsonnet example from the jsonnet repo work?
$ pwd
/tmp/jsonnet/examples
$ git remote -v
origin https://github.com/google/jsonnet (fetch)
origin https://github.com/google/jsonnet (push)
$ git status
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working tree clean
$ kubecfg show -o yaml -f bar_menu.6.jsonnet
ERROR Error reading bar_menu.6.jsonnet: Unexpected object structure: string
$ jsonnet bar_menu.6.jsonnet
{
"cocktails": {
"Cosmopolitan": {
"garnish": "Lime Wheel",
"ingredients": [
{
"kind": "Vodka",
"qty": 1.5
},
{
"kind": "Cointreau",
"qty": 1
},
{
"kind": "Cranberry Juice",
"qty": 2
},
{
"kind": "Lime Juice",
"qty": 1
}
],
"served": "Straight Up"
},
"Manhattan": {
"garnish": "Maraschino Cherry",
"ingredients": [
{
"kind": "Rye",
"qty": 2.5
},
{
"kind": "Sweet Red Vermouth",
"qty": 1
},
{
"kind": "Angostura",
"qty": "dash"
}
],
"served": "Straight Up"
},
"Vodka Martini": {
"garnish": "Olive",
"ingredients": [
{
"kind": "Vodka",
"qty": 2
},
{
"kind": "Dry White Vermouth",
"qty": 1
}
],
"served": "Straight Up"
}
}
}
$ kubecfg version
kubecfg version: (dev build)
jsonnet version: v0.9.4
client-go version: v1.6.8-beta.0+$Format:%h$
Currently the -n/--namespace option seems to be ignored in the use cases I have.
My main use-case is that I want to be able to:
$ kubecfg diff dev/myproject.jsonnet
$ kubecfg update dev/myproject.jsonnet
to view changes, then update my dev env - which works fine as the jsonnet doesn't define namespace explicitly so it's using my own from the current context. But I'd then like to:
$ kubecfg diff ci/myproject.jsonnet -n ci-namespace
to verify the changes when using the ci jsonnet. But this just diffs against my own namespace (similarly, kubecfg update -n nonexistentnamespace dev/myproject.jsonnet doesn't error, but deploys into my namespace).
Currently the only way I can achieve this is by explicitly including the namespace metadata in the jsonnet, then call with:
$ kubecfg diff ci/myproject.jsonnet -V namespace=ci-namespace
or similar, but this then requires the same ext var for anyone deploying on dev.
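The behaviour I'd expect from -n could be summarized as a precedence rule: explicit flag, then the object's own metadata, then the current context. A sketch (my assumption about the desired semantics, not kubecfg's actual logic):

```go
package main

import "fmt"

// resolveNamespace sketches the precedence one would expect from -n:
// an explicit flag wins, then the object's own metadata.namespace,
// then the namespace from the current kubeconfig context.
func resolveNamespace(flagNS, objectNS, contextNS string) string {
	if flagNS != "" {
		return flagNS
	}
	if objectNS != "" {
		return objectNS
	}
	return contextNS
}

func main() {
	fmt.Println(resolveNamespace("ci-namespace", "", "dev")) // ci-namespace
	fmt.Println(resolveNamespace("", "", "dev"))             // dev
}
```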
With the example guestbook in the components/ directory, doing something like:
../kubecfg update dev -J lib -J ../lib
results in an error that we can't find the example.libsonnet file. Reversing the order of the -J flags seems to cause kubecfg.libsonnet to go missing.
Link your org's and project's "Website" settings to http://ksonnet.heptio.com/ to make things more discoverable.
I want to build kubecfg locally on macOS. Running make gives me the binary; however, it doesn't work.
$ make
go build -ldflags="-X main.version=dev-2017-07-05T11:19:56+0700 " .
$ ./kubecfg -h
Killed: 9
This makes it clearer how to use it.
kubecfg show (and jsonnet eval) is basically the only tool for exploring evaluation and debugging errors. Currently kubecfg show is awful for this. It would help to:
- allow kubecfg show to work on jsonnet sub-expressions, not just whole API objects
- improve the "Unexpected object structure" error message
- accept expressions on the kubecfg show command line (like jsonnet eval)

Right now an update shows:
$ ./kubecfg update examples/guestbook.jsonnet
I0627 10:08:39.214140 7694 update.go:57] Updating Service/redis-slave
I0627 10:08:39.277885 7694 update.go:70] Creating non-existent Service/redis-slave
I0627 10:08:39.296423 7694 update.go:57] Updating Service/frontend
...
We probably need to clean up the output a bit and make it more user-friendly.
This involves:
- updating the kubecfg README
- linking kubecfg to the ksonnet site (possibly moving it to ksonnet.io)

These were on my back burner, because I thought I didn't need them, but I ran into a situation today that required them. This issue is mostly for my benefit (I'll try to whip one out this weekend).
Should delete the objects created from a naive update --create.
While we may not propose this to the k8s incubator, it would not hurt to use the basic skeleton. This means including a CONTRIBUTING.md, a ROADMAP, a code of conduct, etc., as described in:
https://github.com/kubernetes/community/blob/master/incubator.md
The behavior of the ksonnet delete command is specified in the ksonnet.next design doc from August 2017.
Bringing the command to specification implies the following work items:
The behavior of the ksonnet diff command is specified in the ksonnet.next design doc from August 2017.
Bringing the command to specification implies the following work items:
$ kubecfg version
kubecfg version: (dev build)
jsonnet version: v0.9.4
client-go version: v1.6.8-beta.0+$Format:%h$
Cobra seems like it makes this easy.
This is probably user error, but...
With the following simple testcase:
local k = import "ksonnet.beta.1/k.libsonnet";
k.core.v1.service.default("frontend") +
k.core.v1.service.mixin.spec.selector({name: "frontend"})
I get this error:
$ kubecfg -J production/ksonnet-lib show ./test.jsonnet
Error: Error reading ./test.jsonnet: RUNTIME ERROR: Field does not exist: selector
object <mixin>
production/ksonnet-lib/ksonnet.beta.1/core.v1.libsonnet:3379:31-44 object <anonymous>
During manifestation
However, it works with plain jsonnet:
$ jsonnet -J production/ksonnet-lib ./test.jsonnet
{
"apiVersion": "v1",
"kind": "Service",
"metadata": {
"name": "frontend",
"namespace": "default"
},
"spec": {
"selector": {
"name": "frontend"
}
},
"status": { }
}
The problem seems to stem from the fact that service objects created with default() have empty spec objects, and the +: operation expects the field to be there? If I change the input to be:
local k = import "ksonnet.beta.1/k.libsonnet";
k.core.v1.service.default("frontend") +
{
spec+: k.core.v1.serviceSpec.default() +
k.core.v1.serviceSpec.selector({name: "frontend"})
}
it works. Should the default() function for service be calling the default() function for serviceSpec, instead of initialising it to an empty object?
The ksonnet init command is specified in the ksonnet.next design doc from August 2017.
Implementing this specification implies the following work items:
- Get ksonnet-lib's ksonnet-gen package ready for consumption as a library. ksonnet-gen was written to be a command-line tool rather than a library, so in several places it calls (e.g.) log.Fatal where kubecfg might not expect. We should remove these and return errors on failure instead.
- ksonnet-gen will add the SHA of the git revision that the Kubernetes codebase was at when we build from a swagger.json file. In this setting, this is not necessary, and most OpenAPI files won't be in a git repository anyway.
- govendor'ing a dependency on ksonnet-lib's ksonnet-gen package. (govendor allows you to vendor only the parts of the library you depend on.)
- Adding init to cmd/. This involves:
  - honoring the init flag, if present
  - generating the ksonnet-lib code for the default environment, inside the vendor/ directory

Running diff often results in a bunch of system fields being reported, like this:
This is useful sometimes, but often you want to be able to quickly assess the changes to the fields the user has control over. It seems like it might be sensible to default to not reporting system fields, and also have a flag like --include-system-fields or something to specifically opt into the behavior.
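Stripping server-populated fields before diffing could look like this (a minimal sketch; the field list is illustrative and the helper name is hypothetical, not kubecfg's code):

```go
package main

import "fmt"

// stripSystemFields removes server-populated metadata that users rarely
// control, plus the status block, so a diff only shows user-owned fields.
// The field list here is illustrative, not exhaustive.
func stripSystemFields(obj map[string]interface{}) {
	if meta, ok := obj["metadata"].(map[string]interface{}); ok {
		for _, f := range []string{"resourceVersion", "uid", "creationTimestamp", "selfLink", "generation"} {
			delete(meta, f)
		}
	}
	delete(obj, "status")
}

func main() {
	obj := map[string]interface{}{
		"kind": "Service",
		"metadata": map[string]interface{}{
			"name":            "frontend",
			"uid":             "5a271989-902e-11e7",
			"resourceVersion": "3471",
		},
		"status": map[string]interface{}{"loadBalancer": map[string]interface{}{}},
	}
	stripSystemFields(obj)
	fmt.Println(obj) // only kind and metadata.name survive
}
```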
Now:
- env commands.
- spec.json files.

Future:
- spec.json from KUBECONFIG.

Some of the more speculative questions:
- Should apply default to using the cluster in kubeconfig?
- What should we do when we apply <env-name> and env-name doesn't have a URL associated with it? (Seems like we should error out and suggest they run some command to "adopt" the current context in the currently-active kubeconfig file. Also, whenever we run apply, we should probably say exactly what we're deploying to and where we got the data.)

Subset the objects that are output based on an (optional) label selector argument.
The ksonnet prototype command is specified in the ksonnet.next design doc from August 2017.
Implementing this specification implies the following work items:
- ksonnet prototype use. This involves implementing the use command.
- ksonnet prototype describe. This roughly involves:
- ksonnet prototype search.

Build/test/release successfully on Windows.
I've published my brew formulae for kubecfg and ksonnetlib: https://github.com/GauntletWizard/homebrew-kubecfg
Install instructions would be:
brew tap GauntletWizard/kubecfg # or ksonnet/kubecfg
brew install kubecfg ksonnetlib
Add the following to your .bash_profile:
export KUBECFG_JPATH="/usr/local/opt/ksonnetlib/share/ksonnet-lib"
Feel free to fork the repo and update it. Updating for a new release should be as simple as updating the tag and revision SHA in the appropriate formulae.
i.e. exit code 1 when a diff is found, 0 when there is no diff.
Desired workflow:
- kubecfg update (should create new objects)
- kubecfg update (should remove "stale" objects)

Suggested implementation:
One possible optimisation is to only look in namespaces that were mentioned in the update, but this is insufficient in the general case (it will leak namespaces that are removed from the config).
It must have a flag to disable it. The current proposal is to enable it by default.
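The core of the stale-object computation is a set difference between what is live and what the config declares. A sketch under that assumption (hypothetical helper, not the proposed implementation):

```go
package main

import "fmt"

// staleObjects returns the objects that exist in the cluster (live) but
// are no longer present in the config (desired) -- the candidates a
// garbage-collecting update would delete.
func staleObjects(live, desired []string) []string {
	want := map[string]bool{}
	for _, d := range desired {
		want[d] = true
	}
	var stale []string
	for _, l := range live {
		if !want[l] {
			stale = append(stale, l)
		}
	}
	return stale
}

func main() {
	live := []string{"Service/frontend", "Service/redis-slave", "Deployment/old"}
	desired := []string{"Service/frontend", "Service/redis-slave"}
	fmt.Println(staleObjects(live, desired)) // [Deployment/old]
}
```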
$ echo $GOPATH
/Users/tuna/workspace/gocode/ksonnet
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/ksonnet/kubecfg
$ export KUBECFG_JPATH=/Users/tuna/workspace/gocode/ksonnet/src/github.com/ksonnet/ksonnet-lib
$ kubecfg show -o yaml kubeless.jsonnet
Killed: 9
What did I do wrong?
It is ASL-licensed, but none of the Go source code has the ASL v2 license header. Bottom line: the license is not properly applied. Technically we need to add it to every file.
Implement a diff subcommand that shows the differences between what exists on the server and what exists in config. Output is intended to be human- (not machine-) readable.
I'm using GKE, and as such my k8s credentials are like this:
- name: foo
user:
auth-provider:
config:
access-token: REDACTED
cmd-args: config config-helper --format=json
cmd-path: /usr/local/bin/gcloud
expiry: 2017-06-29T16:17:45Z
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
name: gcp
When I run kubecfg update ... I get the error: Get https://xxx.xxx.xxx.xxx/apis/apps/v1beta1: error executing access token command "/usr/local/bin/gcloud ": exit status 2. If I run kubectl get pods it works, and subsequent kubecfg update commands work, until the token expires I guess.
kubecfg check should perform as many validations as we can without sending the objects to the API server.
Specifically:
Note: unlike kubectl, the swagger spec should (optionally?) be readable from static files on disk (i.e. with no access to the API server at runtime).
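The cheapest client-side checks are the fields every Kubernetes object must carry. A minimal sketch (assumed validations only; a real check would also validate against the swagger schema):

```go
package main

import "fmt"

// checkObject performs the simplest client-side validations possible:
// the identifying fields every Kubernetes object needs, checkable with
// no API server access at all.
func checkObject(obj map[string]interface{}) []string {
	var errs []string
	if s, _ := obj["apiVersion"].(string); s == "" {
		errs = append(errs, "missing apiVersion")
	}
	if s, _ := obj["kind"].(string); s == "" {
		errs = append(errs, "missing kind")
	}
	meta, _ := obj["metadata"].(map[string]interface{})
	if name, _ := meta["name"].(string); name == "" {
		errs = append(errs, "missing metadata.name")
	}
	return errs
}

func main() {
	// An object missing apiVersion and metadata.name fails two checks.
	fmt.Println(checkObject(map[string]interface{}{"kind": "Service"}))
}
```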
Include a (statically compiled) macOS binary in the standard release process (i.e. via travis-ci).
With the jsonnet below for a simple ClusterRole:
$ cat kubecfg-clusterrole-foo.jsonnet
local k = import "ksonnet.beta.1/k.libsonnet";
local objectMeta = k.core.v1.objectMeta;
local controller_roles = [{
apiGroups: ['core'],
resources: ['pods'],
verbs: ['list'],
}];
local clusterRole(name, rules) = {
apiVersion: "rbac.authorization.k8s.io/v1beta1",
kind: "ClusterRole",
metadata: objectMeta.name(name),
rules: rules,
};
local controllerClusterRole = clusterRole("foo", controller_roles);
{
controllerClusterRole: controllerClusterRole,
}
doing update+delete fails with:
$ kubecfg -v=1 update kubecfg-clusterrole-foo.jsonnet
INFO Updating ClusterRole/foo
INFO Creating non-existent ClusterRole/foo
$ kubecfg -v=1 delete kubecfg-clusterrole-foo.jsonnet
INFO Deleting ClusterRole/foo
FATAL Error deleting ClusterRole/foo: "" is invalid: []: Invalid value: v1.DeleteOptions{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, GracePeriodSeconds:(*int64)(nil), Preconditions:(*v1.Preconditions)(nil), OrphanDependents:(*bool)(0xc426cd5aee), PropagationPolicy:(*v1.DeletionPropagation)(0xc426cd5af0)}: OrphanDependents and DeletionPropagation cannot be both set
# while kubectl CLI works ok:
$ kubectl delete clusterrole foo
clusterrole "foo" deleted
FYI relevant versions (kubecfg built from HEAD as of 2017-07-25 19:30 UTC):
$ kubecfg version
kubecfg version: (dev build)
jsonnet version: v0.9.4
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"dirty", BuildDate:"2017-06-22T04:31:09Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl api-versions
apps/v1beta1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2alpha1
batch/v1
batch/v2alpha1
certificates.k8s.io/v1beta1
extensions/v1beta1
k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1
settings.k8s.io/v1alpha1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
FYI -v=3 run:
$ kubecfg -v=3 delete kubecfg-clusterrole-foo.jsonnet
DEBUG Adding jsonnet search path /home/jjo/work/src/github.com/ksonnet/ksonnet-lib
DEBUG jsonnet result is: {
"controllerClusterRole": {
"apiVersion": "rbac.authorization.k8s.io/v1beta1",
"kind": "ClusterRole",
"metadata": {
"name": "foo"
},
"rules": [
{
"apiGroups": [
"core"
],
"resources": [
"pods"
],
"verbs": [
"list"
]
}
]
}
}
INFO Deleting ClusterRole/foo
DEBUG Chose API 'clusterroles' for rbac.authorization.k8s.io/v1beta1, Kind=ClusterRole
DEBUG Fetching client for &APIResource{Name:clusterroles,Namespaced:false,Kind:ClusterRole,Verbs:[create delete deletecollection get list patch update watch],ShortNames:[],} namespace=default
FATAL Error deleting ClusterRole/foo: "" is invalid: []: Invalid value: v1.DeleteOptions{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, GracePeriodSeconds:(*int64)(nil), Preconditions:(*v1.Preconditions)(nil), OrphanDependents:(*bool)(0xc42d1b13ee), PropagationPolicy:(*v1.DeletionPropagation)(0xc42d1b13f0)}: OrphanDependents and DeletionPropagation cannot be both set
When an object without a metadata.name appears, this branch is evaluated, causing the following error:
DEBUG Fetching %!(EXTRA string=Namespace/)
DEBUG Chose API 'namespaces' for /v1, Kind=Namespace
DEBUG Fetching client for &APIResource{Name:namespaces,Namespaced:false,Kind:Namespace,} namespace=elasticsearch
FATAL Error fetching Namespace/: resource name may not be empty
I'm not entirely sure what object caused this. It seems like undesirable behavior, but if it isn't, then we should consider improving the error message here.
kubecfg should set the kubernetes.io/change-cause annotation to something meaningful on updated objects, so tools like kubectl rollout history provide useful output.
NB: this might need to be combined with some form of "no-op" detection, to prevent spamming the history with updates where the only change is the change-cause annotation itself.
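The no-op detection could work by ignoring the change-cause annotation itself when comparing live and updated objects. A sketch under that assumption (hypothetical helpers, and the spec is simplified to a string; not kubecfg's code):

```go
package main

import (
	"fmt"
	"reflect"
)

const causeKey = "kubernetes.io/change-cause"

// withoutCause copies an annotation map, dropping the change-cause key,
// so the annotation itself is ignored when deciding whether anything
// really changed.
func withoutCause(ann map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range ann {
		if k != causeKey {
			out[k] = v
		}
	}
	return out
}

// setChangeCause records the change-cause annotation only when the
// object differs from the live one once the annotation itself is
// ignored -- the "no-op" detection suggested above.
func setChangeCause(liveSpec, newSpec string, liveAnn, newAnn map[string]string, cause string) bool {
	if liveSpec == newSpec && reflect.DeepEqual(withoutCause(liveAnn), withoutCause(newAnn)) {
		return false // nothing really changed; leave the history alone
	}
	newAnn[causeKey] = cause
	return true
}

func main() {
	ann := map[string]string{}
	changed := setChangeCause("replicas: 2", "replicas: 3", map[string]string{}, ann, "kubecfg update guestbook.jsonnet")
	fmt.Println(changed, ann[causeKey])
}
```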
The community seems to agree that kubecfg is confusing next to kubeconfig files. It is worth considering what the name should be. The design doc calls the tool ksonnet, which does seem to minimize confusion.
The travis/osx auto tests fail about 4 out of 5 times with a fatal: morestack on g0 stack trace from the jsonnet_cgo code.
I would love to be able to use jsonnet and kubecfg to build config maps that themselves contain YAML, which I would like to be native jsonnet objects that I can manipulate. A concrete example is embedding Prometheus config in a config map, and manipulating that config based on the environment.
To this end, I wrote a function that serialises an object to YAML (see https://github.com/tomwilkie/kubecfg/tree/unparse-yaml), but I hit the restriction "Native extensions can only take primitives." (https://github.com/google/jsonnet/blob/master/core/vm.cpp#L2157).
Is this something worth pursuing? If so, I guess I'd need to make a PR against jsonnet to add a std.manifestYaml(v) function like the existing manifest functions?
@hausdorff and I have noticed that this test fails occasionally.
go test -ldflags="-X main.version=build-270159619 -linkmode external -extldflags=-static" ./cmd/... ./utils/... ./pkg/... ./metadata/...
--- FAIL: TestShow (0.24s)
show_test.go:34: Running args [show -J ../testdata/lib -o yaml -f ../testdata/test.jsonnet -V aVar=aVal -V anVar --ext-str-file filevar=../testdata/extvar.file]
show_test.go:90: output is ---
apiVersion: v0alpha1
array:
- one
- 2
- - 3
bool: true
filevar: |
foo
kind: TestObject
nil: null
notAVal: aVal
notAnotherVal: aVal2
number: 42
object:
foo: bar
string: bar
show_test.go:34: Running args [show -J ../testdata/lib -o json -f ../testdata/test.jsonnet -V aVar=aVal -V anVar --ext-str-file filevar=../testdata/extvar.file]
show_test.go:90: output is ---
{
"apiVersion": "v0alpha1",
"array": [
"one",
2,
[
3
]
],
"bool": true,
"filevar": "foo\n",
"kind": "TestObject",
"nil": null,
"notAVal": "aVal",
"notAnotherVal": "aVal2",
"number": 42,
"object": {
"foo": "bar"
},
"string": "bar"
}
---
{
"apiVersion": "v0alpha1",
"array": [
"one",
2,
[
3
]
],
"bool": true,
"filevar": "foo\n",
"kind": "TestObject",
"nil": null,
"notAVal": "aVal",
"notAnotherVal": "aVal2",
"number": 42,
"object": {
"foo": "bar"
},
"string": "bar"
}
show_test.go:93: error parsing output of format json: invalid character '-' in numeric literal
FAIL
FAIL github.com/ksonnet/kubecfg/cmd 0.350s
ok github.com/ksonnet/kubecfg/utils 0.726s
ok github.com/ksonnet/kubecfg/pkg/kubecfg 0.115s
ok github.com/ksonnet/kubecfg/metadata 0.018s
make: *** [gotest] Error 1
Sample file:
local k = import "ksonnet.beta.1/k.libsonnet";
local encode64(data) = {[x]: std.base64(data[x]) for x in std.objectFields(data)};
local secret = k.core.v1.secret {
// data(data):: {data+: data},
};
encode64({foo: "barbaz"})
Expected output:
$ jsonnet secret.jsonnet
{
"foo": "YmFyYmF6"
}
Actual output:
$ kubecfg show secret.jsonnet
Error: Error reading secret.jsonnet: Unexpected object structure: string
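For reference, the encode64 helper in the sample file corresponds to this Go sketch (base64-encoding every value of a string map, as a Secret's data field requires):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// encode64 mirrors the jsonnet helper above: base64-encode every value
// in a string map.
func encode64(data map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range data {
		out[k] = base64.StdEncoding.EncodeToString([]byte(v))
	}
	return out
}

func main() {
	// Matches the expected jsonnet output above.
	fmt.Println(encode64(map[string]string{"foo": "barbaz"})) // map[foo:YmFyYmF6]
}
```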
Following up from PR #100, which shows a failure of the integration tests, I believe the -n flag is not being handled correctly. I was able to reproduce locally. Observe the following example, where I attempt to use update -n to put a configMap with the name testcm into a namespace, but when I run get we see that it shows up in the namespace specified by the current context:
$ ../kubecfg update -vv -n updatewtlhj -f components/kubecfg-cm.yaml
INFO Updating configmaps testcf
DEBUG Chose API 'configmaps' for /v1, Kind=ConfigMap
DEBUG Fetching client for &APIResource{Name:configmaps,Namespaced:true,Kind:ConfigMap,Verbs:[create delete deletecollection get list patch update watch],ShortNames:[cm],} namespace=kubecfgtest
DEBUG Patch(testcf) returned (&{map[kind:ConfigMap apiVersion:v1 metadata:map[resourceVersion:3471 creationTimestamp:2017-09-02T22:30:41Z name:testcf namespace:kubecfgtest selfLink:/api/v1/namespaces/kubecfgtest/configmaps/testcf uid:5a271989-902e-11e7-875d-06e36f41fda2]]}, <nil>)
DEBUG Updated object: {"apiVersion":"v1","
A: data":{},"kind":"ConfigMap","metadata":{"name":"testcf"}}
B: kind":"ConfigMap","metadata":{"creationTimestamp":"2017-09-02T22:30:41Z","name":"testcf","namespace":"kubecfgtest","resourceVersion":"3471","selfLink":"/api/v1/namespaces/kubecfgtest/configmaps/testcf","uid":"5a271989-902e-11e7-875d-06e36f41fda2"}}
$ k get cm -n updatewtlhj
No resources found.
$ k get cm --all-namespaces
NAMESPACE NAME DATA AGE
kube-public cluster-info 2 1h
kube-system calico-config 3 1h
kube-system extension-apiserver-authentication 6 1h
kube-system kube-proxy 1 1h
kubecfgtest nons 0 44m
kubecfgtest testcf 0 1h
kubecfgtest testcm 1 44m
When I use kubecfg diff, I tend to see a wall of red representing default values that are missing from my local config but returned by the API server. To work around this in weaveworks/kubediff, I only check that the local config is a subset of the config returned by the API server, i.e. we ignore fields which don't exist in the local config.
Is this something we can do for kubecfg?
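The kubediff-style subset check described above can be sketched as a recursive comparison that ignores server-only fields (a minimal sketch; the helper name is hypothetical and scalar comparison is simplified):

```go
package main

import "fmt"

// isSubset reports whether every field in local also appears (with an
// equal value) in remote, recursing into nested objects. Fields that
// only exist on the server (defaults, status) are ignored, which is the
// kubediff-style behaviour described above.
func isSubset(local, remote interface{}) bool {
	lm, lok := local.(map[string]interface{})
	rm, rok := remote.(map[string]interface{})
	if lok && rok {
		for k, lv := range lm {
			rv, present := rm[k]
			if !present || !isSubset(lv, rv) {
				return false
			}
		}
		return true
	}
	// Simplified scalar comparison for the sketch.
	return fmt.Sprint(local) == fmt.Sprint(remote)
}

func main() {
	local := map[string]interface{}{"spec": map[string]interface{}{"replicas": 2}}
	remote := map[string]interface{}{
		"spec":   map[string]interface{}{"replicas": 2, "strategy": "RollingUpdate"},
		"status": map[string]interface{}{"readyReplicas": 2},
	}
	fmt.Println(isSubset(local, remote)) // true: server-only fields are ignored
}
```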
Currently we rope the default client-go flags into RootCmd. This causes every subcommand to always have all the default flags, even if they're not useful or necessary for that command.
We should instead set these flags only on the commands that require them.
The behavior of the ksonnet update command is specified in the ksonnet.next design doc from August 2017.
Bringing the command to specification implies the following work items:
- apply is clearer, so it seems like this is probably the leading contender.
- Rename update to apply. Deprecate update. Keep it for two release cycles.
- Point update to:
  - the components/ directory, if the --env flag is passed.
  - the given file, if the --file flag is passed.

Currently, when I try to run this in a pod I get:
invalid configuration: default cluster has no server defined
Add a function that is able to look up docker registries to map name:tag to name:sha at jsonnet-eval time.
It needs to be a jsonnet function so it can still be used in "hidden" places like JSON-serialised annotations.
It must be possible to disable it (reducing it to the identity function) via a command-line flag, for cases where we don't want to do network lookups.
At this stage I think this should be disabled by default.
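The disable-flag behaviour reduces the resolver to the identity function. A sketch of that shape (the lookup function is a stand-in; a real implementation would query the registry API):

```go
package main

import "fmt"

// resolveImage maps name:tag to name@sha256:... via a lookup function.
// When lookups are disabled it reduces to the identity function, as
// proposed above.
func resolveImage(image string, disabled bool, lookup func(string) string) string {
	if disabled {
		return image
	}
	return lookup(image)
}

func main() {
	// Stand-in for a registry query; a real lookup would hit the
	// registry's manifest API for the digest.
	fakeLookup := func(img string) string { return "busybox@sha256:deadbeef" }
	fmt.Println(resolveImage("busybox:1.36", true, fakeLookup))  // busybox:1.36
	fmt.Println(resolveImage("busybox:1.36", false, fakeLookup)) // busybox@sha256:deadbeef
}
```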