
kapp's People

Contributors

100mik, aaronshurley, acosta11, cari-lynn, cppforlife, danielhelfand, dennisdenuto, dependabot[bot], dsyer, elco, everettraven, ewrenn8, gcheadle-vmware, joaopapereira, joe-kimmel-vmw, khan-ajamal, kumaritanushree, lirsacc, marekm71, neil-hickey, praveenrewar, rcmadhankumar, renuy, rohitagg2020, scothis, sethiyash, shajithamohammed, tmshort, vicmarbev, yujunz

kapp's Issues

cmd line flag configuration through files

Excited about this tool!

We've realized the need for different application flags across different pipelines. Our CI tool would need to distinguish where it runs and adjust the command-line options accordingly.
Since this project leverages Cobra, it seems natural to add Viper to support a .kappconfig file for per-repo configuration; a rough sketch is below.
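For illustration only, such a file might look like this (the file name, keys, and values are purely hypothetical; they mirror existing kapp flags like -n, --diff-changes, and --tty but are not an implemented feature):

# .kappconfig (hypothetical per-repo defaults)
namespace: staging
diff-changes: true
tty: true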
Thanks!

kapp error when svcat server is not available

I'm getting this error when using kapp on a cluster with a bad svcat instance:

Error: unable to retrieve the complete list of server APIs: servicecatalog.k8s.io/v1beta1: the server is currently unable to handle the request

Maybe `kapp` should ignore this? 

kapp service-account null pointer exception

When installing kapp for the very first time and running it against docker-for-mac, I get the following error:

 kapp service-account
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1bcdc87]

goroutine 1 [running]:
github.com/k14s/kapp/pkg/kapp/cmd.NewKappCmd.func1.1(0xc000429400, 0x2991748, 0x0, 0x0, 0x0, 0x0)
	/Users/pivotal/workspace/ytt-go/src/github.com/k14s/kapp/pkg/kapp/cmd/kapp.go:120 +0xc7
github.com/k14s/kapp/vendor/github.com/cppforlife/cobrautil.WrapRunEForCmd.func1.1(0xc000429400, 0x2991748, 0x0, 0x0, 0x0, 0x0)
	/Users/pivotal/workspace/ytt-go/src/github.com/k14s/kapp/vendor/github.com/cppforlife/cobrautil/misc.go:25 +0xaf
github.com/k14s/kapp/vendor/github.com/cppforlife/cobrautil.WrapRunEForCmd.func1.1(0xc000429400, 0x2991748, 0x0, 0x0, 0x0, 0x0)
	/Users/pivotal/workspace/ytt-go/src/github.com/k14s/kapp/vendor/github.com/cppforlife/cobrautil/misc.go:25 +0xaf
github.com/k14s/kapp/vendor/github.com/spf13/cobra.(*Command).execute(0xc000429400, 0x2991748, 0x0, 0x0, 0xc000429400, 0x2991748)
	/Users/pivotal/workspace/ytt-go/src/github.com/k14s/kapp/vendor/github.com/spf13/cobra/command.go:762 +0x460
github.com/k14s/kapp/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0003fc500, 0xc0003fc500, 0xc000336e40, 0x2917b20)
	/Users/pivotal/workspace/ytt-go/src/github.com/k14s/kapp/vendor/github.com/spf13/cobra/command.go:852 +0x2ea
github.com/k14s/kapp/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/Users/pivotal/workspace/ytt-go/src/github.com/k14s/kapp/vendor/github.com/spf13/cobra/command.go:800
main.main()
	/Users/pivotal/workspace/ytt-go/src/github.com/k14s/kapp/cmd/kapp/kapp.go:26 +0x16f

I haven't created any service accounts, so I assume it's a simple nil pointer dereference.

error if resources do not have apiversion, kind, and metadata.name

Reported by @hfjn on Slack. It appears that if a resource doesn't have a name, it confuses the dynamic package from the kubernetes/client-go library, which ultimately results in the server returning a list of resources instead of a single resource.

panic: interface conversion: runtime.Object is *unstructured.UnstructuredList, not *unstructured.Unstructured

goroutine 397 [running]:
github.com/k14s/kapp/vendor/k8s.io/client-go/dynamic.(*dynamicResourceClient).Get(0xc000162910, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/vendor/k8s.io/client-go/dynamic/simple.go:197 +0x925
github.com/k14s/kapp/pkg/kapp/resources.Resources.Exists.func2(0x1, 0xc0006d3ac0, 0x100e108)
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/pkg/kapp/resources/resources.go:278 +0xf7
github.com/k14s/kapp/pkg/kapp/util.Retry.func1(0xc0006d3ae0, 0x14a8e4d, 0x1dd22a0)
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/pkg/kapp/util/retry.go:17 +0x38
github.com/k14s/kapp/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0004049a0, 0xc000991b60, 0xc0004049a0, 0xc00093cb74)
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:245 +0x2b
github.com/k14s/kapp/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x3b9aca00, 0xdf8475800, 0xc0006d3b60, 0xc000516f00, 0x8)
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:241 +0x4d
github.com/k14s/kapp/pkg/kapp/util.Retry(0x3b9aca00, 0xdf8475800, 0xc0006d3cd8, 0xc0003e7860, 0x207b860)
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/pkg/kapp/util/retry.go:16 +0x8b
github.com/k14s/kapp/pkg/kapp/resources.Resources.Exists(0x208a0c0, 0xc0003c48d0, 0x20df800, 0xc0003e7860, 0x207b860, 0xc000164570, 0x20d5dc0, 0xc000426540, 0xc000517e00, 0x8, ...)
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/pkg/kapp/resources/resources.go:277 +0x217
github.com/k14s/kapp/pkg/kapp/resources.IdentifiedResources.Exists(...)
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/pkg/kapp/resources/identified_resources.go:91
github.com/k14s/kapp/pkg/kapp/resources.(*LabeledResources).findNonLabeledResources.func1(0xc0000c7650, 0xc00019a000, 0x20d5dc0, 0xc000426540, 0xc00001a420, 0xc00001a480)
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/pkg/kapp/resources/labeled_resources.go:163 +0xe1
created by github.com/k14s/kapp/pkg/kapp/resources.(*LabeledResources).findNonLabeledResources
       /Users/argonaut/workspace/k14s-go/src/github.com/k14s/kapp/pkg/kapp/resources/labeled_resources.go:160 +0x4e8
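For reference, a minimal sketch of the identifying fields every input resource needs so the dynamic client can address it individually (the ConfigMap here is just an illustrative example):

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config   # omitting metadata.name is what leads to the list-vs-single-object panic above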

condense waiting output during kapp deploy/delete

@DTTerastar suggested in Slack that deploy can output quite a bit of log output, so it would be nice to condense it in some way. Possible options:

  • reduce amount of duplicated updates (if nothing has changed, do not print anything)
  • ability to change check interval (every 15s, instead of every 1s)

"Ownership errors" messages with label values are confusing

I just had an ownership problem caused by trying to deploy the same resources under two different app names. I received the following error:
Error: Ownership errors:
Resource 'service/some-service (v1) namespace: demo' is associated with a different label value 'kapp.k14s.io/app=1566811645817612751'.
It would be more intuitive to give the app name in the error message instead of the label value.

kapp doesn't support colon separated KUBECONFIG

I use kubectl (and kubecfg) with a KUBECONFIG that looks like this:

/Users/lbriggs/.kube/config:/Users/lbriggs/.kube/config.d/dev-config.yml:/Users/lbriggs/.kube/config.d/prod-config.yml

When using kapp I get an error:

Error: Building Kubernetes config: stat /Users/lbriggs/.kube/config:/Users/lbriggs/.kube/config.d/dev-config.yml:/Users/lbriggs/.kube/config.d/prod-config.yml: no such file or directory
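A workaround sketch until colon-separated lists are supported (assumes kubectl is available; the merged file path, app name, and manifest path are illustrative):

# Merge the colon-separated kubeconfigs into a single file that kapp can read
kubectl config view --flatten > /tmp/merged-kubeconfig
KUBECONFIG=/tmp/merged-kubeconfig kapp deploy -a my-app -f manifests/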

Resources under the single app limit issue

I use kapp to manage a set of deployments. Under a single application, I deploy about 230 (generated) resources. At some point deployments started taking a long time, and after adding more resources they stopped working at all. kapp hangs for a couple of minutes, then I get the following error when running it locally:

Error: Listing schema.GroupVersionResource{Group:"apps", Version:"v1", Resource:"replicasets"}, namespaced: true: Stream error http2.StreamError{StreamID:0x10b, Code:0x2, Cause:error(nil)} when reading response body, may be caused by closed connection. Please retry.

When I run it closer to the target Kubernetes cluster (in the same AWS network), it works better (fails less often).

$ kapp version
Client Version: 0.11.0

Succeeded

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-19T13:57:45Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-eks-5047ed", GitCommit:"5047edce664593832e9b889e447ac75ab104f527", GitTreeState:"clean", BuildDate:"2019-08-21T22:32:40Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Allow (re)deployment of resources that already have a kapp label

After messing up one of my kapp deployments, the kapp ConfigMap was lost. Redeploying the app generates a new label (id), but kapp refuses to deploy because the existing resources already have a conflicting label.

After a small chat with @cppforlife, he showed me:

  • kapp inspect -a label:kapp.k14s.io/app=... to see what resources are associated
  • kapp delete -a label:kapp.k14s.io/app=... to delete all of them

It would be great to also have an overwrite option for kapp deploy that ignores the existing labels and just takes over the resources that are already there.

automatically show logs for pods that are not progressing during deploy

example:

knative v0.4.0 requires a particular Kubernetes version, so some of its pods will log the error below and fail

pod/controller-f84547646-gqs69 logs:

...
{"level":"fatal","ts":"2019-03-19T00:25:40.784Z","logger":"controller","caller":"controller/main.go:118","msg":"Version check failed: kubernetes version \"v1.10.11-gke.1\" is not compatible, need at least \"v1.11.0\"","stacktrace":"main.main\n\t/go/src/github.com/knative/serving/cmd/controller/main.go:118\nruntime.main\n\t/root/sdk/go1.12rc1/src/runtime/proc.go:200"}

progress log:

5:26:10PM: waiting on update deployment/controller (apps/v1) namespace: knative-serving
5:26:11PM:  L waiting on replicaset/controller-f84547646 (extensions/v1beta1) namespace: knative-serving ... done
5:26:11PM:  L waiting on replicaset/controller-cf84485f7 (extensions/v1beta1) namespace: knative-serving ... done
5:26:11PM:  L waiting on replicaset/controller-5b569b8fb9 (extensions/v1beta1) namespace: knative-serving ... done
5:26:11PM:  L waiting on pod/controller-f84547646-gqs69 (v1) namespace: knative-serving ... in progress: Condition Ready is not True (False)

pod/controller-f84547646-gqs69 status:

status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2019-03-19T00:19:53Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2019-03-19T00:22:54Z
    message: 'containers with unready status: [controller]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2019-03-19T00:19:53Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://b87c0cf02ff2b04f9a3dac7c74c0ef04ae5fa8eddcec3c709209e5215dd20037
    image: sha256:ce2543ec455281f6904cbbf3304f840a619c0adaefc4bfa81964a1fadbfcaf13
    imageID: docker-pullable://gcr.io/knative-releases/github.com/knative/serving/cmd/controller@sha256:c9987a7b21400bd3afa01eeed54f390a7d1d24b25d87219803cdfb294a969379
    lastState:
      terminated:
        containerID: docker://b87c0cf02ff2b04f9a3dac7c74c0ef04ae5fa8eddcec3c709209e5215dd20037
        exitCode: 1
        finishedAt: 2019-03-19T00:25:40Z
        reason: Error
        startedAt: 2019-03-19T00:25:40Z
    name: controller
    ready: false
    restartCount: 6
    state:
      waiting:
        message: Back-off 5m0s restarting failed container=controller pod=controller-f84547646-gqs69_knative-serving(b7dc08dc-49dc-11e9-9fb1-42010a8001ed)
        reason: CrashLoopBackOff
  hostIP: 10.128.0.17
  phase: Running
  podIP: 10.20.3.180
  qosClass: Burstable
  startTime: 2019-03-19T00:19:53Z

gke auth config-helper is hitting against too many open files

@andyshinn reported the following error during kapp deploy:

Error: Listing schema.GroupVersionResource{Group:"certificates.k8s.io", Version:"v1beta1", Resource:"certificatesigningrequests"}, namespaced: false: Get https://x.x.x.x/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?labelSelector=kapp.k14s.io%2Fapp%3D1565298894854176000: error executing access token command "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=pipe: too many open files output= stderr=

It appears that the number of allowed open file descriptors is exceeded due to the GKE auth helper.
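A possible mitigation while the underlying cause is addressed (the limit value is illustrative; raising it only affects the current shell):

# Check and raise the per-process open-file limit before running kapp deploy
ulimit -n          # show the current limit
ulimit -n 4096     # raise it for this shell session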

kapp deploy --yes w/ tty redirected turns off progress notification by default

When kapp is run in a CI environment (TTY redirected), it turns off all status output about the convergence of the deployment by default. --tty fixes this, but the output without --tty is misleading to the point that a user may not realize the deploy was started and failed.

The only output without --tty is:
Error: timed out waiting after 15m0s
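For CI pipelines, explicitly keeping the progress output works today (the app name and manifest path are illustrative):

# Keep progress output even when stdout is redirected, e.g. in CI
kapp deploy -a my-app -f manifests/ --yes --tty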

Kapp does not replace name of versioned secret volume

test_secret.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    kapp.k14s.io/versioned: ""
  name: gcp-cmap
  labels:
    app: myapp
data:
  test: data

---

apiVersion: v1
kind: Secret
metadata:
  annotations:
    kapp.k14s.io/versioned: ""
  name: gcp-service-accounts
  labels:
    app: myapp
type: Opaque
data:
  gcs_service_account: asdf
  bq_service_account: asdf
  logging_service_account: asdf

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: myapp
    component: web
spec:
  selector:
    matchLabels:
      app: myapp
      component: web
  template:
    metadata:
      labels:
        app: myapp
        component: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: gcp-service-accounts
              mountPath: /usr/local/gcp
      volumes:
        - name: gcp-service-accounts
          secret:
            secretName: gcp-service-accounts
            defaultMode: 0755
        - name: gcp-config
          configMap:
            name: gcp-cmap
> kapp deploy --diff-run -f test_secret.yaml --app myapp -c
--- create configmap/gcp-cmap-ver-1 (v1) namespace: default
      0 + apiVersion: v1
      1 + data:
      2 +   test: data
      3 + kind: ConfigMap
      4 + metadata:
      5 +   annotations:
      6 +     kapp.k14s.io/versioned: ""
      7 +   labels:
      8 +     app: myapp
      9 +     kapp.k14s.io/app: "1574681798557524929"
     10 +     kapp.k14s.io/association: v1.0d929b65bf846560f516aeb5a01fb592
     11 +   name: gcp-cmap-ver-1
     12 +   namespace: default
     13 + 
--- create secret/gcp-service-accounts-ver-1 (v1) namespace: default
      0 + apiVersion: v1
      1 + data:
      2 +   bq_service_account: asdf
      3 +   gcs_service_account: asdf
      4 +   logging_service_account: asdf
      5 + kind: Secret
      6 + metadata:
      7 +   annotations:
      8 +     kapp.k14s.io/versioned: ""
      9 +   labels:
     10 +     app: myapp
     11 +     kapp.k14s.io/app: "1574681798557524929"
     12 +     kapp.k14s.io/association: v1.7014a30afc9f703a9c41617b494b848b
     13 +   name: gcp-service-accounts-ver-1
     14 +   namespace: default
     15 + type: Opaque
     16 + 
--- create deployment/my-deployment (apps/v1) namespace: default
      0 + apiVersion: apps/v1
      1 + kind: Deployment
      2 + metadata:
      3 +   labels:
      4 +     app: myapp
      5 +     component: web
      6 +     kapp.k14s.io/app: "1574681798557524929"
      7 +     kapp.k14s.io/association: v1.00220b96d5d64fae7870b64e5ccdc062
      8 +   name: my-deployment
      9 +   namespace: default
     10 + spec:
     11 +   selector:
     12 +     matchLabels:
     13 +       app: myapp
     14 +       component: web
     15 +       kapp.k14s.io/app: "1574681798557524929"
     16 +   template:
     17 +     metadata:
     18 +       labels:
     19 +         app: myapp
     20 +         component: web
     21 +         kapp.k14s.io/app: "1574681798557524929"
     22 +         kapp.k14s.io/association: v1.00220b96d5d64fae7870b64e5ccdc062
     23 +     spec:
     24 +       containers:
     25 +       - image: nginx
     26 +         name: web
     27 +         volumeMounts:
     28 +         - mountPath: /usr/local/gcp
     29 +           name: gcp-service-accounts
     30 +       volumes:
     31 +       - name: gcp-service-accounts
     32 +         secret:
     33 +           defaultMode: 493
     34 +           secretName: gcp-service-accounts
     35 +       - configMap:
     36 +           name: gcp-cmap-ver-1
     37 +         name: gcp-config
     38 + 

Changes

Namespace  Name                        Kind        Conds.  Age  Op      Wait to    Rs  Ri  
default    gcp-cmap-ver-1              ConfigMap   -       -    create  reconcile  -   -  
^          gcp-service-accounts-ver-1  Secret      -       -    create  reconcile  -   -  
^          my-deployment               Deployment  -       -    create  reconcile  -   -  

Op:      3 create, 0 delete, 0 update, 0 noop
Wait to: 3 reconcile, 0 delete, 0 noop

Succeeded

> kapp version                                              
Client Version: 0.15.0

Succeeded

When using versioned ConfigMaps in volumes, their references get updated with the current version name. However, when we do the same with Secrets, that is not the case: note that in the diff above the ConfigMap reference was rewritten to gcp-cmap-ver-1, while secretName still points to gcp-service-accounts.

Add rollout success/fail conditions

Please add a way to specify when the rollout should be considered finished.
My deploys have more than 300 pods each, so I don't want to wait for all the pods to be ready in every pipeline execution.

Maybe allow specifying a percentage from 0 to 100, or a more complex condition, via annotations on the resource; see the sketch below.
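A purely hypothetical sketch of what such an annotation could look like (this is not an existing kapp annotation; the name and semantics are invented to illustrate the request):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: big-deployment
  annotations:
    # hypothetical: consider the rollout successful once 80% of replicas are ready
    kapp.k14s.io/wait-minimum-ready-percentage: "80"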

Goreleaser?

Would you accept a PR for using goreleaser to build the binaries?
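A minimal .goreleaser.yml sketch such a PR might start from (the entry point path matches the cmd/kapp package seen in the stack traces above; everything else is an assumption to be adjusted):

# .goreleaser.yml (sketch)
builds:
  - main: ./cmd/kapp
    env:
      - CGO_ENABLED=0
    goos: [linux, darwin, windows]
    goarch: [amd64]
archives:
  - format: binary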

Wait for CRDs existence at runtime

kapp cannot deploy Gatekeeper with a Gatekeeper configuration because the configuration uses a CRD created by the Gatekeeper controller itself.

For instance, in https://github.com/open-policy-agent/gatekeeper#constraint-templates, a template is created. This template is used by the Gatekeeper controller to create the CRD K8sRequiredLabels. This kind can be used by the user to create constraints such as https://github.com/open-policy-agent/gatekeeper#constraints.

So, currently, we first need to deploy the Gatekeeper controller and only then add the configuration custom resource to the app. Otherwise, kapp refuses to deploy the app because the CRD for K8sRequiredLabels is missing.

In this kind of scenario, I think kapp should be able to wait for CRDs at runtime. This could be specified as an annotation in the custom resource metadata; see the sketch below.
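A purely hypothetical sketch of such an annotation (the annotation name is invented to illustrate the idea; the group/kind follow the Gatekeeper docs linked above):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-owner
  annotations:
    # hypothetical: ask kapp to wait until this kind's CRD is registered before applying
    kapp.k14s.io/wait-for-crd: "k8srequiredlabels.constraints.gatekeeper.sh"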

WDYT?

provide a way to group resources so that they are applied in order

Example use case:

  • Deploy a set of resources (e.g. a ConfigMap)
  • Deploy a job and wait for it to complete (e.g. a database migration)
  • Deploy another set of resources (e.g. Service, Ingress, Deployment)
  • Wait for the Deployment to finish
  • Deploy another job and wait for it to finish (e.g. some script the app needs to run after a deployment)
  • Once everything is done, the program ends and my CI/CD pipeline is completed

from @bmaynard (https://kubernetes.slack.com/archives/CH8KCCKA5/p1561584027265500)
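For reference, a sketch of how kapp's change-group / change-rule annotations could express this kind of sequencing (annotation names and semantics as I recall them from kapp's apply-ordering docs, so verify against current documentation; group names are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
  annotations:
    kapp.k14s.io/change-group: "example.org/db-migration"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  annotations:
    # only create/update this Deployment after the db-migration group has been applied and settled
    kapp.k14s.io/change-rule: "upsert after upserting example.org/db-migration"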

provide more intelligent waiting strategy for resources that have associated failed pods

Example: if there are Jobs/CronJobs with failed pods, kapp currently reports that an error has occurred while it waits for the system to converge. This behaviour is useful for Deployments, for example, because pods are actively being cycled, but in other cases it may not be helpful. Decide what to do and come up with a generic way to handle this.

Unable to deploy large CRD due to annotation max size limit (262144 characters)

Issue created after Slack chat: https://kubernetes.slack.com/archives/CH8KCCKA5/p1573575958163800

I'm unable to deploy a certain CRD because of the size of the kapp.k14s.io/original annotation:

Error: Applying update customresourcedefinition/alertmanagers.monitoring.coreos.com (apiextensions.k8s.io/v1beta1) cluster: Saving record of last applied resource: Updating resource customresourcedefinition/alertmanagers.monitoring.coreos.com (apiextensions.k8s.io/v1beta1) cluster: CustomResourceDefinition.apiextensions.k8s.io "alertmanagers.monitoring.coreos.com" is invalid: metadata.annotations: Too long: must have at most 262144 characters (reason: Invalid)

kubectl apply works, seemingly because of the JSON encoding of the annotation (added benefit: it doesn't clutter kubectl describe output):

Annotations:  kapp.k14s.io/identity: v1;/apiextensions.k8s.io/CustomResourceDefinition/alertmanagers.monitoring.coreos.com;apiextensions.k8s.io/v1beta1
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"apiextensions.k8s.io/v1beta1","kind":"CustomResourceDefinition","metadata":{"annotations":{},"creationTimestamp":null,"name...

support applying separately constructed diff (similar to terraform plan/apply)

Similar to terraform plan/apply, it would be super useful if the diff and apply stages could be split into two invocations of the tool. This would allow better integration with systems where user input can't be provided via a TTY (e.g. Buildkite pipelines). Currently the only solution in these cases is to use -y, but at least initially this isn't what I'd like to do when deploying production services.

Support versions field for CRDs

kapp version: 0.12.0

When I recently tried to deploy cert-manager v0.10.0 along with a ClusterIssuer, kapp gave this error: Error: Expected to find kind 'certmanager.k8s.io/v1alpha1/ClusterIssuer', but did not.

With cert-manager v0.9.1 this works - I can deploy a ClusterIssuer alongside cert-manager in one go.

If I deploy cert-manager v0.10.0 and the ClusterIssuer CR independently it works.

Digging into what changed, I discovered that the CustomResourceDefinition for ClusterIssuer changed from using the (deprecated) version field to a versions list with a single entry (see the documentation for version vs versions).
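For context, a minimal sketch of the shape of that change in the CRD (field names per the apiextensions.k8s.io/v1beta1 API; the CRD name matches the ClusterIssuer kind mentioned in the error above):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.certmanager.k8s.io
spec:
  group: certmanager.k8s.io
  names:
    kind: ClusterIssuer
    plural: clusterissuers
  scope: Cluster
  # old (deprecated) form that kapp 0.12.0 understands:
  # version: v1alpha1
  # new form:
  versions:
    - name: v1alpha1
      served: true
      storage: true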

Difficulty running e2e tests

I keep running into a failure when running the end-to-end tests on the develop branch:

==> deploy initial
Running 'kapp deploy -f - -a test-template --diff-changes --tty -n kapp-test --into-ns kapp-test --yes'...
Running 'kapp delete -a test-template -n kapp-test --yes'...
--- FAIL: TestTemplate (610.06s)
    kapp.go:83: Failed to successfully execute 'kapp deploy -f - -a test-template --diff-changes --tty -n kapp-test --into-ns kapp-test --yes': Execution error: stdout: '--- create configmap/config-ver-1 (v1) namespace: kapp-test
              0 + apiVersion: v1
              1 + data:
              2 +   key1: val1
              3 + kind: ConfigMap
              4 + metadata:
              5 +   annotations:
              6 +     kapp.k14s.io/versioned: ""
              7 +   labels:
              8 +     kapp.k14s.io/app: "1574793934192273000"
              9 +     kapp.k14s.io/association: v1.62442fcb57d209abe192460ff4403c7c
             10 +   name: config-ver-1
             11 +   namespace: kapp-test
             12 +
        --- create secret/secret-ver-1 (v1) namespace: kapp-test
              0 + apiVersion: v1
              1 + data:
              2 +   key1: val1
              3 + kind: Secret
              4 + metadata:
              5 +   annotations:
              6 +     kapp.k14s.io/versioned: ""
              7 +   labels:
              8 +     kapp.k14s.io/app: "1574793934192273000"
              9 +     kapp.k14s.io/association: v1.8275953a67aea7cbb147343af1608f5b
             10 +   name: secret-ver-1
             11 +   namespace: kapp-test
             12 +
        --- create deployment/dep (apps/v1) namespace: kapp-test
              0 + apiVersion: apps/v1
              1 + kind: Deployment
              2 + metadata:
              3 +   labels:
              4 +     kapp.k14s.io/app: "1574793934192273000"
              5 +     kapp.k14s.io/association: v1.6f33d83389b69bd6ab35a82aa23e12fc
              6 +   name: dep
              7 +   namespace: kapp-test
              8 + spec:
              9 +   replicas: 1
             10 +   selector:
             11 +     matchLabels:
             12 +       app: dep
             13 +       kapp.k14s.io/app: "1574793934192273000"
             14 +   template:
             15 +     metadata:
             16 +       labels:
             17 +         app: dep
             18 +         kapp.k14s.io/app: "1574793934192273000"
             19 +         kapp.k14s.io/association: v1.6f33d83389b69bd6ab35a82aa23e12fc
             20 +     spec:
             21 +       containers:
             22 +       - args:
             23 +         - -listen=:80
             24 +         - -text=hello
             25 +         envFrom:
             26 +         - configMapRef:
             27 +             name: config-ver-1
             28 +         image: hashicorp/http-echo
             29 +         name: echo
             30 +         ports:
             31 +         - containerPort: 80
             32 +       volumes:
             33 +       - name: vol1
             34 +         secret:
             35 +           secretName: secret
             36 +

        Changes

        Namespace  Name          Kind        Conds.  Age  Op      Wait to    Rs  Ri
        kapp-test  config-ver-1  ConfigMap   -       -    create  reconcile  -   -
        ^          dep           Deployment  -       -    create  reconcile  -   -
        ^          secret-ver-1  Secret      -       -    create  reconcile  -   -

        Op:      3 create, 0 delete, 0 update, 0 noop
        Wait to: 3 reconcile, 0 delete, 0 noop

        10:45:35AM: ---- applying 3 changes [0/3 done] ----
        10:45:35AM: create configmap/config-ver-1 (v1) namespace: kapp-test
        10:45:35AM: create secret/secret-ver-1 (v1) namespace: kapp-test
        10:45:35AM: create deployment/dep (apps/v1) namespace: kapp-test
        10:45:36AM: ---- waiting on 3 changes [0/3 done] ----
        10:45:37AM: ok: reconcile configmap/config-ver-1 (v1) namespace: kapp-test
        10:45:37AM: ok: reconcile secret/secret-ver-1 (v1) namespace: kapp-test
        10:45:38AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:45:38AM:  ^ Waiting for 1 unavailable replicas
        10:45:38AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:45:38AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:45:38AM:     ^ Pending: ContainerCreating
        10:45:38AM: ---- waiting on 1 changes [2/3 done] ----
        10:46:38AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:46:38AM:  ^ Waiting for 1 unavailable replicas
        10:46:38AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:46:38AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:46:38AM:     ^ Pending: ContainerCreating
        10:46:39AM: ---- waiting on 1 changes [2/3 done] ----
        10:47:39AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:47:39AM:  ^ Waiting for 1 unavailable replicas
        10:47:39AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:47:39AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:47:39AM:     ^ Pending: ContainerCreating
        10:47:40AM: ---- waiting on 1 changes [2/3 done] ----
        10:48:39AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:48:39AM:  ^ Waiting for 1 unavailable replicas
        10:48:39AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:48:39AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:48:39AM:     ^ Pending: ContainerCreating
        10:48:40AM: ---- waiting on 1 changes [2/3 done] ----
        10:49:40AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:49:40AM:  ^ Waiting for 1 unavailable replicas
        10:49:40AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:49:40AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:49:40AM:     ^ Pending: ContainerCreating
        10:49:41AM: ---- waiting on 1 changes [2/3 done] ----
        10:50:40AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:50:40AM:  ^ Waiting for 1 unavailable replicas
        10:50:40AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:50:40AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:50:40AM:     ^ Pending: ContainerCreating
        10:50:41AM: ---- waiting on 1 changes [2/3 done] ----
        10:51:41AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:51:41AM:  ^ Waiting for 1 unavailable replicas
        10:51:41AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:51:41AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:51:41AM:     ^ Pending: ContainerCreating
        10:51:42AM: ---- waiting on 1 changes [2/3 done] ----
        10:52:42AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:52:42AM:  ^ Waiting for 1 unavailable replicas
        10:52:42AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:52:42AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:52:42AM:     ^ Pending: ContainerCreating
        10:52:43AM: ---- waiting on 1 changes [2/3 done] ----
        10:53:43AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:53:43AM:  ^ Waiting for 1 unavailable replicas
        10:53:43AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:53:43AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:53:43AM:     ^ Pending: ContainerCreating
        10:53:44AM: ---- waiting on 1 changes [2/3 done] ----
        10:54:44AM: ongoing: reconcile deployment/dep (apps/v1) namespace: kapp-test
        10:54:44AM:  ^ Waiting for 1 unavailable replicas
        10:54:44AM:  L ok: waiting on replicaset/dep-5658f778d6 (apps/v1) namespace: kapp-test
        10:54:44AM:  L ongoing: waiting on pod/dep-5658f778d6-mcksg (v1) namespace: kapp-test
        10:54:44AM:     ^ Pending: ContainerCreating
        10:54:45AM: ---- waiting on 1 changes [2/3 done] ----
        10:55:36AM: fail: reconcile deployment/dep (apps/v1) namespace: kapp-test

        ' stderr: 'Error: waiting on reconcile deployment/dep (apps/v1) namespace: kapp-test: finished unsuccessfully (Deployment is not progressing: ProgressDeadlineExceeded (message: ReplicaSet "dep-5658f778d6" has timed out progressing.))
        ' error: 'exit status 1'

When I describe the pod that is associated with this test I see the following:

Events:
  Type     Reason       Age                From                                                     Message
  ----     ------       ----               ----                                                     -------
  Normal   Scheduled    96s                default-scheduler                                        Successfully assigned kapp-test/dep-dc9bf645-bzrsp to gke-jims-playground-default-pool-477b1489-shzn
  Warning  FailedMount  32s (x8 over 96s)  kubelet, gke-jims-playground-default-pool-477b1489-shzn  MountVolume.SetUp failed for volume "vol1" : secrets "secret" not found

Is there some test setup that is necessary before running the end to end tests?
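For what it's worth, the FailedMount event suggests the Deployment's vol1 volume references a Secret literally named secret that doesn't exist in the kapp-test namespace at that point. A quick way to confirm that this is the blocker (standard kubectl; the data key mirrors the test fixture above, and creating the Secret manually is only a diagnostic, not the documented test setup):

kubectl -n kapp-test describe pod -l app=dep          # should show the FailedMount event
kubectl -n kapp-test create secret generic secret --from-literal=key1=val1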

Possible to use this as a library within a terraform plugin?

Hi,

Thank you for your work. I think the tool makes things very easy. But I was wondering about the following scenario and would like your input.
In Terraform you can have different providers. Right now the official Terraform Kubernetes provider does not cover all resources. You can also use something like this: https://github.com/nabancard/terraform-provider-kubernetes-yaml

That uses raw YAML. If we could use the work from kapp as a library, that might make things easier. Do you think this would be a possibility?

support configuring Service in one app to point to another (currently label scoping injects its own app label)

Darrell Turner via Slack (https://kubernetes.slack.com/archives/CH8KCCKA5/p1559311705003900):

I'm running across an issue w/ kapp that seems to be by design but conflicts with what I want to do. I want to deploy my app as 2 separate kapps: one that does the deployment/release/pods and the other that does the services. I'm doing this so I can have blue/green type deployments w/ multiple apps and just one set of services. Everything works awesome EXCEPT kapp is adding to my service selector something to limit itself to the same app. Since my app is installed under a separate kapp name, my selector can never match. If I manually remove the selector it all works great.

Currently, the default label scoping rules inject kapp's own app label into a Service's spec.selector [1]. For now, let's add an annotation to opt out of label scoping per resource; a sketch of what that could look like follows the footnote below.

[1] https://github.com/k14s/kapp/blob/bcd3c139498a90cfbaa805cbe7cb19349671a1df/pkg/kapp/config/default.go#L150-L153
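A sketch of the proposed opt-out (the annotation name here illustrates the proposal and is not guaranteed to match what kapp eventually shipped):

apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # proposed: do not inject the kapp app label into spec.selector for this resource
    kapp.k14s.io/disable-label-scoping: ""
spec:
  selector:
    app: web
  ports:
    - port: 80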

improve diff view of long string values

It would be pretty awesome to improve how we show diffs for long string values, especially when they are wrapped in the terminal. For example:

--- update configmap/linkerd-config (v1) namespace: linkerd
  ...
  2,  2     global: |
  3     -     {"linkerdNamespace":"linkerd","cniEnabled":false,"version":"stable-2.3.2","identityContext":{"trustDomain":"cluster.local","trustAnchorsPem":"-----BEGIN CERTIFICATE-----\nMIIBgjCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0\neS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMTkwNjE4MTk1NDQzWhcNMjAwNjE3\nMTk1NTAzWjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j\nYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASlDwpe0E2UvdTc+nuXnmUbHTRM\nk8ozZwGDvgYR/WVcnwnFUCHpNdt4lqaker3YZBdqXBYqN8/PW413xjfvHB8yo0Iw\nQDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC\nMA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDRwAwRAIgRY0aVzBiDmscwK6G\n2zVcsFSJD6bhVf/Dqfws/ljCeBECIC8bWlkBRMhebYl4tSPH/IwiNXDXVCRgO/jz\n6iLVXPjv\n-----END CERTIFICATE-----\n","issuanceLifetime":"86400s","clockSkewAllowance":"20s"},"autoInjectContext":null}
      3 +     {"linkerdNamespace":"linkerd","cniEnabled":false,"version":"stable-2.3.2","identityContext":{"trustDomain":"cluster.local","trustAnchorsPem":"-----BEGIN CERTIFICATE-----\nMIIBgjCCASmgAwIBAgIBATAKBggqhkjOPQQDAjApMScwJQYDVQQDEx5pZGVudGl0\neS5saW5rZXJkLmNsdXN0ZXIubG9jYWwwHhcNMTkwNjE4MTk1NDQzWhcNMjAwNjE3\nMTk1NTAzWjApMScwJQYDVQQDEx5pZGVudGl0eS5saW5rZXJkLmNsdXN0ZXIubG9j\nYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASlDwpe0E2UvdTc+nuXnmUbHTRM\nk8ozZwGDvgYR/WVcnwnFUCHpNdt4lqaker3YZBdqXBYqN8/PW413xjfvHB8yo0Iw\nQDAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMC\nMA8GA1UdEwEB/wQFMAMBAf8wCgYIKoZIzj0EAwIDRwAwRAIgRY0aVzBiDmscwK6G\n2zVcsFSJD6bhVf/Dqfws/ljCeBECIC8bWlkBRMhebYl4tSPH/IwiNXDXVCRgO/jz\n6iLVXPjv\n-----END CERTIFICATE-----\n","issuanceLifetime":"86400s","clockSkewAllowance":"20s"},"autoInjectContext":{}}
  4,  4     install: |
  5     -     {"uuid":"a3d53c7f-04cd-4110-85c1-2852e3d795c1","cliVersion":"stable-2.3.2","flags":[]}
      5 +     {"uuid":"a3d53c7f-04cd-4110-85c1-2852e3d795c1","cliVersion":"stable-2.3.2","flags":[{"name":"proxy-auto-inject","value":"true"}]}

kapp should work with kubernetes APIs that are affected by faulty APIService resources

Hi,

I've recently played with cert-manager and messed up my environment a little bit (my namespaces refused to be deleted, so I had to force the deletion). But since then, I couldn't launch kapp anymore:

kapp deploy -n hello -a app-hello -f ./sandbox/config.app.yml                                                                  
Error: unable to retrieve the complete list of server APIs: admission.certmanager.k8s.io/v1beta1: the server is currently unable to handle the request

Deleting the orphaned APIService did the trick: kubectl delete apiservices v1beta1.admission.certmanager.k8s.io
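For reference, a quick way to spot such broken registrations before deleting them (standard kubectl; the grep is just a convenience to hide healthy entries):

# List aggregated APIs and look for entries whose AVAILABLE column is not True
kubectl get apiservice | grep -v True
kubectl delete apiservice v1beta1.admission.certmanager.k8s.io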

Could kapp be more resilient here and start even when some of the server APIs are erroring?

Thanks
--nick

kapp logs -f should not start with all logs by default

Since kapp logs currently shows logs from the beginning, it gets too noisy quickly when there are a lot of logs. We should probably add a flag to start from the beginning and, by default, show only the last 10 lines (maybe only in follow mode?).

kapp delete fails when not allowed to list cluster namespaces

I have an RBAC setup where each user or service account has access only to a single namespace, and is not able to list namespaces at all.

It seems like kapp made most commands work with such clusters as of be807cd, by falling back to the namespace specified by the -n flag if listing namespaces fails. Still, it seems like in some cases (such as when scanning for leftover resources after deleting an app), the list of fallback namespaces is specified as nil and kapp will not fall back:
https://github.com/k14s/kapp/blob/b53b922fc165869232c8fe95ce62d623e3b1336d/pkg/kapp/app/labeled_app.go#L45

This results in all resources being successfully deleted, but the delete command still fails because it cannot list all namespaces.
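For context, a minimal sketch of the kind of namespace-scoped RBAC setup described (names, the namespace, and the verb list are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: team-a
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
# Note: nothing grants cluster-wide "list" on namespaces, which is what trips up kapp delete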

provide a way to add custom waiting functionality for resource types (or specific resources)

To support advanced waiting behaviour, possible implementations:

  • support an annotation that includes a Starlark snippet (maybe applied to a specific resource vs. all resource kinds); the Starlark code would have access to the resource and its associated resources to check on done-ness

  • support an annotation that indicates the resource is done and successful; the annotation could be populated by controllers running in the cluster

kapp does not allow placing app resource into namespace resource belonging to the app

Relevant slack thread here: https://kubernetes.slack.com/archives/CH8KCCKA5/p1563362345098000

On a newly created cluster, with only default namespaces, I am attempting to install a new application in the demo namespace.

My workflow until now was to create a context pointing to the demo namespace (i.e. DEMO_CONTEXT) and apply the resources.

Example namespace resource

apiVersion: v1
kind: Namespace
metadata:
  annotations: {}
  labels:
    name: demo
  name: demo

Error

When applying a simple namespace using kapp pointing to DEMO_CONTEXT, kapp fails with:

Error: Creating app: namespaces "demo" not found

Possible explanation

kapp seems unable to handle a context pointing to a yet-to-be-created namespace, possibly because it needs to store some state there.

Workaround

Using a context pointing to a different namespace, i.e. default, kapp is able to create the application successfully:

Changes                                                                                                                                                                      
                                 
Namespace  Name  Kind       Conditions  Age  Changed  Ignored Reason     
-          demo  Namespace  -           -    add      -                                                              
                                                                                                                  
1 add, 0 delete, 0 update, 0 keep                                                                                      
                                                                                             
1 changes                                                                                                                 
                                                                                                                                          
2:44:26PM: --- applying changes                                                                                                              
2:44:26PM: add namespace/demo (v1) cluster                                               
2:44:26PM: waiting on add namespace/demo (v1) cluster                                                                                              
2:44:27PM: --- changes applied

Unfortunately, if after this bootstrap process we revert to using the original DEMO_CONTEXT, kapp finds a mismatch in the app ownership:

cat namespace.yml | kapp deploy --kubeconfig-context demo_context -a pre-deploy -f - -y --tty
Error: Ownership errors:                                                                                                                                                    
- Resource 'namespace/demo (v1) cluster' is associated with a different label value 'kapp.k14s.io/app=1563371065560753000'

The only solution then becomes to force-import the app with --dangerous-override-ownership-of-existing-resources.
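A sketch of the bootstrap sequence that avoids the ownership mismatch, based on the workaround above (the second app name and manifest path are illustrative, and whether default is the right place for kapp's state is a judgment call):

# 1. Create the namespace while kapp stores its app record in an existing namespace
kapp deploy -n default -a pre-deploy -f namespace.yml -y --tty
# 2. Deploy the rest of the application into the now-existing demo namespace
kapp deploy -n demo -a app-demo -f manifests/ -y --tty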
