
swift's Introduction


Swift

Swift is an AJAX-friendly Helm Tiller proxy using grpc-gateway.

The Swift project is in maintenance mode. Helm 3 does not have a Tiller component, so there will be no need for something like Swift.

Supported Versions

Kubernetes 1.5+. The Helm Tiller server checks for version compatibility. Please pick a version of Swift that matches your Tiller server.

Swift Version   Docs         Helm/Tiller Version
0.12.1          User Guide   2.14.0
0.11.1          User Guide   2.13.0
0.10.0          User Guide   2.12.0
0.9.0           User Guide   2.11.0
0.8.1           User Guide   2.9.0
0.7.3           User Guide   2.8.0
0.5.2           User Guide   2.7.0
0.3.2           User Guide   2.5.x, 2.6.x
0.2.0           User Guide   2.5.x, 2.6.x
0.1.0           User Guide   2.5.x, 2.6.x

Installation

To install Swift, please follow the guide here.

Using Swift

Want to learn how to use Swift? Please start here.

Contribution guidelines

Want to help improve Swift? Please start here.


The Swift server collects anonymous usage statistics to help us learn how the software is being used and how we can improve it. To disable stats collection, run the Swift server with the flag --enable-analytics=false.


Support

We use Slack for public discussions. To chit chat with us or the rest of the community, join us in the AppsCode Slack team channel #general. To sign up, use our Slack inviter.

If you have found a bug with Swift or want to request new features, please file an issue.

swift's People

Contributors

dependabot[bot], diptadas, gloria-zhaoyun, joy717, sajibcse68, tahsinrahman, tamalsaha, the-redback


swift's Issues

Cannot pass a range value as a '--set' argument with the Swift proxy?

My test pod has multiple containers, so I need to pass an image version to each container.
The command when using the helm client is like this:
/usr/local/bin/helm install --set images[0]=latest,deployment.annotation.annotations.buildNumber_0=latest,deployment.annotation.annotations.gitCommit_0=ffffffffff ./$app_name

It worked successfully.

But when I install a chart with Swift by posting an HTTP request with a parameter like this:
{"values":{"raw":"{\"replicaCount\":2,\"images[0]\":\"2\",\"deployment.annotation.annotations.buildNumber_0\":\"latest\",\"deployment.annotation.annotations.gitCommit_0\":\"ffffffffff\"}"},"chart_url":"http://gitlab.dmall.com/arch/wolverine-app-charts/raw/nginx-test/nginx.tar.gz"}

an error is returned to me:
{"code":2,"message":"render error in \"nginx/templates/deployment.yaml\": template: nginx/templates/deployment.yaml:49:55: executing \"nginx/templates/deployment.yaml\" at \u003cindex $images $index\u003e: error calling index: index of untyped nil"}

My deployment.yaml :
containers:
{{- $images := .Values.images -}}
{{- range $index, $container := .Values.containers }}
  - name: {{ $container.name }}-{{ $index }}
    image: "{{ $container.image.repository }}:{{ index $images $index }}"
    imagePullPolicy: {{ $container.image.pullPolicy }}
......

It looks like Tiller did not receive a value for the key 'images'; maybe my request parameter was wrong? How can I pass a range value, as the '--set' parameter does, to the Tiller server through the Swift proxy?

Thanks.
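A hedged workaround sketch: Swift's values.raw appears to be forwarded to Tiller as a plain YAML/JSON document, so flat keys such as "images[0]" or "deployment.annotation.annotations.buildNumber_0" are not expanded into nested structures the way helm --set expands them; the values likely need to be nested explicitly. A minimal Python sketch (field layout and chart URL taken from the report above; treat the exact nesting as an assumption about this chart):

import json
import requests

# Nested equivalent of the --set flags used with the helm CLI above.
# Assumption: values.raw is parsed as a plain document, so the nesting
# that --set would build ("images[0]=latest", dotted keys) must be
# spelled out by hand.
values = {
    "replicaCount": 2,
    "images": ["latest"],  # images[0]=latest
    "deployment": {
        "annotation": {
            "annotations": {
                "buildNumber_0": "latest",
                "gitCommit_0": "ffffffffff",
            }
        }
    },
}

payload = {
    "chart_url": "http://gitlab.dmall.com/arch/wolverine-app-charts/raw/nginx-test/nginx.tar.gz",
    "values": {"raw": json.dumps(values)},
}

# Install the release through the Swift proxy (host, port and release name are
# placeholders; the endpoint path matches the install examples on this page).
resp = requests.post("http://127.0.0.1:9855/tiller/v2/releases/my-release/json", json=payload)
print(resp.status_code, resp.text)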

Timeout when installing a chart which takes longer than 45 seconds

We have set up Swift using the Helm installation and are using Swift chart version swift-0.7.0 for our testing.

We see an issue when we try to install a chart that takes longer to finish: our request gets timed out.
Here are the relevant Swift logs:

I0511 10:00:40.896296 1 resolver_conn_wrapper.go:68] dialing to target with scheme: ""
I0511 10:00:40.896387 1 resolver_conn_wrapper.go:117] ccResolverWrapper: sending new addresses to cc: [{tiller-deploy.kube-system.svc:44134 0 }]
I0511 10:00:40.896401 1 clientconn.go:741] ClientConn switching balancer to "pick_first"
I0511 10:00:40.896443 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc42037f8b0, CONNECTING
I0511 10:00:40.902847 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc42037f8b0, READY
I0511 10:00:40.903014 1 chart.go:51] [Chart dir: /tmp/swift053391067]
I0511 10:00:40.903050 1 chart.go:60] [Chart url: http://XXXXXXX.tgz]
I0511 10:00:40.903086 1 chart.go:186] [Downloading http://XXXXXXXX.tgz]
I0511 10:00:40.917937 1 chart.go:268] [1867 bytes downloaded]
I0511 10:01:27.016064 1 handler.go:131] Failed to write response: write tcp 172.30.111.37:9855->172.30.187.247:39126: i/o timeout

Please note that if we install our chart using the helm client, it works fine, but we see this timeout issue when using Swift.

Can you help us debug this issue? Why do we see this timeout at 45 seconds?
After reading the Swift documentation, we are under the impression that there is a default timeout of 5 minutes for Tiller to respond.

Let me know if you need more details!

Support Helm 2.x.y

I think if we change the dependency to an old version this will work.

Cannot look up Tiller host

I reinstalled my Swift deployment, then installed a release with Swift and got this error:

grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: lookup tiller-deploy.kube-system on 192.168.9.10:53: no such host"; Reconnecting to {tiller-deploy.kube-system:44134 }

I think Swift cannot get the correct Tiller service IP by looking it up through DNS.

The correct Tiller host is: tiller-deploy.kube-system.svc.k8s.local

Why does Swift connect to Tiller using the 'tiller-deploy.kube-system' host?

Thanks. :)

A question about 'Out of Memory'

Hello,

I have deployed Tiller and Swift into my k8s cluster. Everything seems to be OK,
but the Swift pod gets killed about every 2 hours.
The kernel log in /var/log shows:
Apr 12 08:43:14 VM_133_12_centos kernel: Out of memory: Kill process 6593 (swift) score 1470 or sacrifice child
Apr 12 08:43:14 VM_133_12_centos kernel: Out of memory: Kill process 6611 (swift) score 1470 or sacrifice child
Apr 12 10:10:49 VM_133_12_centos kernel: Out of memory: Kill process 20516 (swift) score 1455 or sacrifice child
Apr 12 10:56:59 VM_133_12_centos kernel: Out of memory: Kill process 11716 (swift) score 1457 or sacrifice child
Apr 12 10:56:59 VM_133_12_centos kernel: Out of memory: Kill process 11737 (swift) score 1457 or sacrifice child
Apr 12 11:43:03 VM_133_12_centos kernel: Out of memory: Kill process 27510 (swift) score 1457 or sacrifice child
Apr 12 13:14:57 VM_133_12_centos kernel: Out of memory: Kill process 11044 (swift) score 1457 or sacrifice child
Apr 12 13:14:57 VM_133_12_centos kernel: Out of memory: Kill process 11075 (swift) score 1458 or sacrifice child

Tiller version:
Client: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"} Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}

Swift version
appscode/swift:0.11.0

Kubernetes version
Client Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.5-tke.3", GitCommit:"53e244be925234190938376fe8637189b6caf125", GitTreeState:"clean", BuildDate:"2018-12-04T04:10:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.5-tke.3", GitCommit:"53e244be925234190938376fe8637189b6caf125", GitTreeState:"clean", BuildDate:"2018-12-04T04:10:15Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

I will post more clues as soon as I find them.

Thanks.

Response Object for API Error

For API errors, the front-end expects a format like response.message or status.status,
but currently the error message is placed directly in message, e.g.:

{
	"code": 2,
	"message": "a release named release1 already exists.\nRun: helm ls --all release1; to check the status of the release\nOr run: helm del --purge release1; to delete it"
}

Network is unreachable

Hello:
I have deployed Swift successfully; the pods and services have started like this:

po/swift-1067258244-tcv43                  1/1       Running   0          15m       192.168.8.25    k8s-master1
po/tiller-deploy-12052176-k8jwb            1/1       Running   0          41m       192.168.8.22    k8s-master2

svc/swift                  192.168.9.232   <none>        9855/TCP,50055/TCP,56790/TCP   14m       app=swift
svc/tiller-deploy          192.168.9.242   <none>        44134/TCP                      40m       app=helm,name=tiller

and I visit:
curl http://192.168.9.232:9855/tiller/v2/version/json
and it returns this:
{"code":2,"message":"Get https://192.168.9.1:443/api/v1/services?labelSelector=app%3Dhelm%2Cname%3Dtiller: dial tcp 192.168.9.1:443: connect: network is unreachable"}

I think Swift cannot connect to Tiller correctly?

Add endpoint to get values from release

At the moment, the only way to get the values from a specific release is by calling /tiller/v2/releases/my-release/content/json. It returns an object which contains:

  • release.chart.values.raw (The content of values.yaml)
  • release.config.raw (The values passed with -f or --set)

Both fields are strings containing Helm's YAML output.

It would be great to have an endpoint like /tiller/v2/releases/my-release/values/json?all=true&revision=X returning the values in JSON format.
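Until such an endpoint exists, a minimal Python sketch of the workaround described above: fetch /content/json and decode the two raw YAML strings on the client (endpoint path and field names taken from this issue; the Swift address is a placeholder and PyYAML is assumed to be available):

import requests
import yaml  # pip install pyyaml

BASE = "http://127.0.0.1:9855"  # Swift proxy address (placeholder)
release = "my-release"

resp = requests.get(BASE + "/tiller/v2/releases/" + release + "/content/json")
resp.raise_for_status()
rel = resp.json()["release"]

# Both fields are raw YAML strings, per the issue above.
chart_defaults = yaml.safe_load(rel["chart"]["values"]["raw"] or "") or {}
overrides = yaml.safe_load(rel["config"]["raw"] or "") or {}

# Helm's effective values are roughly the chart defaults with the user-supplied
# overrides merged on top; only a shallow merge is shown here.
effective = {**chart_defaults, **overrides}
print(effective)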

Cannot create PVC

@tamalsaha
Hi,
I'm trying to create a PVC with wheel 1.0.

This is my Helm chart:
https://github.com/orangesys/charts/tree/master/influxdb
I can create the PVC with Helm:

cd influxdb
helm install --name my-release --set retentionPolicy=40d,persistence.size=50Gi .

But I cannot create the PVC with wheel:

http POST http://127.0.0.1:9855/tiller/v2/releases/rel222-i/json < influxdb.json

influxdb.json

{
        "chart_url": "https://github.com/orangesys/charts/raw/master/docs/influxdb-0.1.13.tgz",
        "values": {
                "raw": "{\"retentionPolicy\":\"40d\",\"persistence.size\":\"50Gi\"}"
        }
}
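The same flat-key caveat discussed in the '--set' issue above likely applies here: "persistence.size" inside values.raw is presumably treated as a literal key rather than a nested path, so the chart never sees persistence.size and no PVC is created. A hedged sketch of a nested payload for this chart (field names taken from the helm --set command above):

import json
import requests

values = {
    "retentionPolicy": "40d",
    "persistence": {"size": "50Gi"},  # nested, instead of the flat key "persistence.size"
}

payload = {
    "chart_url": "https://github.com/orangesys/charts/raw/master/docs/influxdb-0.1.13.tgz",
    "values": {"raw": json.dumps(values)},
}

resp = requests.post("http://127.0.0.1:9855/tiller/v2/releases/rel222-i/json", json=payload)
print(resp.status_code, resp.text)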

How to upgrade a release with Swift

Hi there,

Is there a REST API for upgrading a release? The same function as the helm CLI: "helm upgrade my-app stable/my-app --version=0.1.10"

thanks
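A hedged sketch of what an upgrade call might look like: Swift exposes Tiller's gRPC API over HTTP via grpc-gateway, and the install examples elsewhere on this page use POST /tiller/v2/releases/{name}/json, so an UpdateRelease call plausibly maps to a PUT on the same path. Treat the HTTP method, the path, and the chart_url/values fields as assumptions and check them against the Swift API reference for your version:

import json
import requests

values = {"some": "override"}  # hypothetical values override

payload = {
    # URL of the packaged chart version to upgrade to (hypothetical URL).
    "chart_url": "https://example.com/charts/my-app-0.1.10.tgz",
    "values": {"raw": json.dumps(values)},
}

# Assumption: UpdateRelease is exposed as PUT on the same release path that
# the install examples on this page use with POST.
resp = requests.put("http://127.0.0.1:9855/tiller/v2/releases/my-app/json", json=payload)
print(resp.status_code, resp.text)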

net/http: TLS handshake timeout

I see the following error while trying to install a chart from an Artifactory Helm repo:
chart.go:257] Error while downloading https://<artifactoryurl>/<path to chart zip> net/http: TLS handshake timeout

The Artifactory repo is inside the company intranet, and so is the cluster where Swift is deployed.
The POST body looks like:
{ "chart_url": "https://<url>", "values": { "raw": "****" }, "namespace": "****", "username": "*****", "password": "*****" }

I am able to install the app from the repo directly using helm.

Please let me know if you need more details

Swift error: Failed to extract ServerMetadata from context

Hello,

I installed Swift in my k8s cluster, which is running Tiller version 2.7.0-rc1. When I try to call it, I get the following error:

curl 10.42.166.10:9855/tiller/v2/version/json
{"code":4,"message":"context deadline exceeded"}

The pod logs show this:

1016 13:45:17.336235       7 server.go:89] [PROXYSERVER] Sarting Proxy Server at port [::]:9855
I1016 13:45:17.336282       7 server.go:168] Registering endpoint: RegisterReleaseServiceHandlerFromEndpoint
I1016 13:45:17.336327       7 server.go:120] Registering server: *release.Server
2017/10/16 13:46:10 Failed to extract ServerMetadata from context
2017/10/16 13:46:17 Failed to extract ServerMetadata from context
2017/10/16 13:46:29 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: operation was canceled"; Reconnecting to {tiller-deploy.kube-system:44134 <nil>}

So Swift itself seems up and running, but the connection to Tiller does not seem to work. The Tiller endpoint (tiller-deploy.kube-system:44134) seems to be right.
What puzzles me, though, is the "Failed to extract ServerMetadata from context" part. Is this because I am running Tiller 2.7.0-rc1, or is this not relevant?

FWIW, my local helm client runs just fine against tiller.

Thanks for reading,
-naivenut

build on osx

@tamalsaha
I'm trying to build on OSX and I get an error message:

meta directory not found

./hack/make.py
Downloading:  https://raw.githubusercontent.com/appscode/libbuild/master/libbuild.py
Using existing version:  github.appscode.libbuild.libbuild
./hack/make.sh
go get github.com/jteeuwen/go-bindata/...
go-bindata -ignore=\.go -ignore=\.DS_Store -mode=0644 -modtime=1453795200 -o meta/data.go -pkg meta meta/...
bindata: Failed to stat input path 'meta': lstat meta: no such file or directory
Generating server protobuf files
~/go/src/github.com/appscode/wheel/_proto/hack ~/go/src/github.com/appscode/wheel/_proto
~/go/src/github.com/appscode/wheel/_proto
~/go/src/github.com/appscode/wheel/_proto/wapi ~/go/src/github.com/appscode/wheel/_proto
~/go/src/github.com/appscode/wheel/_proto/wapi/v2 ~/go/src/github.com/appscode/wheel/_proto/wapi ~/go/src/github.com/appscode/wheel/_proto
/Users/gavin/go/src/github.com/googleapis/googleapis/: warning: directory does not exist.
/Users/gavin/go/src/github.com/grpc-ecosystem/grpc-gateway/third_party/appscodeapis: warning: directory does not exist.
hapi/release/test_run.proto: File not found.
google/api/annotations.proto: File not found.
appscode/api/annotations.proto: File not found.
tiller.proto: Import "hapi/release/test_run.proto" was not found or had errors.
tiller.proto: Import "google/api/annotations.proto" was not found or had errors.
tiller.proto: Import "appscode/api/annotations.proto" was not found or had errors.

err with too many colons

How can I solve the error shown below? My Tiller version is 2.7.2 and my Swift version is 0.5.2. After building it with go build, I run it with "/swift run --v=3 --connector=direct --tiller-endpoint=http://10.244.3.172:44134". Is the endpoint address format different in v0.5.2?

2018/06/08 07:54:16 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }
2018/06/08 07:54:17 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }
2018/06/08 07:54:18 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }
2018/06/08 07:54:50 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }
2018/06/08 07:54:51 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }
2018/06/08 07:54:53 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }
2018/06/08 07:54:55 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }
2018/06/08 07:54:56 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }
2018/06/08 07:54:58 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }
2018/06/08 07:55:00 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: address http://10.244.3.172:44134: too many colons in address"; Reconnecting to {http://10.244.3.172:44134 }

grpc: received message larger than max error

Based on the code below (from swift-0.11.1),
the size of received messages shouldn't be a problem.

But we've got the error message below for messages larger than 20MB.
Does Swift support 20MB messages?

swift code

// maxReceiveMsgSize uses 20MB as the default message size limit.
// the gRPC's default size is 4MB.
// Since Tiller has been change the messages' size to 20MB, so we should make this value to 20MB.
maxReceiveMsgSize = 1024 * 1024 * 20

error message

"message":"grpc: received message larger than max (4693470 vs. 4194304)

Connection leak?

I have Swift and Tiller running and make a request to Swift every 3 seconds. If I let this run for about 30 minutes, the Tiller and Swift containers have their memory growing without bounds.
The reason for this seems to be that the number of connections between Swift and Tiller is continuously growing.

Connection count is already over 1800 and continuously growing:

~ $ netstat | wc -l
1817
~ $ netstat | wc -l
1822
~ $ netstat | wc -l
1826

Excerpt from netstat connections (all target tiller:44134):

tcp        0      0 swift-7559bd6fcd-8qwwg:35734 tiller-deploy.kube-system.svc.cluster.local:44134 ESTABLISHED
tcp        0      0 swift-7559bd6fcd-8qwwg:58532 tiller-deploy.kube-system.svc.cluster.local:44134 ESTABLISHED
tcp        0      0 swift-7559bd6fcd-8qwwg:50310 tiller-deploy.kube-system.svc.cluster.local:44134 ESTABLISHED

I guess the connections are not properly closed?

Does the latest swift still support ssl?

My k8s cluster version is v1.15.3, and I want to use helm-v2.14.3 to install swift-v0.12.1. Without SSL, everything is OK.
But when I configure SSL in the Swift command args (like this: https://appscode.com/products/swift/v0.12.1/guides/security/), the Swift pod runs with the error message "certificate signed by unknown authority". The full log is below:
swift.log
Even if I use /etc/kubernetes/ssl/ca.key and ca.crt as the CA and key, or change the environment to k8s-v1.16.8 and helm-v2.16.1, the Swift pod still reports the same error. What's wrong with it?
Which is the latest Swift version that supports SSL, and what are the compatible k8s and Helm versions?

Dependabot couldn't find a Gopkg.toml for this project

Dependabot couldn't find a Gopkg.toml for this project.

Dependabot requires a Gopkg.toml to evaluate your project's current Go dependencies. It had expected to find one at the path: /Gopkg.toml.

If this isn't a Go project, or if it is a library, you may wish to disable updates for it from within Dependabot.

You can mention @dependabot in the comments below to contact the Dependabot team.


Does swift do authentication?

Hi, This is just a question.
I'm wondering if Swift includes authentication for requests. Without Swift, users need to provide credentials in the kubeconfig to use Helm. It seems like Swift makes the Tiller service open to the world. Did I misunderstand anything?

Thanks and regards,

unable to setup tiller-tls on 0.7.2

Hi!

I'm trying to make Swift work with a TLS-enabled Tiller, without luck. After adding the following parameters to swift run:

        - --tiller-ca-file=/etc/swift-certs/ca.pem
        - --tiller-client-cert-file=/etc/swift-certs/cert.pem
        - --tiller-client-private-key-file=/etc/swift-certs/key.pem
        - --tiller-insecure-skip-verify

and with Tiller TLS-secured (helm with --tls works flawlessly), if I curl Swift using HTTP I get:

*   Trying 100.69.114.61...
* Connected to swift.kube-system.svc (100.69.114.61) port 80 (#0)
> GET /tiller/v2/version/json HTTP/1.1
> Host: swift.kube-system.svc
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 503 Service Unavailable
< Access-Control-Allow-Methods: POST,GET,OPTIONS,PUT,DELETE
< Access-Control-Allow-Origin: *
< Content-Type: application/json
< X-Content-Type-Options: nosniff
< Date: Fri, 09 Mar 2018 11:12:26 GMT
< Content-Length: 44
< 
* Connection #0 to host swift.kube-system.svc left intact
{"code":14,"message":"transport is closing"}

swift logs:

I0309 11:16:23.274773       1 resolver_conn_wrapper.go:68] dialing to target with scheme: ""
I0309 11:16:23.274832       1 resolver_conn_wrapper.go:117] ccResolverWrapper: sending new addresses to cc: [{tiller-deploy.kube-system:44134 0  <nil>}]
I0309 11:16:23.274847       1 clientconn.go:741] ClientConn switching balancer to "pick_first"
I0309 11:16:23.274893       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202703a0, CONNECTING
I0309 11:16:23.276677       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202703a0, READY
I0309 11:16:23.276931       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202703a0, TRANSIENT_FAILURE
I0309 11:16:23.276954       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202703a0, CONNECTING
I0309 11:16:23.276963       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202703a0, TRANSIENT_FAILURE
W0309 11:16:23.276931       1 server_interceptors.go:97] {"error":"rpc error: code = Unavailable desc = transport is closing","grpc.code":"Unavailable","grpc.method":"GetVersion","grpc.service":"hapi.services.tiller.ReleaseService","grpc.time_ms":0.215,"level":"warning","msg":"finished client unary call","span.kind":"client","system":"grpc"}
W0309 11:16:23.277096       1 clientconn.go:1277] grpc: addrConn.transportMonitor exits due to: context canceled

and using https I get:

*   Trying 100.69.114.61...
* Connected to swift.kube-system.svc (100.69.114.61) port 443 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 592 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* 	 server certificate verification SKIPPED
* 	 server certificate status verification SKIPPED
* 	 common name: swift (does not match 'swift.kube-system.svc')
* 	 server certificate expiration date OK
* 	 server certificate activation date OK
* 	 certificate public key: RSA
* 	 certificate version: #1
* 	 subject: CN=swift
* 	 start date: Fri, 09 Mar 2018 08:48:39 GMT
* 	 expire date: Thu, 04 Mar 2038 08:48:39 GMT
* 	 issuer: CN=tiller
* 	 compression: NULL
* ALPN, server accepted to use http/1.1
> GET /tiller/v2/version/json HTTP/1.1
> Host: swift.kube-system.svc
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 503 Service Unavailable
< Content-Type: application/json
< Date: Fri, 09 Mar 2018 11:14:48 GMT
< Content-Length: 60
< 
* Connection #0 to host swift.kube-system.svc left intact
{"code":14,"message":"all SubConns are in TransientFailure"}

and swift logs nothing.

Also, I found that the documentation for securing the connection (https://appscode.com/products/swift/0.7.2/guides/security/) specifies the parameters
--tiller-cacert-file string File containing CA certificate for Tiller server
--tiller-client-cert-file string File containing client TLS certificate for Tiller server
--tiller-client-key-file string File containing client TLS private key for Tiller server

which don't exist (they should be --tiller-ca-file, --tiller-client-cert-file, --tiller-client-private-key-file).

Get release history does not seem to work well

I have deployed Swift on k8s and encountered a problem:
every endpoint works well, but the release history endpoint always returns empty JSON data {}, while if I use the helm client directly, it returns the corresponding info.
My Tiller Version:

λ helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.0", GitCommit:"14af25f1de6832228539259b821949d20069a222", GitTreeState:"clean"}

Swift pod info (with image appscode/swift:0.6.0):

{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "xinge-swift-swift",
    "namespace": "kube-system",
    "selfLink": "/apis/extensions/v1beta1/namespaces/kube-system/deployments/xinge-swift-swift",
    "uid": "c33a1e58-0664-11e8-8074-00163e0af3dd",
    "resourceVersion": "2403043",
    "generation": 3,
    "creationTimestamp": "2018-01-31T08:57:27Z",
    "labels": {
      "app": "swift",
      "chart": "swift-0.3.0",
      "heritage": "Tiller",
      "release": "xinge-swift"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "3"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "app": "swift",
        "release": "xinge-swift"
      }
    },
    "template": {
      "metadata": {
        "creationTimestamp": null,
        "labels": {
          "app": "swift",
          "release": "xinge-swift"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "chart-volume",
            "emptyDir": {}
          }
        ],
        "containers": [
          {
            "name": "swift",
            "image": "appscode/swift:0.6.0",
            "args": [
              "run",
              "--v=3",
              "--connector=incluster"
            ],
            "ports": [
              {
                "name": "pt",
                "containerPort": 9855,
                "protocol": "TCP"
              },
              {
                "name": "tls",
                "containerPort": 50055,
                "protocol": "TCP"
              },
              {
                "name": "ops",
                "containerPort": 56790,
                "protocol": "TCP"
              }
            ],
            "resources": {},
            "volumeMounts": [
              {
                "name": "chart-volume",
                "mountPath": "/tmp"
              }
            ],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "IfNotPresent"
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "serviceAccountName": "xinge-swift-swift",
        "serviceAccount": "xinge-swift-swift",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": "25%",
        "maxSurge": "25%"
      }
    },
    "revisionHistoryLimit": 2,
    "progressDeadlineSeconds": 600
  },
  "status": {
    "observedGeneration": 3,
    "replicas": 1,
    "updatedReplicas": 1,
    "readyReplicas": 1,
    "availableReplicas": 1,
    "conditions": [
      {
        "type": "Available",
        "status": "True",
        "lastUpdateTime": "2018-01-31T08:57:29Z",
        "lastTransitionTime": "2018-01-31T08:57:29Z",
        "reason": "MinimumReplicasAvailable",
        "message": "Deployment has minimum availability."
      },
      {
        "type": "Progressing",
        "status": "True",
        "lastUpdateTime": "2018-02-05T09:38:14Z",
        "lastTransitionTime": "2018-01-31T08:57:27Z",
        "reason": "NewReplicaSetAvailable",
        "message": "ReplicaSet \"xinge-swift-swift-55d5b78447\" has successfully progressed."
      }
    ]
  }
}

Kubernetes version:

λ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.3-rancher1", GitCommit:"beb8311a9f114ba92558d8d771a81b7fb38422ae", GitTreeState:"clean", BuildDate:"2017-11-14T00:54:19Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Request with curl:

λ curl -X GET \
>   http://k8s.fxdayu.com/swift/tiller/v2/releases/test-release/json \
>   -H 'Authorization: Basic eGl*******pZnQ=' \
>   -H 'Cache-Control: no-cache' \
>   -H 'Postman-Token: 0ea146cd-21c0-205b-f6ed-8d412141da80'
{}

Request with helm client directly:

λ helm list test-release
NAME            REVISION        UPDATED                         STATUS          CHART               NAMESPACE
test-release    3               Mon Feb  5 18:08:19 2018        DEPLOYED        test-chart-0.1.0    default
λ helm history test-release
REVISION        UPDATED                         STATUS          CHART                   DESCRIPTION
1               Mon Feb  5 17:45:20 2018        SUPERSEDED      test-chart-0.1.0        Install complete
2               Mon Feb  5 18:08:16 2018        SUPERSEDED      test-chart-0.1.0        Upgrade complete
3               Mon Feb  5 18:08:19 2018        DEPLOYED        test-chart-0.1.0        Upgrade complete

Has anyone encountered a similar problem?
I'm not familiar with Golang; could anyone give some help? Thanks very much.

Improve docs

  • How to install into a custom namespace?
  • How to install using a custom config.yml:
proxy:
  secretToken: mytoken
rbac:
   enabled: false

{
  "proxy": {
    "secretToken": "mytoken"
  },
  "rbac": {
    "enabled": false
  }
}

{
	"chart_url": "https://github.com/tamalsaha/test-chart/raw/master/test-chart-0.1.0.tgz",
	"values": {
		"raw": "{ \"proxy\": { \"secretToken\": \"mytoken\" }, \"rbac\": { \"enabled\": false } }"
	}
}
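If the first bullet is about installing into a namespace other than the default one, the multi-namespace report further down this page passes the namespace as a query parameter on the release URL. A minimal Python sketch combining that with the config payload above (the query-parameter behaviour and the Swift address are assumptions based on that report):

import json
import requests

values = {"proxy": {"secretToken": "mytoken"}, "rbac": {"enabled": False}}

payload = {
    "chart_url": "https://github.com/tamalsaha/test-chart/raw/master/test-chart-0.1.0.tgz",
    "values": {"raw": json.dumps(values)},
}

# Namespace passed as a query parameter, as in the multi-namespace report below.
resp = requests.post(
    "http://127.0.0.1:9855/tiller/v2/releases/my-release/json",
    params={"namespace": "my-namespace"},
    json=payload,
)
print(resp.status_code, resp.text)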

Support dry-run

Will Swift support dry-run on release install, or provide a way to render a chart like helm template does?

I really want Swift to return the complete rendered chart content (like helm template), so we can do some checking before actually applying the install, without starting a process to invoke helm template.

Failed to connect to tiller

I have deployed Swift on minikube using this command mentioned in the docs:

$ curl -fsSL https://raw.githubusercontent.com/appscode/swift/0.7.3/hack/deploy/swift.sh \
    | bash

But curl -X GET http://192.168.99.100:31962/tiller/v2/version/json gives nothing.
When I looked into the pod logs, I got this:

$ kubectl logs swift-79dbccf59b-5fkfc --namespace=kube-system 
I0329 12:25:16.092129       1 logs.go:19] FLAG: --alsologtostderr="false"
I0329 12:25:16.099083       1 logs.go:19] FLAG: --analytics="true"
I0329 12:25:16.100008       1 logs.go:19] FLAG: --api-domain=""
I0329 12:25:16.100060       1 logs.go:19] FLAG: --connector="incluster"
I0329 12:25:16.100079       1 logs.go:19] FLAG: --cors-origin-allow-subdomain="true"
I0329 12:25:16.100149       1 logs.go:19] FLAG: --cors-origin-host="*"
I0329 12:25:16.101998       1 logs.go:19] FLAG: --enable-cors="false"
I0329 12:25:16.102039       1 logs.go:19] FLAG: --help="false"
I0329 12:25:16.102055       1 logs.go:19] FLAG: --kube-context=""
I0329 12:25:16.102070       1 logs.go:19] FLAG: --log-rpc="false"
I0329 12:25:16.102087       1 logs.go:19] FLAG: --log_backtrace_at=":0"
I0329 12:25:16.102101       1 logs.go:19] FLAG: --log_dir=""
I0329 12:25:16.102116       1 logs.go:19] FLAG: --logtostderr="false"
I0329 12:25:16.102130       1 logs.go:19] FLAG: --plaintext-addr=":9855"
I0329 12:25:16.102144       1 logs.go:19] FLAG: --secure-addr=":50055"
I0329 12:25:16.102158       1 logs.go:19] FLAG: --stderrthreshold="0"
I0329 12:25:16.102172       1 logs.go:19] FLAG: --tiller-ca-file=""
I0329 12:25:16.102187       1 logs.go:19] FLAG: --tiller-client-cert-file=""
I0329 12:25:16.102202       1 logs.go:19] FLAG: --tiller-client-private-key-file=""
I0329 12:25:16.102216       1 logs.go:19] FLAG: --tiller-endpoint=""
I0329 12:25:16.102230       1 logs.go:19] FLAG: --tiller-insecure-skip-verify="true"
I0329 12:25:16.102247       1 logs.go:19] FLAG: --tiller-timeout="5m0s"
I0329 12:25:16.102261       1 logs.go:19] FLAG: --tls-ca-file=""
I0329 12:25:16.102275       1 logs.go:19] FLAG: --tls-cert-file=""
I0329 12:25:16.102289       1 logs.go:19] FLAG: --tls-private-key-file=""
I0329 12:25:16.102303       1 logs.go:19] FLAG: --v="3"
I0329 12:25:16.102318       1 logs.go:19] FLAG: --vmodule=""
I0329 12:25:47.011335       1 gateway.go:33] Registering grpc-gateway endpoint: RegisterReleaseServiceHandlerFromEndpoint
I0329 12:25:47.011339       1 server.go:46] [[GRPCSERVER] Starting gRPC Server at addr [::]:9855]
I0329 12:25:47.011366       1 resolver_conn_wrapper.go:68] dialing to target with scheme: ""
I0329 12:25:47.011364       1 grpc.go:29] Registering grpc server: *release.Server
I0329 12:25:47.011395       1 server.go:51] [[PROXYSERVER] Sarting Proxy Server at port [::]:9855]
I0329 12:25:47.011423       1 resolver_conn_wrapper.go:117] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:9855 0  <nil>}]
I0329 12:25:47.011434       1 clientconn.go:741] ClientConn switching balancer to "pick_first"
I0329 12:25:47.011541       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc42034cfb0, CONNECTING
I0329 12:25:47.011797       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc42034cfb0, READY
I0329 12:31:34.718936       1 proto_errors.go:49] Failed to extract ServerMetadata from context
I0329 12:32:03.078105       1 proto_errors.go:49] Failed to extract ServerMetadata from context
I0329 12:34:15.351107       1 proto_errors.go:49] Failed to extract ServerMetadata from context
I0329 12:34:39.279547       1 resolver_conn_wrapper.go:68] dialing to target with scheme: ""
I0329 12:34:39.279616       1 resolver_conn_wrapper.go:117] ccResolverWrapper: sending new addresses to cc: [{tiller-deploy:44134 0  <nil>}]
I0329 12:34:39.279627       1 clientconn.go:741] ClientConn switching balancer to "pick_first"
I0329 12:34:39.279683       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:34:59.279955       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:34:59.280232       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:34:59.280524       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:35:19.281162       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:35:19.281239       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:35:19.281511       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:35:39.281634       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:35:39.281698       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:35:39.281981       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:35:59.282383       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:35:59.282439       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:35:59.282570       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:36:19.282990       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: dial udp 10.96.0.10:53: i/o timeout". Reconnecting...
I0329 12:36:19.283042       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:36:19.283358       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:36:39.283620       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:36:39.283671       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:36:39.283946       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:36:59.284239       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:36:59.284328       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:36:59.284573       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:37:24.496705       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:37:24.496794       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:37:24.497101       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:38:04.506189       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: read udp 172.17.0.7:41994->10.96.0.10:53: i/o timeout". Reconnecting...
I0329 12:38:04.506243       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:38:09.930302       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:38:49.948547       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: read udp 172.17.0.7:33373->10.96.0.10:53: i/o timeout". Reconnecting...
I0329 12:38:49.949292       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, TRANSIENT_FAILURE
I0329 12:39:24.949964       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4202dc270, CONNECTING
W0329 12:39:39.279975       1 clientconn.go:830] Failed to dial tiller-deploy:44134: grpc: the connection is closing; please retry.
I0329 12:43:22.092350       1 resolver_conn_wrapper.go:68] dialing to target with scheme: ""
I0329 12:43:22.092389       1 resolver_conn_wrapper.go:117] ccResolverWrapper: sending new addresses to cc: [{tiller-deploy:44134 0  <nil>}]
I0329 12:43:22.092399       1 clientconn.go:741] ClientConn switching balancer to "pick_first"
I0329 12:43:22.092431       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, CONNECTING
I0329 12:43:36.465926       1 resolver_conn_wrapper.go:68] dialing to target with scheme: ""
I0329 12:43:36.465992       1 resolver_conn_wrapper.go:117] ccResolverWrapper: sending new addresses to cc: [{tiller-deploy:44134 0  <nil>}]
I0329 12:43:36.466003       1 clientconn.go:741] ClientConn switching balancer to "pick_first"
I0329 12:43:36.466074       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, CONNECTING
W0329 12:43:42.093439       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: dial udp 10.96.0.10:53: i/o timeout". Reconnecting...
I0329 12:43:42.093477       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, TRANSIENT_FAILURE
W0329 12:43:42.093509       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:43:42.093522       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, TRANSIENT_FAILURE
I0329 12:43:42.093744       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, CONNECTING
I0329 12:43:42.093756       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, CONNECTING
I0329 12:43:45.698812       1 resolver_conn_wrapper.go:68] dialing to target with scheme: ""
I0329 12:43:45.698884       1 resolver_conn_wrapper.go:117] ccResolverWrapper: sending new addresses to cc: [{tiller-deploy:44134 0  <nil>}]
I0329 12:43:45.698917       1 clientconn.go:741] ClientConn switching balancer to "pick_first"
I0329 12:43:45.698984       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, CONNECTING
W0329 12:44:02.094059       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
W0329 12:44:02.094071       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:44:02.094096       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, TRANSIENT_FAILURE
I0329 12:44:02.094104       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, TRANSIENT_FAILURE
I0329 12:44:02.094449       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, CONNECTING
W0329 12:44:02.094622       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: dial udp 10.96.0.10:53: i/o timeout". Reconnecting...
I0329 12:44:02.094668       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, TRANSIENT_FAILURE
I0329 12:44:02.094816       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, CONNECTING
I0329 12:44:02.094925       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, CONNECTING
W0329 12:44:22.094483       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:44:22.094630       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, TRANSIENT_FAILURE
W0329 12:44:22.094788       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:44:22.094853       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, TRANSIENT_FAILURE
I0329 12:44:22.094959       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, CONNECTING
W0329 12:44:22.095087       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:44:22.095297       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, TRANSIENT_FAILURE
I0329 12:44:22.095309       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, CONNECTING
I0329 12:44:22.095332       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, CONNECTING
W0329 12:44:42.095300       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:44:42.095363       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, TRANSIENT_FAILURE
W0329 12:44:42.095379       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: dial udp 10.96.0.10:53: i/o timeout". Reconnecting...
I0329 12:44:42.095397       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, TRANSIENT_FAILURE
W0329 12:44:42.095552       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: dial udp 10.96.0.10:53: i/o timeout". Reconnecting...
I0329 12:44:42.095583       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, TRANSIENT_FAILURE
I0329 12:44:42.095596       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, CONNECTING
I0329 12:44:42.095709       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, CONNECTING
I0329 12:44:42.095736       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, CONNECTING
W0329 12:45:02.095915       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
W0329 12:45:02.095948       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: dial udp 10.96.0.10:53: i/o timeout". Reconnecting...
I0329 12:45:02.095971       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, TRANSIENT_FAILURE
I0329 12:45:02.095988       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, TRANSIENT_FAILURE
W0329 12:45:02.095992       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I0329 12:45:02.096007       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, TRANSIENT_FAILURE
I0329 12:45:02.096307       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, CONNECTING
I0329 12:45:02.096355       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, CONNECTING
I0329 12:45:02.096381       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, CONNECTING
W0329 12:45:22.097208       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: dial udp 10.96.0.10:53: i/o timeout". Reconnecting...
W0329 12:45:22.097247       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: dial udp 10.96.0.10:53: i/o timeout". Reconnecting...
W0329 12:45:22.097269       1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy:44134 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy on 10.96.0.10:53: dial udp 10.96.0.10:53: i/o timeout". Reconnecting...
I0329 12:45:22.097284       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, TRANSIENT_FAILURE
I0329 12:45:22.097297       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, TRANSIENT_FAILURE
I0329 12:45:22.097307       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, TRANSIENT_FAILURE
I0329 12:45:22.097509       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420269d70, CONNECTING
I0329 12:45:22.097591       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc420471130, CONNECTING
I0329 12:45:22.097612       1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc4203144b0, CONNECTING

support for multi-namespace chart installation

I need to install a Helm chart in a namespace other than default, but it seems Swift does not support it. Here is what I have tried:

import requests


def install(chart_url, namespace, release_name):
    base_url = 'http://192.168.99.100:32747'
    headers = {'content-type': 'application/json'}
    data = {"chart_url": chart_url}

    # Namespace is passed as a query parameter on the release endpoint.
    response = requests.post(
        url=base_url + '/tiller/v2/releases/' + release_name + '/json?namespace=' + namespace,
        headers=headers,
        json=data,
    )
    print(response)


def main():
    install('stable/mongodb', 'testnamespace', 'mongodb')


if __name__ == '__main__':
    main()
