pelotech / drone-helm3
Plugin for drone to deploy helm charts using helm3
License: Apache License 2.0
Would like to have support for publishing to a Helm registry (https://helm.sh/docs/topics/registries/).
I have already implemented it and am now updating the documentation.
So I am trying to implement helm registry publish support (https://helm.sh/docs/topics/registries/), but now, after my changes, when running the code I can see the drone build failing with this error:
Command Path: /usr/bin/helm Args: [registry login -u ****** -p ****** ******.azurecr.io]
Command Path: /usr/bin/helm Args: [registry logout ******.azurecr.io]
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error: Get https://registry-1.docker.io/v2/: unauthorized: incorrect username or password
while executing *run.Registry step: exit status 1
That error looks more like the one you get when docker login fails, but according to my logs the steps are correct. It happens on Execute, and the part that baffles me is that the command is never actually called: I put a log line just before the cmd is executed in the registry file, and it never prints.
Here is my repo where I am trying this stuff. (Please ignore my logging messages :) ) https://github.com/sherry-ummen/drone-helm3/tree/publish_to_helm_registry
Could anyone point out what might be wrong here?
I'm trying to install a chart from a repo, but without any success.
image: pelotech/drone-helm3
add_repos: repo=https://REPOURL
settings:
  helm_command: upgrade
  chart: repo/test
  release: test
  api_server:
    from_secret: KUBERNETES_SERVICE_HOST
  kubernetes_token:
    from_secret: KUBE_TOKEN
error:
Error: repo repo not found
Currently, all logging is done with fmt.Printf (or .Println, .Fprintf, etc.). That's fine for a first pass, but we should really use the log package or something like it. Printf output can pollute the test output, and it would be great to be able to say logger.Debug("...") instead of
if cfg.Debug {
    fmt.Fprint(cfg.Stderr, "...\n")
}
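A minimal sketch of such a logger, assuming a hand-rolled wrapper around the standard log package rather than a new dependency (all names here are hypothetical, not the project's actual code):

package logger

import (
	"io"
	"log"
)

// Logger wraps the standard library's log.Logger with a debug level,
// so call sites can say logger.Debug(...) instead of checking cfg.Debug.
type Logger struct {
	out   *log.Logger
	debug bool
}

func New(w io.Writer, debug bool) *Logger {
	return &Logger{out: log.New(w, "", log.LstdFlags), debug: debug}
}

// Info always prints.
func (l *Logger) Info(format string, args ...interface{}) {
	l.out.Printf(format, args...)
}

// Debug prints only when debug logging is enabled.
func (l *Logger) Debug(format string, args ...interface{}) {
	if l.debug {
		l.out.Printf(format, args...)
	}
}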
The problem I'm trying to solve:
Update helm to 3.1.1
The problem I'm trying to solve:
Some of the configuration fields' names aren't great. helm_command, for example, is a leaky abstraction. Most of them are for drone-helm2 backwards-compatibility; some might just be because I didn't think the names through.
How I imagine it working:
In internal/helm.NewConfig, look for environment variables with alternate names for some settings. For example, we could allow PLUGIN_MODE or maybe PLUGIN_OPERATION in place of PLUGIN_HELM_COMMAND.
The docs should probably note the "better" name as the main setting name, and note the "worse" name as a backwards-compatibility alias.
We'll need to figure out what to do if both versions are present: error, default to one or the other, something else...? One option is sketched below.
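A sketch of an alias-aware lookup; erroring when both names are set is just one of the options floated above, picked here purely for illustration:

package helm

import (
	"fmt"
	"os"
)

// aliasEnv looks up a setting that has both a preferred and a legacy
// environment variable name.
func aliasEnv(preferred, legacy string) (string, error) {
	p, pSet := os.LookupEnv(preferred)
	l, lSet := os.LookupEnv(legacy)
	switch {
	case pSet && lSet:
		return "", fmt.Errorf("both %s and %s are set; please use only %s", preferred, legacy, preferred)
	case pSet:
		return p, nil
	default:
		return l, nil
	}
}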
- A struct in internal/run/delete.go that can run helm delete (it will look much like run.Lint).
- In internal/helm/plan.NewPlan, if cfg.Command is "delete", or cfg.Command is "" and cfg.DroneEvent is "delete", instantiate a run.InitKube and the new delete step and put them in p.steps.
A minimal sketch of such a step follows.
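This sketch uses os/exec directly; the field names and the Prepare/Execute shape follow the other steps but are assumptions, not the project's actual code:

package run

import (
	"fmt"
	"os"
	"os/exec"
)

// Delete runs `helm delete`.
type Delete struct {
	Release string
	cmd     *exec.Cmd
}

// Prepare builds the helm invocation without running it.
func (d *Delete) Prepare() error {
	if d.Release == "" {
		return fmt.Errorf("release is required")
	}
	d.cmd = exec.Command("helm", "delete", d.Release)
	d.cmd.Stdout = os.Stdout
	d.cmd.Stderr = os.Stderr
	return nil
}

// Execute runs the prepared command.
func (d *Delete) Execute() error {
	return d.cmd.Run()
}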
The built binary is supposed to be covered by .gitignore, but has been inadvertently added. It's just short of 4 MB, and since git doesn't do a very good job of managing large files, leaving it committed will have an outsized impact on clone/fetch operations.
helm2's delete command had a --purge option that would purge Helm's release ledger. In helm3 that behavior is the default; if you want to keep the ledger you need to supply --keep-history.
I'm not sure whether this should be on by default.
On the one hand, drone-helm had a purge setting, but it doesn't appear to be functional (it's omitted from the code that loads env vars into the config). So anything that was using delete was getting the non-purge behavior. If we want drone-helm3 to be fully backwards-compatible with drone-helm, our keep_history setting should default to true.
However, the fact that Helm 3 made purging the default behavior means "don't keep history" is probably the appropriate default in general. In that case we should follow their lead and make our keep_history default to false.
If we decide to make false the default, remove this issue from the Version 1.0 milestone.
Currently, if internal/helm/determineSteps can't figure out what to do, it simply panics. It should return the help step instead. It should be as simple as replacing the panic with return &help (and adding a test).
Hi, does it support publishing helm charts to a public repository, for example Azure Container Registry?
The Config struct in internal/helm/config.go defines the interface between drone-helm3 and a project's Drone settings. As such, it should be clear and well-documented:
- Make sure the Config struct makes sense to you; it makes sense to me, but I'm a poor judge since I already know all the things it's telling me.
- Revise the "helm upgrade" comments; they aren't particularly accurate.
- Config shouldn't concern itself with validating helmCommand; that responsibility lies with the determineSteps function in internal/helm/plan.go.
- Fix the "inthe" typo on line 8 🙃
What needs explanation:
We have quite a few images up on dockerhub now, which means when someone opens an issue it's not immediately clear which version they're using. Let's put something in the "bug" template that prompts them to mention it. Drone version wouldn't hurt, either.
Github repos can provide configuration with some special files. README.md is the most prominent, but there are several others that may be useful here:
- CODEOWNERS automatically adds reviewers to PRs. I'd like it to have at least one person other than myself; @josmo @kav @grinnellian, any volunteers?
- CONTRIBUTING.md tells people about workflow guidelines (e.g. "all pull requests should be linked to an issue"). I have Opinions about how git commits should be structured, but I won't try to impose them unless someone says "yeah, let's consider rules about branches and commits".
- CODE_OF_CONDUCT.md tells people about un/acceptable behavior. I suggest the Contributor Covenant.
Let's put one or more such files in a .github/ folder.
Currently, internal/run.InitKube.Prepare returns an error if its Certificate field is empty (unless SkipTLSVerify is true). That was an error on my part! The kubeconfig only needs a certificate-authority-data field if the cluster CA is using a self-signed certificate.
- In internal/run/initkube.go, remove the non-empty Certificate assertion (and the associated test).
- In the kubeconfig template, add an if .Certificate to the else clause that adds a certificate-authority-data field.

- A struct in internal/run/lint.go that can run helm lint.
- In internal/helm/plan.NewPlan, if cfg.Command is lint, instantiate a run.Lint and put it in p.steps.
- Possibly a run.InitKube as well; I'm not sure whether Helm's lint command needs a kubeconfig.

If a .env file is present, use https://github.com/joho/godotenv to load it into the environment before calling envconfig.Process (a sketch follows below).
There is related code in drone-helm, but the code for drone-helm3
will need to work differently since we aren't using the urfave/cli library.
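A sketch of how that could look, assuming the plugin keeps using envconfig; the Config fields and the "plugin" prefix here are illustrative stand-ins, not the project's actual code:

package helm

import (
	"os"

	"github.com/joho/godotenv"
	"github.com/kelseyhightower/envconfig"
)

// Config is a pared-down stand-in for internal/helm.Config.
type Config struct {
	Command string `envconfig:"HELM_COMMAND"`
}

// NewConfig loads .env (if present) before reading the environment.
func NewConfig() (*Config, error) {
	// godotenv.Load defaults to ".env" and errors if it's missing,
	// so only call it when the file exists.
	if _, err := os.Stat(".env"); err == nil {
		if err := godotenv.Load(); err != nil {
			return nil, err
		}
	}
	cfg := &Config{}
	if err := envconfig.Process("plugin", cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}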
What I tried to do:
Documentation for upgrading from drone-helm to drone-helm3
What happened:
Nothing
More info:
Should have links to https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/
This is sort of the inverse of #10—helm3 probably added new commands and/or flags, and it may be worthwhile to implement some of them. Compare helm3's CLI options to helm2 and decide whether this plugin should have support for any of the additions. If so, make github issues for "implement such-and-such feature."
The problem I'm trying to solve:
To have reproducible builds it would be nice to have the option to install dependencies from Chart.lock.
How I imagine it working:
There is an option "update_dependencies" which can be used to avoid including dependencies in the source code. But this operation pulls a new version of dependencies according to Chart.yaml ranges.
This one runs helm dependency update; I can imagine another one would run helm dependency build.
internal/helm.Config will need two new fields:
    EKSCluster string `envconfig:"EKS_CLUSTER"`
    EKSRoleARN string `envconfig:"EKS_ROLE_ARN"`
internal/run.InitKube and .kubeValues will need matching fields, so their values can be passed along to the kubeconfig template. In InitKube.Prepare, if i.EKSCluster != "", i.Token should not be mandatory (and should probably be forbidden).
See also ipedrazas/drone-helm#80 for how this was implemented over there.
I just noticed a couple of problems in the tests for plan.go:
- There's no defer ctrl.Finish() after calling gomock.NewController in TestNewPlan and TestNewPlanAbortsOnError (see the sketch below).
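The standard gomock pattern, for reference:

package helm

import (
	"testing"

	"github.com/golang/mock/gomock"
)

func TestNewPlan(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish() // asserts that all expected mock calls happened
	// ... the rest of the test builds its mocks from ctrl as before
}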
The Config struct defined in internal/run.Config is meant to hold configuration that applies to all Steps. It currently has three fields, Values, StringValues, and ValuesFiles, that don't meet that standard. When I originally wrote it I thought they'd apply to most everything, but in fact they're only used in two of the seven Steps we currently have defined.
Remove those fields from internal/run.Config and add them to the Upgrade and Lint Steps.
drone-helm allows the drone chart to set a prefix to be used when looking up environment variables. For example, given this stanza:
environment:
  prefix: FOO
  foo_token: fjejkdkfjfj
It will look for the token setting in FOO_TOKEN rather than TOKEN.
We should retain support for that feature.
The prefix setting in internal/helm/config.go exists because it existed in drone-helm. As I've learned more about drone usage, it's become clear it's not needed. It was meant to support a .drone.yml stanza like this:
pipeline:
  steps:
    - name: deploy_staging
      image: pelotech/drone-helm3
      prefix: stage
      secrets: [stage_kubernetes_token]
That secrets syntax is deprecated in recent versions of drone, and might not work at all. A modern stanza would look like this:
pipeline:
  steps:
    - name: deploy_staging
      image: pelotech/drone-helm3
      kubernetes_token:
        from_secret: stage_kubernetes_token
Remove the prefix setting (and the "prefix setting" section) from parameter_reference.md.

GPLv3 is my goto license, but Joachim has concerns about whether it's appropriate. Let's reach agreement on one.
In order to pass secrets to helm, a user might put the following in their .drone.yml:
settings:
  values: "ssl_key=${SSL_KEY}"
environment:
  ssl_key:
    from_secret: ssl_key
When reading config from the environment, check cfg.Values and cfg.StringValues against the pattern \$\{\w+\} and substitute the corresponding environment variables. They need to respect the Prefix setting, using the same semantics as for regular config: use the non-prefixed form if it's present, but the prefixed form should override the non-prefixed (sketched below).
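A sketch of the substitution, assuming a regexp pass over the values string; the function name and prefix plumbing are illustrative, not the project's actual code:

package helm

import (
	"os"
	"regexp"
)

var envVarPattern = regexp.MustCompile(`\$\{(\w+)\}`)

// substituteEnvVars replaces ${VAR} references in a values string.
// The prefixed form (e.g. FOO_VAR) overrides the non-prefixed one,
// matching the semantics of regular config lookups.
func substituteEnvVars(value, prefix string) string {
	return envVarPattern.ReplaceAllStringFunc(value, func(match string) string {
		name := envVarPattern.FindStringSubmatch(match)[1]
		if prefix != "" {
			if v, ok := os.LookupEnv(prefix + "_" + name); ok {
				return v
			}
		}
		if v, ok := os.LookupEnv(name); ok {
			return v
		}
		return match // leave unresolvable references as-is
	})
}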
My drone-helm3 and drone versions:
drone:1.7.0
pelotech/drone-helm3:latest
What I tried to do:
We have an automatic nightly deployment that refreshes the actual deployment with the same image tag, so we use the force_upgrade parameter to tell Helm to recreate pods even if the same tag is present.
What happened:
When the nightly deployment triggers, everything works fine, but pods aren't restarted:
$ helm history website
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
160 Thu May 21 08:07:13 2020 superseded trevor-2.3.0 1.0 Upgrade complete # <- Latest CI/CD automatic deploy
161 Fri May 22 00:18:02 2020 deployed trevor-2.3.0 1.0 Upgrade complete # <- Nightly deploy
$ date
vie 22 may 2020 10:30:42 CES
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
website-78f6d94c79-trt58 2/2 Running 0 24h
website-78f6d94c79-vjbk5 2/2 Running 0 24h
As you can see, the pods should be ~12h old if the nightly deployment had refreshed them, but instead they are ~24h old, dating from the latest CI/CD deployment.
As far as I know, force_upgrade sends helm the --force flag, which forces resource updates through a replacement strategy. Am I wrong?
More info:
The .drone.yml step:
- name: helm_deploy
  image: pelotech/drone-helm3
  settings:
    mode: upgrade
    chart: my/chart
    force_upgrade: true
    add_repos: my_repo=http://charts.example.com
    namespace: ${DRONE_REPO_NAME}
    release: ${DRONE_REPO_NAME}
    kube_service_account: system:serviceaccount:helm:helm
    values: istio.hosts[0]=www.example.com,fullnameOverride=${DRONE_REPO_NAME},image.repository=${DRONE_REPO_NAME}
    values_files:
      - ./cicd/templates/values.yaml
    wait_for_upgrade: true
    kube_api_server:
      from_secret: production_api_server
    kube_token:
      from_secret: production_kubernetes_token
    kube_certificate:
      from_secret: production_k8s_ca
Also, values.yaml has a pullPolicy: Always setting to force Kubernetes to always download the image even if the same tag is provided.
This is the output of the helm_deploy step in the Drone web UI:
"chart" has been added to your repositories
Release "website" has been upgraded. Happy Helming!
NAME: website
LAST DEPLOYED: Fri May 22 07:32:14 2020
NAMESPACE: website
STATUS: deployed
REVISION: 162
TEST SUITE: None
NOTES:
Get project URL by running these commands:
https://www.example.com
Thank you very much!
The problem I'm trying to solve:
If someone wants to pass --cleanup-on-fail to helm upgrade, they should be able to do that.
How I imagine it working:
- Add a CleanupOnFail field to internal/helm.Config (be sure to set an envconfig tag).
- Add a CleanupOnFail field to internal/run.Upgrade.
- internal/helm.upgrade passes the CleanupOnFail field when creating the Upgrade struct.
- Upgrade.Prepare adds --cleanup-on-fail to the helm args if CleanupOnFail is true (see the sketch below).
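A sketch of the Prepare plumbing for such a flag, with a pared-down Upgrade that is illustrative, not the project's actual code:

package run

// Upgrade shows only the fields relevant to this issue.
type Upgrade struct {
	Chart         string
	Release       string
	CleanupOnFail bool
}

// helmArgs assembles the argument list Prepare would hand to helm.
func (u *Upgrade) helmArgs() []string {
	args := []string{"upgrade", "--install"}
	if u.CleanupOnFail {
		args = append(args, "--cleanup-on-fail")
	}
	return append(args, u.Release, u.Chart)
}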
The problem I'm trying to solve:
If someone wants to pass --atomic to helm upgrade, they should be able to do that.
How I imagine it working:
- Add an Atomic field to internal/helm.Config.
- Add an Atomic field to internal/run.Upgrade.
- internal/helm.upgrade passes the Atomic field when creating the Upgrade struct.
- Upgrade.Prepare adds --atomic to the helm args if Atomic is true.

- Update .drone.yml to run the test and lint steps for new PRs.
- Update .drone.yml to publish an image to docker hub when a PR is merged.
- Add a protection rule on master that requires CI to pass before merging a PR.
- Document an example .drone.yml stanza; the image drone setting will need to be a placeholder.
- Document how to use drone exec to build and deploy a plugin image.
The problem I'm trying to solve:
helm2's --timeout flag specified a number of seconds for the timeout. In helm3 it uses a string formatted for golang's ParseDuration function, e.g. "200s".
If someone is upgrading from drone-helm and they have a timeout: 200 in their settings, it won't do what they expect. I'm not sure whether helm will just exit with an error or set the timeout to 0, but either way the deployment won't succeed.
How I imagine it working:
I see two options:
- If the timeout is a bare number, append an s so it means the same thing it did in helm2 (a sketch of this follows).
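A sketch of that option, assuming a simple digits-only check; the function name is illustrative, not the project's actual code:

package helm

import "regexp"

var bareNumber = regexp.MustCompile(`^\d+$`)

// normalizeTimeout appends "s" to a bare-number timeout so a helm2-era
// setting like "200" behaves like helm3's "200s".
func normalizeTimeout(timeout string) string {
	if bareNumber.MatchString(timeout) {
		return timeout + "s"
	}
	return timeout
}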
drone-helm has a handful of settings that correspond to helm2 commands/flags that don't exist in helm3. If we see those env vars, it probably means they were left over during an upgrade from drone-helm. drone-helm3 should emit a warning that advises the plugin consumer to remove those config settings. We don't want a user to experience the frustration of "I've set this setting, why isn't it being applied??"
Deprecated env vars:
- PURGE (for adding --purge to helm delete; helm3's delete command has no --purge flag)
- RECREATE_PODS (for adding --recreate-pods to helm upgrade; helm3's upgrade command has no --recreate-pods flag)
- TILLER_NS (Tiller setting)
- UPGRADE (Tiller setting)
- CANARY_IMAGE (Tiller setting)
- CLIENT_ONLY (Tiller setting)
- STABLE_REPO_URL (Tiller setting)
Remember to look for both the prefixed and non-prefixed forms (e.g. $PURGE and $PLUGIN_PURGE).
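A sketch of the warning pass, assuming drone's PLUGIN_ prefix for settings; the function name is illustrative:

package helm

import (
	"fmt"
	"io"
	"os"
)

// deprecatedVars are helm2-era settings with no helm3 equivalent.
var deprecatedVars = []string{
	"PURGE", "RECREATE_PODS", "TILLER_NS", "UPGRADE",
	"CANARY_IMAGE", "CLIENT_ONLY", "STABLE_REPO_URL",
}

// warnDeprecated prints a warning for each deprecated variable that is
// set, checking both the bare and PLUGIN_-prefixed forms.
func warnDeprecated(stderr io.Writer) {
	for _, name := range deprecatedVars {
		for _, candidate := range []string{name, "PLUGIN_" + name} {
			if _, ok := os.LookupEnv(candidate); ok {
				fmt.Fprintf(stderr, "Warning: $%s is set but has no effect in helm3; please remove it from your settings.\n", candidate)
			}
		}
	}
}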
The problem I'm trying to solve:
We've created a couple documentation issues recently (#45, #43, and now this one) and they don't fit naturally into either of the existing issue templates.
How I imagine it working:
Add a new file, .github/ISSUE_TEMPLATE/documentation.md. It should have labels: documentation in its yaml front matter; I'll leave it to the implementor to come up with a reasonable title/description/contents.
The problem I'm trying to solve:
In the parameter_reference's "where to put settings" section, we should mention how the code behaves when a setting appears in both the settings and environment stanzas: the value in environment wins.
There's a test in config_test.go that demonstrates the behavior.
Currently, the help step just calls helm help and nothing more. It would be more useful if it gave usage information for drone-helm3 itself, either instead of the helm usage or in addition to it.
Some potentially-useful information to include:
- The need for a command setting (or a DRONE_EVENT env var).
- The available commands, such as delete and lint (even if #3 and #4 aren't complete).
See also #15, which will create a circumstance in which someone might actually see the help command's output 🙃
What happened:
Drone can't find the path for the chart; I think drone went looking for the path inside the container. Because I set a wrong KUBE_API, I would expect drone to report that it can't connect to the KUBE_API_SERVER, but instead it reports that the path cannot be found.
More info:
name: helm-deploy
image: pelotech/drone-helm3
environment:
  api_server: http://masterIP:6443
  kube_token:
    from_secret: dev_kubernetes_token
settings:
  skip_tls_verify: true
  mode: upgrade
  chart: ./helm
  release: myapp
  wait_for_upgrade: true
  service_account: admin-user
  valuesi_fiies: ["/helm/myapp.yaml"]
  namespace: myapp
trigger:
  event:
    - tag
  branch:
    - master
In plan.go's upgrade, lint, and uninstall functions, if config.UpdateDependencies is true, there should be an additional Step that calls helm dependency update $chart. It needs to happen before the main command, but I don't think it matters whether it happens before or after InitKube.
The UpdateDependencies struct will look like Lint, though a little simpler:
- It only needs Chart and cmd fields.
- The Prepare() method should require a nonempty Chart.
- It should pass the --debug global flag.
The problem I'm trying to solve:
Helm repositories can be configured to require authentication (typically using the HTTP Basic scheme). In order to access a protected repository, you can configure its URL with inline credentials, e.g. https://user:password@charts.example.com.
The credentials for a Helm repository may be stored as an environment variable. As Drone doesn't support interpolating arbitrary variables in its pipeline by default, the Helm plugin could do this instead.
How I imagine it working:
Leverage the existing arbitrary environment variable interpolation used for values strings, adding this functionality to the add_repos configuration parameter.
What needs explanation:
In addition to parameter_reference.md, there is a parameter reference in drone-plugin-index. If someone makes changes to the available settings, they need to make a corresponding change to that parameter reference. That's easy to forget.
Update .github/pull_request_template.md to mention that config changes need to go over there as well.
There's a kubeconfig file at the root of the repository. internal/run.InitKube uses it to create a .kube/config. Since it's a Go template, it's code; since it's code, it should have tests.
At minimum, we should verify that it's a syntactically-valid Go template (see the sketch below). Ideally, there should be a few more:
- i.template.Execute populates the expected values.
- i.template.Execute produces syntactically-valid yaml.
- Every conditional branch in the template is exercised: if eq .SkipTLSVerify true/else, if .Namespace, and if .Token/else if .EKSCluster.
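A sketch of the minimal test, assuming the template sits at the repository root relative to internal/run and that gopkg.in/yaml.v2 is available; the path and sample values are assumptions:

package run

import (
	"bytes"
	"testing"
	"text/template"

	yaml "gopkg.in/yaml.v2"
)

// TestKubeconfigTemplate checks that the template parses and that its
// output is valid yaml.
func TestKubeconfigTemplate(t *testing.T) {
	tpl, err := template.ParseFiles("../../kubeconfig")
	if err != nil {
		t.Fatalf("not a valid Go template: %s", err)
	}

	values := map[string]interface{}{
		"APIServer":     "https://kube.example.com",
		"Token":         "abc123",
		"SkipTLSVerify": true,
		"Namespace":     "default",
	}
	var buf bytes.Buffer
	if err := tpl.Execute(&buf, values); err != nil {
		t.Fatal(err)
	}
	var out map[string]interface{}
	if err := yaml.Unmarshal(buf.Bytes(), &out); err != nil {
		t.Errorf("template output is not valid yaml: %s", err)
	}
}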
My drone-helm3 and drone versions:
drone 1.6.5, drone-helm3 probably latest, I didn't pin tag
What I tried to do:
helm installation
What happened:
It reports some crazy error:
Generated config: {Command:upgrade DroneEvent:push UpdateDependencies:false DependenciesAction: AddRepos:[] RepoCertificate: RepoCACertificate: Debug:true Values:image.tag="master-f51c9d45" StringValues: ValuesFiles:[] Namespace: KubeToken:(redacted) SkipTLSVerify:false Certificate:******** APIServer:******** ServiceAccount:deploy ChartVersion: DryRun:false Wait:false ReuseValues:false KeepHistory:false Timeout: Chart:chart Release:dijaspora Force:false AtomicUpgrade:false CleanupOnFail:false LintStrictly:false Stdout:0xc00008a008 Stderr:0xc00008a010}
2 | calling *run.InitKube.Prepare (step 0)
3 | loading kubeconfig template from /root/.kube/config.tpl
4 | creating kubeconfig file at /root/.kube/config
5 | calling *run.Upgrade.Prepare (step 1)
6 | Generated command: '/usr/bin/helm --debug upgrade --install --set image.tag="master-f51c9d45" dijaspora chart'
7 |
8 | calling *run.InitKube.Execute (step 0)
9 | writing kubeconfig file to /root/.kube/config
10 | calling *run.Upgrade.Execute (step 1)
11 | history.go:52: [debug] getting history for release dijaspora
12 | upgrade.go:82: [debug] preparing upgrade for dijaspora
13 | Error: UPGRADE FAILED: YAML parse error on dijaspora/templates/deployment.yaml: error converting YAML to JSON: yaml: line 29: did not find expected key
14 | helm.go:75: [debug] error converting YAML to JSON: yaml: line 29: did not find expected key
15 | YAML parse error on dijaspora/templates/deployment.yaml
16 | helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
17 | /home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:146
18 | helm.sh/helm/v3/pkg/releaseutil.SortManifests
19 | /home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:106
20 | helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
21 | /home/circleci/helm.sh/helm/pkg/action/install.go:489
22 | helm.sh/helm/v3/pkg/action.(*Upgrade).prepareUpgrade
23 | /home/circleci/helm.sh/helm/pkg/action/upgrade.go:166
24 | helm.sh/helm/v3/pkg/action.(*Upgrade).Run
25 | /home/circleci/helm.sh/helm/pkg/action/upgrade.go:83
26 | main.newUpgradeCmd.func1
27 | /home/circleci/helm.sh/helm/cmd/helm/upgrade.go:136
28 | github.com/spf13/cobra.(*Command).execute
29 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:826
30 | github.com/spf13/cobra.(*Command).ExecuteC
31 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:914
32 | github.com/spf13/cobra.(*Command).Execute
33 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:864
34 | main.main
35 | /home/circleci/helm.sh/helm/cmd/helm/helm.go:74
36 | runtime.main
37 | /usr/local/go/src/runtime/proc.go:203
38 | runtime.goexit
39 | /usr/local/go/src/runtime/asm_amd64.s:1357
40 | UPGRADE FAILED
41 | main.newUpgradeCmd.func1
42 | /home/circleci/helm.sh/helm/cmd/helm/upgrade.go:138
43 | github.com/spf13/cobra.(*Command).execute
44 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:826
45 | github.com/spf13/cobra.(*Command).ExecuteC
46 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:914
47 | github.com/spf13/cobra.(*Command).Execute
48 | /go/pkg/mod/github.com/spf13/[email protected]/command.go:864
49 | main.main
50 | /home/circleci/helm.sh/helm/cmd/helm/helm.go:74
51 | runtime.main
52 | /usr/local/go/src/runtime/proc.go:203
53 | runtime.goexit
54 | /usr/local/go/src/runtime/asm_amd64.s:1357
55 | while executing *run.Upgrade step: exit status 1
More info:
If I run "helm upgrade --install --set image.tag="master-f51c9d45" dijaspora chart" in my zsh, everything is fine. So there are no errors in deployment.yaml or anywhere, but I get a false error using this plugin. Do you preload values.yaml at all? I mean helm should do this, but what is wrong here?
- name: deploy
  image: pelotech/drone-helm3
  settings:
    kube_api_server:
      from_secret: deploy_server
    kube_certificate:
      from_secret: deploy_cert
    kube_token:
      from_secret: deploy_token
    service_account: deploy
    mode: upgrade
    chart: chart
    release: dijaspora
I'm a big believer in treating warnings as errors. Sure, many warnings are false positives, but if a normal build has warnings, you're likely to overlook any new ones that indicate a real problem.
For the sake of the -Werr aficionados out there, add a lint_strictly setting¹ that sends the --strict flag to helm lint.
¹ Just calling it strict might be ok, but the struct field in internal/helm.Config should definitely have "Lint" in the name somewhere since the setting is specific to that command.
- An AddRepo Step that can call helm repo add $name $url. There are no command-specific flags; it will need to at least pass the --debug global flag (and maybe others).
- In plan.go, if config.Repos is nonempty, add an AddRepo for each repo in the list.
- Each repo is given in the form "name=url"; see the drone-helm tests for an example, and the parsing sketch below.
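A sketch of parsing the "name=url" form; the helper name is hypothetical:

package run

import (
	"fmt"
	"strings"
)

// parseRepo splits a "name=url" spec. URLs can contain '=' themselves
// (e.g. in query strings), so only split on the first one.
func parseRepo(repo string) (name, url string, err error) {
	parts := strings.SplitN(repo, "=", 2)
	if len(parts) != 2 {
		return "", "", fmt.Errorf("bad repo spec %q; expected name=url", repo)
	}
	return parts[0], parts[1], nil
}

The problem I'm trying to solve: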
It would be great to have a ca-file parameter for "helm repo add", for those who have a self-signed certificate on a local repository (such as Harbor).
Usually, I have to add the repo with the command "helm repo add my_repo https://my_repo.internal --ca-file ca.crt".
How I imagine it working:
Create a "ca-file" parameter that can be read from a secret, for example.
Thanks!
I've been trying to follow the recommended versioning process for go modules. The auto_tag setting in our .drone.yml ensures that the docker image gets the version number from the git tag, so that part is handled.
However, the current process for creating version tags is "after merging a pull request, remember to put a tag on the merge commit, then remember to push the tag." That's two "remember to"s too many. We should find something that can increment the version number automatically.
I haven't found a drone plugin that can do it for us, but I did find a github Action plugin that should be pretty easy to port to drone (it might work right out of the box, if the image is published somewhere). It looks for #major, #minor, or #patch in any commit message since the previous tag and bumps the version accordingly. It can optionally do a patchlevel bump if nothing else is specified.
The current design is pretty idiosyncratic: it makes total sense to me, the person who wrote it, but someone coming in for the first time will probably find it bewildering.
There should probably be New${StepName}() functions for the various Steps; they should either take the global config as an argument or use the .With${OptionName} style (both styles are sketched below).
The whole Step/Plan divide may not even be worthwhile; I wrote it in a mindset more appropriate to a large project with ongoing new-feature development.
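A sketch of the two constructor styles under discussion, with pared-down stand-ins for the real types:

package run

// Config and Upgrade are simplified stand-ins, not the actual types.
type Config struct{ Debug bool }

type Upgrade struct {
	Chart, Release, Values string
	Debug                  bool
}

// NewUpgrade is the "take the global config as an argument" style.
func NewUpgrade(cfg Config, chart, release string) *Upgrade {
	return &Upgrade{Chart: chart, Release: release, Debug: cfg.Debug}
}

// WithValues is the ".With${OptionName}" style; calls can be chained.
func (u *Upgrade) WithValues(values string) *Upgrade {
	u.Values = values
	return u
}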
internal/helm.Config has a KubeConfig field that specifies the destination for the kubernetes config file. It defaults to /root/.kube/config (which is the default location for a kube config file in general).
It's in the drone-helm3 config because it was in the drone-helm config, but on further reading I don't think we actually need it. It's hard to imagine a circumstance where someone would need to configure that. I guess if their Drone workspace was /root/.kube for some reason?
I don't think we need it for feature parity, either; although drone-helm will write the config file to the specified location, as far as I can tell it doesn't tell helm to look there. It never sends the --kubeconfig flag to helm, at least. Helm can use env vars in place of some of its CLI flags, but kubeconfig doesn't seem to be one of them (see the helm2 code here and the helm3 code here), so I don't think it's getting passed through from the environment stanza in .drone.yml either.
So this issue is a two-parter:
- Verify that a configuration without the kube_config setting can result in a successful deploy.
- Remove it from internal/helm.Config and internal/run.Config, then fix test/compilation errors until nothing tries to say --kubeconfig.
Hi! :)
I'm running Drone in a new cluster and it's failing to deploy because of the permissions for the token I'm using. Before I was using the token for the Tiller service account, but now that Tiller is gone I was trying the "default" service account in the "kube-system" namespace. It doesn't have permissions for e.g. secrets so it can't deploy. What kind of service account can I use to deploy? Could you give me an example? Thanks!