
helm's People

Contributors

6543, anbraten, andrexus, antaanimosity, ashtonian, babykart, bracketjohn, crimsonfez, davidcurrie, eliasscosta, fpiesche, gapodo, genofire, iderr, jsoref, laszlocph, modulartaco, pat-s, pre-commit-ci[bot], renovate[bot], smuth4, temikus, vquie, woodpecker-bot, wreiner, ymettier


helm's Issues

Rewrite the chart for woodpecker-server for a better UX

Clear and concise description of the problem

The Helm chart currently requires manual creation of secrets, copying env vars, and knowing which options clash (e.g. configuring two forges simultaneously).

Suggested solution

Switch the woodpecker-server chart from "copy these env vars into your file" to "set gitlab, gitea, github, etc. to true and fill in the values defined in the respective section".

This would mean providing simple settings in values.yaml, possibly validating those settings, and thereby abstracting the actual configuration away from the user.

Implementing this would allow us to change parameters, option ordering, etc. in the non-user-facing part and therefore provide a hands-off upgrade path for chart-based installations.
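
Roughly, the values.yaml could end up looking something like this (a sketch only; the key names are illustrative and not part of the current chart):

server:
  forge:
    # enable exactly one forge and fill in its section
    gitea:
      enabled: true
      url: https://gitea.example.org            # example URL
      clientSecretName: woodpecker-gitea-oauth  # existing Secret holding client id/secret
    github:
      enabled: false
    gitlab:
      enabled: false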

Alternative

In this case it is pretty much "do it or document it". Documentation might be useful, but it does not solve the problem of issues and support effort when users upgrade across a breaking change.

Additional context

It might not be strictly necessary, but I would be open to implementing it.

`ErrImagePull` because the tag from `appVersion` doesn't actually exist

The Helm chart has tag: "", which falls back to appVersion and currently resolves to 1.0.2. Unfortunately, on Docker Hub the image tag is actually v1.0.2, so I had to override the tag in values.yaml.

The obvious solution(s) to me are:

  1. prepend v to the tag in the Helm chart
  2. push both v1.0.2 and 1.0.2 to Docker Hub

I'm happy to contribute (1) if the maintainers are OK with it, but if I were a maintainer, I would probably consider doing (2) just to remove possible edge cases for end users.
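
For reference, the override I mentioned looks roughly like this in values.yaml (assuming the usual server/agent image blocks):

server:
  image:
    # pin the tag explicitly because appVersion (1.0.2) has no matching image on Docker Hub
    tag: "v1.0.2"
agent:
  image:
    tag: "v1.0.2"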

What do you think? 😃

Server doesn't work (nicely) with multiple replicas

When enabling multiple replicas, runs triggered in the UI stay "pending" forever and are not sent to the agents properly. It seems only one server pod is able to dispatch them, and if a user gets a session with the other pod, the behavior described above occurs.

I am not implying it should work; I'm just noting it down for visibility.

Killing a "wrong" pod and getting assigned a new session with the "correct" one will let subsequent restarts be triggered successfully.

Server pod errors out on fresh installation: could not setup service manager: forge not configured

I just started playing around with woodpecker on Kubernetes (k3s in my case).

However, even without any user-provided values, the current chart version leads to failing pods in my tests:

$ helm install woodpecker woodpecker/woodpecker
[...]
$ kubectl logs woodpecker-server-0
{"level":"info","time":"2024-06-18T08:37:45Z","message":"log level: info"}
{"level":"fatal","error":"can't setup globals: could not setup service manager: forge not configured","time":"2024-06-18T08:37:45Z","message":"error running server"}
$

Is there something that I need to prepare before installation? If so, that step is missing in the README (or I have completely missed it)...

Kind Regards,
Johannes

Disabling agent persistence renders invalid StatefulSet

Steps to reproduce

  • Use version 1.2.1 from https://woodpecker-ci.org/
  • Set woodpecker.agent.persistence.enabled to false

Description

While the defaults with 1 GiB of agent persistence work fine out of the box, disabling agent persistence results in an invalid StatefulSet: no agent pods are spawned and the StatefulSet controller fails with the following error:

Warning FailedCreate 3m3s (x17 over 8m31s) statefulset-controller create Pod woodpecker-agent-0 in StatefulSet woodpecker-agent failed error: Pod "woodpecker-agent-0" is invalid: spec.containers[1].image: Required value

See the screenshot (omitted here) for a comparison between default agent persistence and disabled agent persistence.

This is a result of the If-statement here https://github.com/woodpecker-ci/helm/blob/main/charts/woodpecker/charts/agent/templates/statefulset.yaml#L109 and especially the If-Else-statement further down here https://github.com/woodpecker-ci/helm/blob/main/charts/woodpecker/charts/agent/templates/statefulset.yaml#L133

I don't quite get what the point of the if/else statement is in the first place, since there won't be a volume mount at all when agent persistence is disabled.

CI broken for helm charts Pull Requests since May 24th

I submitted two PRs recently (after May 24th).

The CI that does pre-checks fails for reasons I don't understand.

I can't find any CI success since May 24th.

If the CI is really broken, could you please repair it?
If the CI is working, could you please review the following runs (and their PRs) and explain what is wrong?

Thanks in advance

Helm chart persistentVolume.existingClaim missing

Component

server

Describe the bug

The Helm chart for woodpecker-server supports the "existingClaim" option under persistentVolume, but the values.yml does not include the option.
The option is required if you want to use an existing PersistentVolumeClaim.

persistentVolume:
  enabled: true
  ## Name of an existing PVC
  # existingClaim:
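
For reference, a minimal sketch of the template logic that would consume this value (the helper name used for the default claim is hypothetical):

volumes:
  - name: data
    persistentVolumeClaim:
      # fall back to the chart-managed claim when no existing claim is given
      claimName: {{ .Values.persistentVolume.existingClaim | default (include "woodpecker-server.fullname" .) }}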

System Info

Helm chart 0.15.x

Additional context

No response

woodpecker-server should be a StatefulSet

Clear and concise description of the problem

In Kubernetes, a pod that needs a PersistentVolumeClaim with accessModes ReadWriteOnce is typically deployed via a StatefulSet.

Suggested solution

Convert the current woodpecker-server Deployment into a StatefulSet.

PS - Let me know if you want a PR.
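
A minimal sketch of the shape this could take, using a volumeClaimTemplate so each replica gets its own RWO volume (names and sizes are illustrative, not the chart's actual template):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: woodpecker-server
spec:
  serviceName: woodpecker-server
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: woodpecker-server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: woodpecker-server
    spec:
      containers:
        - name: server
          image: docker.io/woodpeckerci/woodpecker-server
          volumeMounts:
            - name: data
              mountPath: /var/lib/woodpecker
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi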

Alternative

The ability to configure the accessModes of the PersistentVolumeClaim, but I think that is a bad idea, for example with the SSL / Let's Encrypt feature that stores certificates in /var/lib/woodpecker/golang-autocert.
Another use case is an accidental scale to 2 replicas with the SQLite backend: keeping accessModes at ReadWriteOnce avoids simultaneous access to the database file by two woodpecker-server processes.

Additional context

No response

server: serviceAccount is not needed

Only the agent needs a serviceAccount to bind its specific RBAC.
The serviceAccount of the server component could therefore be disabled by default.
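
A possible values.yaml shape for this, following common chart conventions (hypothetical keys, not the chart's current API):

server:
  serviceAccount:
    create: false   # the server needs no RBAC, so skip creating an account by default
agent:
  serviceAccount:
    create: true    # the agent keeps its account for the Kubernetes backend RBAC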

Secret generation causes issues for GitOps approach

After the latest upgrade to the 1.1 chart I got: can't setup store: could not open datastore: no value matched key

I follow a GitOps approach (using ArgoCD) and never have secrets in a values.yaml by using SealedSecrets operator currently.

When I upgraded from the 1.0.x chart to the 1.1 chart, the secret generated by the SealedSecrets operator was overwritten, caused by #144.

I had to set the secrets value for both agent and server to:

secrets: {}

so the SealedSecrets operator was able to create the decrypted secrets.

This secret generation wasn't as obvious as I thought: I missed (or, more precisely, dismissed) it in the changelog because I had previously set createSecret: false and assumed that behavior hadn't changed.

The current solution will likely cause issues for users running ArgoCD or FluxCD with secret solutions like Vault, sealed-secrets, or SOPS (often combined with FluxCD). Maybe this should be documented somehow?
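
One pattern that might deserve documenting: disable the generated secret and point the chart at the externally managed one instead (Secret name illustrative; key names based on the values shown elsewhere in this tracker, so verify against the current chart):

agent:
  extraSecretNamesForEnvFrom:
    - woodpecker-secret   # Secret managed by SealedSecrets/Vault/SOPS
server:
  extraSecretNamesForEnvFrom:
    - woodpecker-secret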

OCI Chart Not Published

The docs reference the chart being published as an OCI artifact: oci://registry-1.docker.io/woodpeckerci/helm but there's nothing there at the moment: https://hub.docker.com/r/woodpeckerci/helm/tags

From what I can see in .woodpecker/release.yaml, there's no helm push.

Here's an example from a GHA workflow: https://github.com/prometheus-community/helm-charts/blob/ef3fe76dbb88ce712eaae22e8e14a868d41df061/.github/workflows/release.yaml#L53-L61

shopt -s nullglob
for pkg in .cr-release-packages/*; do
  if [ -z "${pkg:-}" ]; then
    break
  fi
  helm push "${pkg}" "oci://ghcr.io/${GITHUB_REPOSITORY_OWNER}/charts"
done
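
A rough sketch of what an equivalent step could look like in the Woodpecker release pipeline (not the repo's actual .woodpecker/release.yaml; the secret wiring and registry path are assumptions, and the exact Woodpecker secret syntax may differ by version):

steps:
  helm-push-oci:
    image: alpine/helm
    commands:
      # log in to the OCI registry, then push every packaged chart
      - helm registry login registry-1.docker.io -u "$DOCKER_USER" -p "$DOCKER_PASSWORD"
      - |
        for pkg in .cr-release-packages/*.tgz; do
          helm push "$pkg" oci://registry-1.docker.io/woodpeckerci
        done
    secrets: [docker_user, docker_password]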

"Meta" helm chart for easy beginner setup

Clear and concise description of the problem

  • When showing off Woodpecker it would be nice to say it's only one helm install away (possibly even with an optional Gitea instance).

  • Especially when working with "custom" tags (e.g. pull_XXXX, or next- once that becomes a thing), keeping the agent and server in sync is beneficial if not required.

Suggested solution

  • Add a "meta" chart (chart of charts) referencing woodpecker-server and woodpecker-agent
  • Provide either a values.yaml via chart with common "ready to go" settings or write docs referencing or including a "demo" or "production-light" setup.
  • Write a Administrator Getting started
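
Rough Chart.yaml sketch for such a meta chart (names, versions, and repository URLs are illustrative):

apiVersion: v2
name: woodpecker
description: Meta chart bundling Woodpecker server and agent
version: 0.1.0
dependencies:
  - name: woodpecker-server
    version: ">=0.3.0"
    repository: https://woodpecker-ci.org
  - name: woodpecker-agent
    version: ">=0.3.0"
    repository: https://woodpecker-ci.org
  - name: gitea                              # optional demo forge
    version: ">=9.0.0"
    repository: https://dl.gitea.com/charts  # example repository
    condition: gitea.enabled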

Alternative

Just write some nice docs guiding less experienced users (whether with Kubernetes, Helm, or Woodpecker itself) towards an easy setup.

Additional context

It's just a nice-to-have, but it would probably be a good addition after #4 is done.

Helm template error when upgrading to 1.2.x

Steps to reproduce

  • Use version 1.2.0 or higher from https://woodpecker-ci.org/
  • Set woodpecker.agent.persistence.enabled to true
  • Configure a volume in woodpecker.agent.extraVolumes and woodpecker.agent.extraVolumeMounts

Description

When woodpecker.agent.extraVolumes and woodpecker.agent.extraVolumeMounts are configured and woodpecker.agent.persistence.enabled is set to true, an error occurs when rendering the Helm templates.

Error: YAML parse error on woodpecker/charts/woodpecker/charts/agent/templates/statefulset.yaml: error converting YAML to JSON: yaml: line 42: did not find expected '-' indicator
helm.go:84: [debug] error converting YAML to JSON: yaml: line 42: did not find expected '-' indicator
YAML parse error on woodpecker/charts/woodpecker/charts/agent/templates/statefulset.yaml

My agent config

woodpecker:
  agent:
    enabled: true
    replicaCount: 2
    image:
      tag: v2.4.1-alpine
    persistence:
      enabled: true
    extraVolumes:
      - name: ca-pemstore
        configMap:
          name: ca-pemstore
    extraVolumeMounts:
      - name: ca-pemstore
        mountPath: /etc/ssl/certs/CustomCA.pem
        subPath: CustomCA.pem
    extraSecretNamesForEnvFrom:
      - agent-env

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

Awaiting Schedule

These updates are awaiting their schedule.

  • chore(deps): lock file maintenance

Detected dependencies

helm-values
  • charts/woodpecker/charts/agent/values.yaml
  • charts/woodpecker/charts/server/values.yaml
  • charts/woodpecker/values.yaml
helmv3
  • charts/woodpecker/Chart.yaml
regex
  • .woodpecker/test.yml
      helm-unittest/helm-unittest v0.5.2
  • charts/woodpecker/Chart.yaml
      woodpecker-ci/woodpecker 2.7.0
  • charts/woodpecker/charts/agent/Chart.yaml
      woodpecker-ci/woodpecker 2.7.0
  • charts/woodpecker/charts/server/Chart.yaml
      woodpecker-ci/woodpecker 2.7.0
woodpecker
  • .woodpecker/release-helper.yml
      woodpeckerci/plugin-ready-release-go 1.2.0
  • .woodpecker/release.yml
      quay.io/helmpack/chart-releaser v1.6.1
      quay.io/helmpack/chart-releaser v1.6.1
      jnorwood/helm-docs v1.14.2
      appleboy/drone-git-push 1.1.0
  • .woodpecker/test.yml
      alpine/helm 3.15.4
      alpine/helm 3.15.4
      quay.io/helmpack/chart-testing v3.11.0


clone step using RWX volume takes long time to complete

I used the helm chart to deploy woodpecker server and agent.

With `WOODPECKER_BACKEND_K8S_STORAGE_CLASS: "nfs-rwx-storage"`, which is an RWX storage class, the clone step takes an unusually long time (~2 min) to complete. When I disable RWX, it takes less than 20 s to complete.


I can see in the logs that the volume mount is immediately available to the build pod.


Has anyone experienced this? Is there any way to disable volume mounting?

Default version v0.15.9 doesn't appear to actually support kubernetes

Hello there. I am experimenting with this project for a friend who wants to use it in their business, and might end up using it myself, we'll see.

During the deployment, I encountered the following issues:

  1. As mentioned by #45, the helm chart I was supposed to use changed, but nobody told me in the docs. So I was trying to deploy an old, buggy helm chart, which was not a good time.
  2. I got a significant number of complaints from woodpecker-agent that kubernetes was not a valid backend. In other words, the chart deploys an invalid configuration, which is an... interesting choice. I would file this as a bug against Woodpecker if it were a regression, but from what I can tell the Kubernetes integration is new (even though this repo has existed since 2021), so this might be expected behavior. It is fixed in the latest next version as well. But there are no notes about this, and the chart does not default to a version that actually supports Kubernetes, which is disappointing.

Even if you don't fix this, at least leave this issue open for people to find quick answers to their problem if they encounter the same one.

I got it working here: HelmRelease (permalink).

Granted, I haven't tested any pipelines, but everything turned on and the webpage loaded, so it probably works.

Thanks for the project!

Can't upgrade to 1.1.0 using helm controller

Hi,

Congrats on the https://github.com/woodpecker-ci/helm/releases/tag/1.1.0 release. Sadly I have issues updating it in my k3s system. I use the k3s helm-controller, but the following definition fails:

apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: woodpecker
  namespace: woodpecker
spec:
  version: 1.1.0
  chart: woodpecker
  repo: https://woodpecker-ci.org/
  targetNamespace: woodpecker

When running helm locally, all I see is 1.0.3:

$ helm repo list
NAME                    URL                                    
...
woodpecker              https://woodpecker-ci.org/ 
$ helm repo update 
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "woodpecker" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm search repo woodpecker --versions
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                   
woodpecker/woodpecker   1.0.3           2.1.1           A Helm chart for Woodpecker CI
woodpecker/woodpecker   1.0.2           2.1.0           A Helm chart for Woodpecker CI
...

BTW: I noticed the Artifact Hub feed also lists 1.0.3 as the latest version: https://artifacthub.io/api/v1/packages/helm/woodpecker-ci/woodpecker/feed/rss

missing secret

Deploying from Terraform. It would be nice if the secret creation were optional so the chart worked out of the box. I think the secret references in the values should likely just be a comment.

Alternatively it could be optionally created:

{{- if .Values.createSecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.secretName }}
type: Opaque
{{- end }}

It could also be a pre-install chart hook that creates the secret if not present:

apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-secret-check
  annotations:
    "helm.sh/hook": pre-install
spec:
  template:
    spec:
      containers:
      - name: check-secret
        image: appropriate-image
        command: ["/bin/sh"]
        args: ["-c", "upsert_secret.sh"]
      restartPolicy: OnFailure

Kubernetes default helm deployment configures github

Component

server

Describe the bug

The default Kubernetes Helm deployment enables Woodpecker's GitHub integration. But Woodpecker can be used in scenarios without GitHub; if someone does not want the GitHub integration, it must be actively disabled.

My proposal is to remove the GitHub configs from the Helm chart's values.yml:

env:
  WOODPECKER_GITHUB: true

extraSecretNamesForEnvFrom:
- woodpecker-github-client
- woodpecker-github-secret

With this change the documentation also needs to be adjusted.

System Info

Helm chart for 0.15.x

Additional context

No response

Ability to customize the deployment of the agent

Since the last PR on the server, it is possible to configure the labels, annotations, and revision history of the StatefulSet. The goal is to standardize the agent deployment in the same way.

PS - I will do a PR in a few days.

[question] pvc claim fails on k3s

I am using the Helm chart and the docs to get started on k3s. Following the docs, I have changed server.persistentVolume.storageClass to local-path. The server is stuck at Pending with "1 pod has unbound immediate PersistentVolumeClaims". I have tried, without success, to create a persistent volume manually. Admittedly, I am pretty new to the k8s ecosystem, and I might have missed some simple step.

myvalues.yml
agent:
  # -- The number of replicas for the deployment
  replicaCount: 1

  image:
    # -- Overrides the image tag whose default is the chart appVersion.
    tag: ""

  env:
    # -- Add the environment variables for the agent component
    WOODPECKER_BACKEND_K8S_VOLUME_SIZE: 5G

server:
  image:
    tag: ""

  # -- Add environment variables for the server component
  env:
    WOODPECKER_ADMIN: "aphilas"
    WOODPECKER_HOST: "https://ci.example.org"
    WOODPECKER_GITHUB: "true"

  persistentVolume:
    # -- Defines the size of the persistent volume
    size: 5Gi
    # -- Defines the storageClass of the persistent volume
    storageClass: "local-path"
    # -- Defines the path where the volume should be mounted
    # mountPath: "/var/lib/woodpecker"
    mountPath: null

  service:
    # -- The port of the service
    port: &servicePort 80

  ingress:
    # -- Enable the ingress for the server component
    enabled: true
    # -- Add annotations to the ingress
    annotations:
      kubernetes.io/ingress.class: traefik
      cert-manager.io/cluster-issuer: letsencrypt-prod
      traefik.ingress.kubernetes.io/router.middlewares: default-redirect-https@kubernetescrd

    hosts:
      - host: ci.example.org
        paths:
          - path: /
            backend:
              serviceName: ci.example.org
              servicePort: *servicePort
    tls:
      - secretName: woodpecker-tls
        hosts:
          - ci.example.org
helm upgrade --install -f myvalues.yml woodpecker-server woodpecker/woodpecker-server

Original message on Matrix

Default service account has no permissions to create PVC

With next (woodpecker-agent@sha256:6a625b1a1b40f7f840a3a4da7230f45735232fa14b45402bcad10b27a7faf8c3) I get the following on a fresh deployment:

woodpecker-agent 10:43PM WRN cancel signal received error="rpc error: code = Unknown desc = Step finished with exitcode 1, persistentvolumeclaims is forbidden: User \"system:serviceaccount:woodpecker-agent:woodpecker-agent\" cannot create resource \"persistentvolumeclaims\" in API group \"\" in the namespace \"woodpecker-agent\""

with this configuration:

env:
  WOODPECKER_SERVER: woodpecker-server.woodpecker-server.svc.cluster.local:9000
  WOODPECKER_BACKEND_K8S_NAMESPACE: woodpecker-agent
  WOODPECKER_BACKEND_K8S_STORAGE_CLASS: ebs-sc
  WOODPECKER_BACKEND: kubernetes
  WOODPECKER_BACKEND_K8S_STORAGE_RWX: false

I'd expect the service account created by the chart to have the permissions required to create the PVC.
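
For comparison, this is roughly the Role/RoleBinding the agent's service account would need for PVC-backed pipelines (namespace and names match the values above; the verbs are a guess at the minimum set, not taken from the chart):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: woodpecker-agent
  namespace: woodpecker-agent
rules:
  # allow the agent to manage per-pipeline workspace claims
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["create", "get", "list", "delete"]
  # the agent also manages step pods and reads their logs
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: woodpecker-agent
  namespace: woodpecker-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: woodpecker-agent
subjects:
  - kind: ServiceAccount
    name: woodpecker-agent
    namespace: woodpecker-agent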

Agent pod crashes with "agent could not auth: please provide an auth token"

Hi, the problem is as the title states. The Helm chart version is 1.5.0, the latest as of this moment.

The agent pod crashes. The logs of the agent pod state:

{"level":"info","time":"2024-07-18T17:33:24Z","message":"log level: info"}
{"level":"info","time":"2024-07-18T17:33:24Z","message":"no agent config found at '/etc/woodpecker/agent.conf', start with defaults"}
{"level":"fatal","error":"rpc error: code = Unknown desc = agent could not auth: please provide a token","time":"2024-07-18T17:33:24Z","message":"error running agent"}

It looks like the secret isn't being loaded.

The following is the agent section of my values.yml:

agent:
  enabled: true
  env:
    WOODPECKER_BACKEND_K8S_NAMESPACE: [...]
    WOODPECKER_BACKEND_K8S_STORAGE_CLASS: "standard"
  persistence:
    storageClass: "standard"
    size: "10Gi"
  replicaCount: 1

Looking at the environment variable section of kubectl describe for the agent pod:

    Environment Variables from:
      woodpecker-secret  Secret  Optional: false
    Environment:
      WOODPECKER_BACKEND:                      kubernetes
      WOODPECKER_BACKEND_K8S_NAMESPACE:        [...]
      WOODPECKER_BACKEND_K8S_POD_ANNOTATIONS:
      WOODPECKER_BACKEND_K8S_POD_LABELS:
      WOODPECKER_BACKEND_K8S_STORAGE_CLASS:    standard
      WOODPECKER_BACKEND_K8S_STORAGE_RWX:      true
      WOODPECKER_BACKEND_K8S_VOLUME_SIZE:      10G
      WOODPECKER_CONNECT_RETRY_COUNT:          1

I believe there should be a WOODPECKER_AGENT_SECRET environment variable, but I don't see one.

I may be configuring something incorrectly. Any help would be greatly appreciated.
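
In case it helps, a manually managed Secret carrying the key I'd expect would look roughly like this (the name woodpecker-secret matches the describe output above; the value is a placeholder):

apiVersion: v1
kind: Secret
metadata:
  name: woodpecker-secret
type: Opaque
stringData:
  # shared token both server and agent must agree on
  WOODPECKER_AGENT_SECRET: "replace-with-a-long-random-string"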

List of registered agents grows indefinitely

The agent configuration docs (https://woodpecker-ci.org/docs/administration/agent-config#using-system-token) say:

the agent registers to the server and store its unique id in a config file

Sadly this does not work well in k8s. The installed agents complain about the missing config file (see the log below), and as a consequence the list of registered agents grows all the time and I need to clean it up manually from time to time.

{"level":"error","error":"open /etc/woodpecker/agent.conf: no such file or directory","time":"2024-01-13T01:28:47Z","message":"could not persist agent config at '/etc/woodpecker/agent.conf'"}

Is it possible to address that somehow in the Helm chart? I am not an expert, so I have no idea whether this is possible or whether it requires code changes.
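
One idea (untested): mount something persistent and writable at /etc/woodpecker via the chart's existing extraVolumes/extraVolumeMounts values so the agent can keep its agent.conf across restarts (the PVC name is illustrative):

agent:
  extraVolumes:
    - name: agent-config
      persistentVolumeClaim:
        claimName: woodpecker-agent-config   # pre-created PVC
  extraVolumeMounts:
    - name: agent-config
      mountPath: /etc/woodpecker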

`<version>` placeholders are incompatible with helm-git plugin

The version in Chart.yaml must be a valid semantic version like 1.0.0, but the repository contains a `<version>` placeholder, which breaks the helm-git plugin:

helm repo add woodpecker-agent-git "git+https://github.com/woodpecker-ci/helm@woodpecker-agent?ref=main&sparse=1"
./helm-git-plugin.sh: line 113: .git/info/sparse-checkout: No such file or directory
Error: validation: chart.metadata.version "<version>" is invalid
Error: looks like "git+https://github.com/woodpecker-ci/helm@woodpecker-agent?ref=main&sparse=1" is not a valid chart repository or cannot be reached: plugin "helm-git" exited with error

Ability to add additional volumes to the server

I'm trying to make a container registry config available as described here. The easiest way I can see to do that would be to mount a Secret containing the Docker config file and then set WOODPECKER_DOCKER_CONFIG to the path where I mounted the file. However, I can't find a good way to mount the config file in the server container.

Maybe we could add extraVolumeMounts: and extraVolumes: to the server values?
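
Something like this is what I have in mind, mirroring the agent's existing keys (a sketch only; the server does not support these values yet, and the Secret name is illustrative):

server:
  env:
    WOODPECKER_DOCKER_CONFIG: /etc/registry/config.json
  extraVolumes:
    - name: reg-cred
      secret:
        secretName: registry-credentials   # Secret holding the docker config file
  extraVolumeMounts:
    - name: reg-cred
      mountPath: /etc/registry
      readOnly: true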

overwrite nodeSelector for jobs

The pod that should execute a job got:
Node-Selectors: kubernetes.io/arch=amd64

while the nodes in GKE (Kubernetes version v1.26.6-gke.1700) are labeled beta.kubernetes.io/arch=amd64.

.woodpecker.yml

pipeline:
  dummy-job:
    image: busybox
    commands:
      - echo "Dummy pipeline"
    backend_options:
      kubernetes:
        nodeSelector:
          beta.kubernetes.io/arch: amd64

values.yaml

agent:
  image:
    tag: "v1.0.0"
  env:
    WOODPECKER_AGENT_SECRET: "aaaaaaa"
    WOODPECKER_SERVER: "woodpecker-server.woodpecker.svc.cluster.local:9000"
    WOODPECKER_BACKEND: kubernetes
    WOODPECKER_BACKEND_K8S_NAMESPACE: woodpecker
  nodeSelector:
    beta.kubernetes.io/arch: amd64

The documentation mentions:

Labels defined here will be appended to a list already containing "kubernetes.io/arch". By default the pod will use "kubernetes.io/arch" inferred from top-level "platform" setting which is deducted from the agents' environment variable CI_SYSTEM_PLATFORM. To overwrite this, you need to specify this label in the nodeSelector section.

The question is how to remove that default label and add beta.kubernetes.io/arch=amd64 instead.

Documentation for installing current release?

I'm struggling a bit to understand the proper method to install this chart. From the docs it seems like we should install from the helm repo at https://woodpecker-ci.org/. The "next" branch docs don't give specific install instructions (they just link to this repo). But, this is what I'm seeing:

$ helm repo add woodpecker https://woodpecker-ci.org
"woodpecker" has been added to your repositories

$ helm search repo woodpecker
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
woodpecker/woodpecker   0.1.5           v0.15.9         A Helm chart for Woodpecker CI

It looks like the most recent version available is 0.1.5, but on GitHub I see a 0.3.0 release from a couple of days ago. Is there another repo where the more recent releases are available? I apologize for the basic question. I'm still fairly new to helm, and I might be missing something very obvious.

This might be related to #45, but I'm not sure.

Undocumented chart in helm repo

There appears to be a chart with an older chart version (but newer Woodpecker version?) in the repo. Probably best to remove it to avoid confusion.


HelmRepository not updated

The current published version of woodpecker-server has no StatefulSet yet.

Does the CI pipeline work as expected?

Bad indent setting for extraVolumes?

This is a follow-up to #97. In my testing, it looks like the extra volume mounts are added at the wrong indent level, which breaks the YAML and the entire chart. If I run helm template ... I get YAML that looks like this:

# Source: woodpecker/charts/server/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: woodpecker-server
spec:
  serviceName: woodpecker-server-headless
  template:
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: server
          securityContext:
            {}
          image: "docker.io/woodpeckerci/woodpecker-server:next"
          imagePullPolicy: IfNotPresent
          resources:
            {}
          volumeMounts:
          - name: data
            mountPath: /var/lib/woodpecker
            - mountPath: /etc/registry
              name: reg-cred
              readOnly: true
          env:
            - name: ...
          

I removed some irrelevant parts for clarity. You can see that the mountPath stanza is indented a bit too far.
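
For comparison, the rendered volumeMounts block should presumably look like this, with the extra mount at the same list level as the default one (a hand-written expectation, not actual chart output):

volumeMounts:
  # default data volume rendered by the chart
  - name: data
    mountPath: /var/lib/woodpecker
  # extra mount from extraVolumeMounts, aligned with the list above
  - name: reg-cred
    mountPath: /etc/registry
    readOnly: true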
