charts's Issues

[sonarqube] init container fails to chmod directories

After I hacked the template to include the certs mount, the init container now fails on the chown step:

drwxr-xr-x    7 root     root            73 Apr 11 02:39 .
drwxr-xr-x    3 root     root            23 Apr 11 02:39 ..
drwxrwsr-x    2 root     999              6 Apr 11 02:26 certs
drwxrwxr-x    2 999      999              6 Apr 11 01:48 data
drwxr-xr-x    4 root     root            38 Apr 11 02:39 extensions
drwxrwxr-x    2 999      999             37 Apr 11 01:48 logs
drwxrwxr-x    4 999      999             57 Apr 11 02:13 temp
chown: /opt/sonarqube/temp/conf/es/elasticsearch.yml: Operation not permitted
chown: /opt/sonarqube/temp/conf/es/jvm.options: Operation not permitted
chown: /opt/sonarqube/temp/conf/es/log4j2.properties: Operation not permitted
chown: /opt/sonarqube/temp/conf/es/elasticsearch.keystore: Operation not permitted
chown: /opt/sonarqube/temp/conf/es: Operation not permitted
chown: /opt/sonarqube/temp/conf/es: Operation not permitted
chown: /opt/sonarqube/temp/conf: Operation not permitted
chown: /opt/sonarqube/temp/conf: Operation not permitted
chown: /opt/sonarqube/temp/jna-3506402: Operation not permitted
chown: /opt/sonarqube/temp/jna-3506402: Operation not permitted
chown: /opt/sonarqube/certs: Operation not permitted
chown: /opt/sonarqube/certs: Operation not permitted

Also, the chown is hardcoded to 999:999, which breaks if I use a custom security context.
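A hedged sketch of how the chart could derive the ownership from the pod's securityContext instead of hardcoding 999:999 (the value paths are assumptions about this chart's keys, not its actual template):

```yaml
# Sketch only: derive the chown target from .Values.securityContext,
# falling back to the current 999:999 default.
initContainers:
  - name: chmod-volume-mounts
    image: busybox:1.31
    command:
      - sh
      - -c
      - >-
        chown -R
        {{ .Values.securityContext.runAsUser | default 999 }}:{{ .Values.securityContext.fsGroup | default 999 }}
        /opt/sonarqube
```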

[Nexus] Deployment For AWS EKS

Hi, I had some success deploying Sonatype Nexus onto EKS, but not everything appears to be working correctly, and I'm having trouble figuring out why.

Here is my values.yaml

nexus:
  imageTag: latest

nexusProxy:
  enabled: true
  imageTag: latest
  env:
    nexusDockerHost: <some-dns>
    nexusHttpHost: <some-dns>

persistence:
  enabled: true
  storageClass: efs

ingress:
  enabled: true
  annotations:
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:<some-id>:certificate/<some-id>
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/security-groups: <some-group>
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80},{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb

When I describe the pod, this is what I get:

Name:           nexus-sonatype-nexus-7b7b49c468-kpb4c
Namespace:      nexus-test
Priority:       0
Node:           ip-<some-ip>.us-west-2.compute.internal/<some-ip>
Start Time:     Wed, 08 Apr 2020 10:35:27 -0700
Labels:         app=sonatype-nexus
                pod-template-hash=7b7b49c468
                release=nexus
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Running
IP:             <some-ip>
IPs:            <none>
Controlled By:  ReplicaSet/nexus-sonatype-nexus-7b7b49c468
Containers:
  nexus:
    Container ID:   docker://0f60622efaf1485f2c8147b6d6b0fe936808b6a9b4d4f9abb1954eee3f9320bc
    Image:          sonatype/nexus3:latest
    Image ID:       docker-pullable://sonatype/nexus3@sha256:81d182285d279081e80e74dbd13cb544fdf4255efadd61321436a577f56b87ad
    Ports:          5003/TCP, 8081/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Wed, 08 Apr 2020 10:35:33 -0700
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:8081/ delay=30s timeout=1s period=30s #success=1 #failure=6
    Readiness:      http-get http://:8081/ delay=30s timeout=1s period=30s #success=1 #failure=6
    Environment:
      install4jAddVmParams:           -Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
      NEXUS_SECURITY_RANDOMPASSWORD:  false
    Mounts:
      /nexus-data from nexus-sonatype-nexus-data (rw)
      /nexus-data/backup from nexus-sonatype-nexus-backup (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nexus-sonatype-nexus-token-4qb8g (ro)
  nexus-proxy:
    Container ID:   docker://a8f89b3035497a16d676c2b5cec838664c2daaac0e695aa64f03e7086540f139
    Image:          quay.io/travelaudience/docker-nexus-proxy:latest
    Image ID:       docker-pullable://quay.io/travelaudience/docker-nexus-proxy@sha256:122f798c8b7b7b101ef744f53887c0b389571605f2101e20abbd47f57b2b422b
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 08 Apr 2020 10:35:34 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      ALLOWED_USER_AGENTS_ON_ROOT_REGEX:  GoogleHC
      CLOUD_IAM_AUTH_ENABLED:             false
      BIND_PORT:                          8080
      ENFORCE_HTTPS:                      false
      NEXUS_DOCKER_HOST:                  <some-dns>
      NEXUS_HTTP_HOST:                    <some-dns>
      UPSTREAM_DOCKER_PORT:               5003
      UPSTREAM_HTTP_PORT:                 8081
      UPSTREAM_HOST:                      localhost
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nexus-sonatype-nexus-token-4qb8g (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  nexus-sonatype-nexus-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nexus-sonatype-nexus-data
    ReadOnly:   false
  nexus-sonatype-nexus-backup:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nexus-sonatype-nexus-backup
    ReadOnly:   false
  nexus-sonatype-nexus-token-4qb8g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nexus-sonatype-nexus-token-4qb8g
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From                                                 Message
  ----     ------                  ----               ----                                                 -------
  Warning  FailedScheduling        58s (x3 over 66s)  default-scheduler                                    pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled               53s                default-scheduler                                    Successfully assigned nexus-test/nexus-sonatype-nexus-7b7b49c468-kpb4c to ip-<some-ip>.us-west-2.compute.internal
  Normal   SuccessfulAttachVolume  52s                attachdetach-controller                              AttachVolume.Attach succeeded for volume "pvc-4e09ef05-79bf-11ea-8ad6-0ad93b93b9da"
  Normal   Pulled                  47s                kubelet, ip-<some-ip>.us-west-2.compute.internal  Container image "sonatype/nexus3:latest" already present on machine
  Normal   Created                 47s                kubelet, ip-<some-ip>.us-west-2.compute.internal  Created container nexus
  Normal   Started                 47s                kubelet, ip-<some-ip>.us-west-2.compute.internal  Started container nexus
  Normal   Pulled                  47s                kubelet, ip-<some-ip>.us-west-2.compute.internal  Container image "quay.io/travelaudience/docker-nexus-proxy:latest" already present on machine
  Normal   Created                 46s                kubelet, ip-<some-ip>.us-west-2.compute.internal  Created container nexus-proxy
  Normal   Started                 46s                kubelet, ip-<some-ip>.us-west-2.compute.internal  Started container nexus-proxy
  Warning  Unhealthy               13s                kubelet, ip-<some-ip>.us-west-2.compute.internal  Readiness probe failed: Get http://<some-ip>:8081/: dial tcp <some-ip>:8081: connect: connection refused
  Warning  Unhealthy               13s                kubelet, ip-<some-ip>.us-west-2.compute.internal  Liveness probe failed: Get http://<some-ip>:8081/: dial tcp <some-ip>:8081: connect: connection refused

The infrastructure looks like it was provisioned correctly, but none of the resources actually load on the page. Could it be an ingress controller configuration issue? What do you think?
(attached screenshot: nexus)

[sonarqube] native plugins hidden by volume mounts

In my deployment, the native plugins (those packaged inside the original container) are hidden by the volume mount. As a consequence, my SonarQube does not support any language.

What did I miss?

Should I declare ALL these plugins in the plugin list to install?

[sonarqube] install-plugins init container should fail if plugin download fails

The sonarqube pod was automatically restarted, and the install-plugins init container failed to download the plugins but exited with a successful status, leaving a non-functional SonarQube instance.

It should abort if any of the downloads are unsuccessful so that the pod gets restarted.

From the init-container log:
--2020-03-30 07:08:57--  https://binaries.sonarsource.com/Distribution/sonar-python-plugin/sonar-python-plugin-2.7.0.5975.jar
--2020-03-30 07:08:57--  https://binaries.sonarsource.com/Distribution/sonar-typescript-plugin/sonar-typescript-plugin-2.1.0.4359.jar
--2020-03-30 07:08:57--  https://binaries.sonarsource.com/Distribution/sonar-javascript-plugin/sonar-javascript-plugin-6.2.0.12043.jar
--2020-03-30 07:08:57--  https://binaries.sonarsource.com/Distribution/sonar-scm-git-plugin/sonar-scm-git-plugin-1.11.0.11.jar
Resolving binaries.sonarsource.com...
Resolving binaries.sonarsource.com...
Resolving binaries.sonarsource.com...
Resolving binaries.sonarsource.com...
failed: Try again.
wget: unable to resolve host address 'binaries.sonarsource.com'
failed: Try again.
wget: unable to resolve host address 'binaries.sonarsource.com'
failed: Try again.
wget: unable to resolve host address 'binaries.sonarsource.com'
failed: Try again.
wget: unable to resolve host address 'binaries.sonarsource.com'
/
total 64K
drwxr-xr-x    1 root root 4.0K Mar 30 07:08 .
drwxr-xr-x    1 root root 4.0K Mar 30 07:08 ..
-rwxr-xr-x    1 root root    0 Mar 30 07:08 .dockerenv
drwxr-xr-x    2 root root 4.0K Jan 16 21:52 bin
drwxr-xr-x    5 root root  360 Mar 30 07:08 dev
drwxr-xr-x    1 root root 4.0K Mar 30 07:08 etc
drwxr-xr-x    2 root root 4.0K Jan 16 21:52 home
drwxr-xr-x    1 root root 4.0K Jan 16 21:52 lib
drwxr-xr-x    5 root root 4.0K Jan 16 21:52 media
drwxr-xr-x    2 root root 4.0K Jan 16 21:52 mnt
drwxr-xr-x    1 root root 4.0K Mar 30 07:08 opt
dr-xr-xr-x  230 root root    0 Mar 30 07:08 proc
drwx------    2 root root 4.0K Jan 16 21:52 root
drwxr-xr-x    1 root root 4.0K Mar 30 07:08 run
drwxr-xr-x    2 root root 4.0K Jan 16 21:52 sbin
drwxr-xr-x    2 root root 4.0K Jan 16 21:52 srv
dr-xr-xr-x   12 root root    0 Mar 30 07:06 sys
drwxrwxrwt    1 root root 4.0K Mar 30 07:08 tmp
drwxr-xr-x    1 root root 4.0K Jan 16 21:52 usr
drwxr-xr-x    1 root root 4.0K Jan 16 21:52 var
While here:
What is the point of listing the files of the container's root directory? Did you intend to list the files in the download directory?
Also, what is the point of the progress bar spewage? Can you invoke wget with the --no-verbose flag?
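The fail-fast behavior and --no-verbose could be combined in a small wrapper for the init container; a hedged sketch (the destination directory and URL list are placeholders, not the chart's actual values):

```shell
#!/bin/sh
# Sketch only: download each plugin and abort on the first failure, so the
# init container exits non-zero and the pod restarts instead of starting
# SonarQube without plugins. --no-verbose suppresses the progress-bar noise.
set -e

download_plugins() {
  dest="$1"
  shift
  for url in "$@"; do
    # any non-zero wget exit status aborts the whole script via set -e
    wget --no-verbose -P "$dest" "$url"
  done
}
```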

sonarqube: /opt/sq/bin/run.sh: No such file or directory on version 8.2-community

When I install sonarqube version 8.2-community I get the error below:

/tmp/scripts/copy_plugins.sh: line 14: /opt/sq/bin/run.sh: No such file or directory

Helm command:

helm install oteemocharts/sonarqube --name sonarqube \
--set image.tag="8.2-community" \
--set sonarqubeFolder="/opt/sq" \
--set postgresql.enabled=true

Checked helm list:

sonarqube     	1       	Mon Mar  2 16:14:25 2020	DEPLOYED	sonarqube-4.0.0     	7.9.2      	default

Even though I set the 8.2-community version during installation, it installs the 7.9.2 version.

[nexus] Missing Service Type in values.yaml

  type: {{ .Values.service.serviceType }}

is missing in values.yaml. Therefore, when rendering the templates (e.g. with helm template) you get an empty type:, which is undesirable.

# # To use an additional service, set enable to true
service:
  # name: additional-svc
  enabled: false
  labels: {}
  annotations: {}
  ports:
  - name: nexus-service
    targetPort: 80
    port: 80
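A minimal fix could be to declare the missing key in values.yaml so the template renders a concrete value (the ClusterIP default here is only an example, not the chart's choice):

```yaml
# To use an additional service, set enabled to true
service:
  # name: additional-svc
  enabled: false
  serviceType: ClusterIP   # assumed default; referenced by the template's
                           # {{ .Values.service.serviceType }}
  labels: {}
  annotations: {}
  ports:
    - name: nexus-service
      targetPort: 80
      port: 80
```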

charts/sonarqube external database issue

Hello,

I'm currently trying to set up SonarQube using an external MySQL database, but after installing, SonarQube keeps attempting to connect to PostgreSQL even though postgresql.enabled and mysql.enabled are set to false.
This is the exception I'm getting:

2020.03.16 20:58:02 INFO  web[][o.sonar.db.Database] Create JDBC data source for jdbc:postgresql://%!s(<nil>):5432/sonarDB
2020.03.16 20:58:02 ERROR web[][o.s.s.p.PlatformImpl] Web server startup failed
java.lang.IllegalStateException: Fail to connect to database
        at org.sonar.db.DefaultDatabase.start(DefaultDatabase.java:87)
        at org.sonar.core.platform.StartableCloseableSafeLifecyleStrategy.start(StartableCloseableSafeLifecyleStrategy.java:40)
        at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84)
        at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169)
        at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132)
        at org.picocontainer.behaviors.Stored.start(Stored.java:110)
        at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1016)
        at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1009)
        at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767)
        at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:135)
        at org.sonar.server.platform.platformlevel.PlatformLevel.start(PlatformLevel.java:90)
        at org.sonar.server.platform.platformlevel.PlatformLevel1.start(PlatformLevel1.java:164)
        at org.sonar.server.platform.PlatformImpl.start(PlatformImpl.java:213)
        at org.sonar.server.platform.PlatformImpl.startLevel1Container(PlatformImpl.java:172)
        at org.sonar.server.platform.PlatformImpl.init(PlatformImpl.java:86)
        at org.sonar.server.platform.web.PlatformServletContextListener.contextInitialized(PlatformServletContextListener.java:43)
        at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4770)
        at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5236)
        at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
        at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1423)
        at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1413)
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
        at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:119)
        at org.sonar.db.DefaultDatabase.start(DefaultDatabase.java:84)
        ... 24 common frames omitted
Caused by: java.sql.SQLException: Cannot create PoolableConnectionFactory (The connection attempt failed.)
        at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:669)
        at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:544)
        at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:753)
        at org.sonar.db.profiling.NullConnectionInterceptor.getConnection(NullConnectionInterceptor.java:31)
        at org.sonar.db.profiling.ProfiledDataSource.getConnection(ProfiledDataSource.java:317)
        at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:116)
        ... 25 common frames omitted
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:292)
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
        at org.postgresql.Driver.makeConnection(Driver.java:458)
        at org.postgresql.Driver.connect(Driver.java:260)
        at org.apache.commons.dbcp2.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:55)
        at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:355)
        at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:115)
        at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:665)
        ... 30 common frames omitted
Caused by: java.net.UnknownHostException: %!s(<nil>)
        at java.base/java.net.AbstractPlainSocketImpl.connect(Unknown Source)
        at java.base/java.net.SocksSocketImpl.connect(Unknown Source)
        at java.base/java.net.Socket.connect(Unknown Source)
        at org.postgresql.core.PGStream.<init>(PGStream.java:75)
        at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:91)
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:192)
        ... 38 common frames omitted

Is there another configuration I need to set?
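The %!s(&lt;nil&gt;) in the JDBC URL above suggests the chart rendered the template without any database host. A hedged values sketch for pointing at an external database (the key names are assumptions and depend on the chart version; check the chart's values.yaml before using):

```yaml
postgresql:
  enabled: false
mysql:
  enabled: false

# Hypothetical external-database block, shown only to illustrate the idea
# of supplying an explicit JDBC host instead of relying on the bundled DBs.
database:
  type: mysql
  host: mysql.example.com
  port: 3306
  name: sonarDB
```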

I'm attaching my values.yaml

Thanks!
values.zip

[sonarqube] securityContext.runAsUser applies to all containers, not just sonarqube, meaning that I cannot deploy the chart

Currently, I have an issue with deploying this chart that I'm not sure how to resolve.
If I use the following values:

# Set security context for sonarqube pod
securityContext:
  fsGroup: 999
  runAsUser: 999

Then it fails to run chmod-volume-mounts:

[ahynes@mt02c1kub0001p charts]$ kubectl logs pod/sonarqube-sonarqube-9b8dc9795-gk7dg -c chmod-volume-mounts
mkdir: can't create directory '/opt/sonarqube/certs': Permission denied

If I use the following, instead:

# Set security context for sonarqube pod
securityContext:
  fsGroup: 999
  # runAsUser: 999

Then chmod-volume-mounts runs fine, but sonarqube fails to start because Elasticsearch cannot be run as root. Ideally, I'd be able to run chmod-volume-mounts as root, and sonarqube as the service account.

Given this information, I'm not sure how anybody has got this chart running, so maybe I'm just doing something wrong? I am using persistence.enabled = true, if that counts for anything. Looking for advice/a fix.
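One possible direction for the chart (a sketch, not its current template): keep the pod-level fsGroup but set the user per container, so the chmod init container runs as root while SonarQube itself runs unprivileged:

```yaml
# Sketch only: container-level securityContext overrides the pod-level one,
# so each container can run as a different user.
spec:
  securityContext:
    fsGroup: 999
  initContainers:
    - name: chmod-volume-mounts
      image: busybox:1.31
      securityContext:
        runAsUser: 0        # root, so mkdir/chown on the volumes succeed
  containers:
    - name: sonarqube
      securityContext:
        runAsUser: 999      # non-root, as Elasticsearch requires
```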

Results of kubectl version here:

[ahynes@mt02c1kub0001p charts]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:14:56Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

[sonarqube] Duplicate scheduling settings in deployment.yaml template

Hi!

First of all: Thanks for taking on the sonarqube chart!

I posted an issue in the old helm repository about the duplicate scheduling settings in the templates/deployment.yaml: helm/charts#19713
The issue still persists, so I would like to repeat it here, with updated line information.

Describe the bug
The deployment.yaml file contains duplicate entries of the elements nodeSelector, hostAliases, tolerations and affinity. The problem is that the first occurrence of these elements, between L124 and L139, is right in the middle of the initContainers array.

This leads to a failed helm install when using for example affinity and the plugin install init container.

As the elements in question have their correct occurrence between L262 and L273, it should be safe to simply remove the first, incorrect ones.

A workaround is to remove the node scheduling settings from the values.yaml.

Which chart:
stable/sonarqube 3.2.7

What happened:
When using affinity and the plugin install init container, the chart installation fails with the message:

Failed to install app sonarqube. Error: YAML parse error on sonarqube/templates/deployment.yaml: error converting YAML to JSON: yaml: line 97: did not find expected key

Line 97 in my case was the duplicate entry of affinity

How to reproduce it (as minimally and precisely as possible):
I used the following values (among others):

  plugins:
    install:
      - "https://github.com/SonarSource/sonar-ldap/releases/download/2.2-RC3/sonar-ldap-plugin-2.2.0.601.jar"
  
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: "node-role"
            operator: In
            values:
            - services

Rolling Update not working

I'm using a single-node k8s cluster with minikube. When I perform an upgrade, it just hangs and never actually upgrades; the log shows an Elasticsearch lock error.

[Nexus] Extra ingress for nexus-proxy

Scenario:

With the given values.yaml:

nexusProxy:
  env:
    nexusHttpHost: nexus.example.com
ingress:
    enabled: true
    rules:
      - host: nexus-extra.example.com
        http:
          paths:
            - backend:
                serviceName: <release_name>-sonatype-nexus
                servicePort: 8080
              path: /

Nexus cannot be accessed via nexus-extra.example.com:

This page is displayed - https://github.com/travelaudience/nexus-proxy/blob/master/src/main/resources/templates/invalid-host.hbs.

Expected Result

NXRM can be accessed by both host names - nexus.example.com and nexus-extra.example.com.

Workaround

Use nginx.ingress.kubernetes.io/upstream-vhost: nexus.example.com in ingress.annotations. NB: docker repository won't work after nginx.ingress.kubernetes.io/upstream-vhost is set.

nexusProxy:
  env:
    nexusHttpHost: nexus.example.com
ingress:
  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost: nexus.example.com
  enabled: true
  rules:
    - host: nexus-extra.example.com
      http:
        paths:
          - backend:
              serviceName: <release_name>-sonatype-nexus
              servicePort: 8080
            path: /

charts/sonarqube: install-plugins.yaml w/o http/https proxy

ISSUE:
If you deploy sonarqube behind corporate MITM proxy it is impossible to download any plugins due to lack of customizable proxy environment variables in deployment.yaml#L103.

Thus wget is unable to resolve the target host, and the install-plugins initContainer silently completes without downloading any plugins.

NOTE:
After locally applying env vars to the install-plugins initContainer, I found that the wget provided by alpine:3.10.3 cannot handle more than 5 redirects and fails with: Too many redirects.
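A sketch of what configurable proxy env vars for the install-plugins initContainer could look like in values.yaml (the initContainerEnv key name is an assumption; the chart would need to template these into the container spec):

```yaml
plugins:
  # Hypothetical key: env vars injected into the install-plugins
  # initContainer so wget can traverse a corporate MITM proxy.
  initContainerEnv:
    - name: http_proxy
      value: http://proxy.corp.example:3128
    - name: https_proxy
      value: http://proxy.corp.example:3128
    - name: no_proxy
      value: .cluster.local,.svc
```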

[Nexus] Unable to add keystore (using GCP IAM)

Hello,

I’m using the Terraform provider to deploy my Nexus instance.

My deployment without cloud IAM auth is running well, but when I try to deploy it with IAM auth, I get the following error:

Error: template: sonatype-nexus/templates/proxy-ks-secret.yaml:11:53: executing "sonatype-nexus/templates/proxy-ks-secret.yaml" at <b64enc>: invalid value; expected string

I’m using filebase64("keystore.jceks") to pass my file to the values.yml; in Terraform this is the same as doing cat keystore.jceks | base64:

module "nexus" {
  source                           = "../../modules/nexus"
  nexus_docker_host                = var.nexus_docker_host
  nexus_http_host                  = var.nexus_http_host
  env                              = var.env
  gcp_region                       = module.naming.gcp_region
  cloud_iam_auth_enabled           = true
  client_id                        = var.client_id
  client_secret                    = var.client_secret
  organization_id                  = var.organization_id
  redirect_url                     = "https://${var.nexus_http_host}/oauth/callback"
  required_membership_verification = true
  keystore                         = filebase64("keystore.jceks")
  keystore_password                = var.keystore_password
}

I also tried hardcoding the base64 value in my values.yml, but I get the same issue.

Any tips?

[sonarqube] the sonarqubeFolder parameter is ignored by the chmod-volume-mounts container

chmod-volume-mounts is hardcoded to operate on /opt/sonarqube/${1-%s\n}

Found this when trying to work around #56

When I set sonarqubeFolder to ~/sonarqube:

chmod-volume-mounts' mounts

    Mounts:
      /opt/sonarqube/data from sonarqube (rw,path="data")
      /opt/sonarqube/extensions/downloads from sonarqube (rw,path="downloads")
      /opt/sonarqube/extensions/plugins from sonarqube (rw,path="plugins")
      /opt/sonarqube/extensions/plugins/tmp from sonarqube (rw,path="tmp")
      /opt/sonarqube/logs from sonarqube (rw,path="logs")
      /opt/sonarqube/temp from sonarqube (rw,path="temp")
      /var/run/secrets/kubernetes.io/serviceaccount from sonarqube-sonarqube-fork-token-qt8h4 (ro)

sonarqube's mounts:

    Mounts:
      /tmp from tmp-dir (rw)
      /tmp/scripts from copy-plugins (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from sonarqube-sonarqube-fork-token-qt8h4 (ro)
      ~/sonarqube/conf/ from config (rw)
      ~/sonarqube/data from sonarqube (rw,path="data")
      ~/sonarqube/extensions/downloads from sonarqube (rw,path="downloads")
      ~/sonarqube/extensions/plugins from sonarqube (rw,path="plugins")
      ~/sonarqube/extensions/plugins/tmp from sonarqube (rw,path="tmp")
      ~/sonarqube/logs from sonarqube (rw,path="logs")
      ~/sonarqube/temp from sonarqube (rw,path="temp")

The following lines are hardcoded also, causing a failure: https://github.com/Oteemo/charts/blob/master/charts/sonarqube/templates/deployment.yaml#L54
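The fix would presumably be to template the path everywhere the init container references it, along the lines of (a sketch against the linked deployment.yaml, not its actual content):

```yaml
# Sketch only: use .Values.sonarqubeFolder for both the chown target
# and the init container's mount paths, instead of /opt/sonarqube.
initContainers:
  - name: chmod-volume-mounts
    image: busybox:1.31
    command:
      - sh
      - -c
      - chown -R 999:999 {{ .Values.sonarqubeFolder }}
    volumeMounts:
      - mountPath: {{ .Values.sonarqubeFolder }}/data
        name: sonarqube
        subPath: data
      - mountPath: {{ .Values.sonarqubeFolder }}/temp
        name: sonarqube
        subPath: temp
```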

[Sonarqube] Add support for HTTPS Redirection

Currently, the Sonarqube helm chart does not support adding additional ingresses. Since this is the case, if someone wanted to add support for HTTPS redirect, one would need to edit the live deployment or host their own version of the Sonarqube helm chart. Solving this issue would allow additional inputs of Ingresses.
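For reference, a standalone Ingress of the kind the chart could allow users to add, assuming an nginx ingress controller (the annotation is nginx-specific, and the host, service name, and port are example values):

```yaml
# Example extra Ingress that forces HTTP-to-HTTPS redirection.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sonarqube-https-redirect
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - host: sonarqube.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: sonarqube-sonarqube   # assumed release name
              servicePort: 9000
```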

[SonarQube] Unable to perform Kubernetes rolling updates with EFS storage and ReadWriteMany flag. "Sonarqube pod crashing due to node.max_local_storage_nodes?"

Hello there and thanks for your effort with this chart,

I'm running Sonar image: 8.2-developer on top of EKS with persistence enabled, and it looks like Elasticsearch has quite a bit of trouble starting. Whenever I do a rolling update deployment I get the errors below.

The same thing happens when I scale up with replicaCount: 3.

However, using deploymentStrategy: type: Recreate mitigates the error. I suspect the incoming pod gets into a race condition with the terminating one.
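For anyone hitting the same lock error, the mitigation in values.yaml form would look like:

```yaml
# Recreate tears down the old pod before starting the new one, so only one
# pod holds the Elasticsearch node lock on the shared EFS volume at a time.
deploymentStrategy:
  type: Recreate
```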

Anyone else experiencing the same issue?

Related: #53

k logs -p sonarqube-staging-sonarqube-799f945695-dbrvn
2020.05.06 21:35:58 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2020.05.06 21:35:58 INFO app[][o.s.a.es.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2020.05.06 21:35:58 INFO app[][o.s.a.ProcessLauncherImpl] Launch process[[key='es', ipcIndex=1, logFilenamePrefix=es]] from [/opt/sonarqube/elasticsearch]: /opt/sonarqube/elasticsearch/bin/elasticsearch
2020.05.06 21:35:58 INFO app[][o.s.a.SchedulerImpl] Waiting for Elasticsearch to be up and running
2020.05.06 21:35:58 INFO app[][o.e.p.PluginsService] no modules loaded
2020.05.06 21:35:58 INFO app[][o.e.p.PluginsService] loaded plugin [org.elasticsearch.transport.Netty4Plugin]
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
2020.05.06 21:36:00 WARN es[][o.e.b.ElasticsearchUncaughtExceptionHandler] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to obtain node locks, tried [[/opt/sonarqube/data/es6]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:163) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124) ~[elasticsearch-cli-6.8.4.jar:6.8.4]
at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93) ~[elasticsearch-6.8.4.jar:6.8.4]
Caused by: java.lang.IllegalStateException: failed to obtain node locks, tried [[/opt/sonarqube/data/es6]] with lock id [0]; maybe these locations are not writable or multiple nodes were started without increasing [node.max_local_storage_nodes] (was [1])?
at org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:300) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.node.Node.<init>(Node.java:296) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.node.Node.<init>(Node.java:266) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:212) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:212) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:333) ~[elasticsearch-6.8.4.jar:6.8.4]
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-6.8.4.jar:6.8.4]
... 6 more
2020.05.06 21:36:00 WARN app[][o.s.a.p.AbstractManagedProcess] Process exited with exit value [es]: 1
2020.05.06 21:36:00 INFO app[][o.s.a.SchedulerImpl] Process[es] is stopped
2020.05.06 21:36:00 INFO app[][o.s.a.SchedulerImpl] SonarQube is stopped
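A common fix for "failed to obtain node locks" (and what the chmod init container in the original report was attempting) is to make the data directory owned by the sonarqube user and clear any stale lock before start. A minimal sketch, where UID/GID 999 and the `es6` subdirectory are assumptions taken from the log above:

```shell
#!/bin/sh
# Hedged sketch of what an init container (running as root) could do so
# the embedded Elasticsearch node can obtain its lock.

clear_stale_lock() {
  # a node.lock left behind by a crashed pod also triggers
  # "failed to obtain node locks"
  rm -f "$1/es6/node.lock"
}

fix_ownership() {
  # the main container runs as user 999; the data volume must be writable
  chown -R 999:999 "$1"
}

# an init container would run, in order:
#   fix_ownership /opt/sonarqube/data
#   clear_stale_lock /opt/sonarqube/data
```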

nexus-proxy container error

Getting the below error when nexus-proxy starts.
I tried setting an env var for the nexus-proxy container as per the reference link below, but got the same error.

nexusProxy:
  enabled: true
  env:
    disableFileCPResolving: true

Reference: eclipse-vertx/vert.x#1931
Exception in thread "main" java.lang.IllegalStateException: Failed to create cache dir
	at io.vertx.core.impl.FileResolver.setupCacheDir(FileResolver.java:299)
	at io.vertx.core.impl.FileResolver.<init>(FileResolver.java:92)
	at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:176)
	at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:143)
	at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:139)
	at io.vertx.core.impl.VertxFactoryImpl.vertx(VertxFactoryImpl.java:34)
	at io.vertx.core.Vertx.vertx(Vertx.java:80)
	at com.travelaudience.nexus.proxy.Main.main(Main.java:17)
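If the cache directory under the proxy's working directory is not writable, vert.x offers two system properties to work around it. These are JVM system properties, not plain environment variables, which may be why setting the chart's env key alone has no effect. One hedged approach is to pass them via `JAVA_TOOL_OPTIONS` (whether the chart forwards that variable into the nexus-proxy container is an assumption):

```shell
# Point vert.x at a writable cache dir and disable file classpath
# resolving; vertx.cacheDirBase and vertx.disableFileCPResolving are
# standard vert.x system properties.
export JAVA_TOOL_OPTIONS="-Dvertx.cacheDirBase=/tmp/vertx-cache -Dvertx.disableFileCPResolving=true"
```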

[sonarqube] the init-sysctl container should only raise vm.max_map_count

The init-sysctl unconditionally sets vm.max_map_count to 262144 even if it was set higher on the host.

It should only set vm.max_map_count if it is lower than the required value:

--- a/charts/sonarqube/templates/deployment.yaml
+++ b/charts/sonarqube/templates/deployment.yaml
@@ -91,9 +91,9 @@ spec:
         securityContext:
           privileged: true
         command:
-          - sysctl
-          - -w
-          - vm.max_map_count=262144
+          - /bin/sh
+          - -c
+          - 'if [[ "$(sysctl -n vm.max_map_count)" -lt 262144 ]]; then sysctl -w vm.max_map_count=262144; fi'
         {{- with .Values.env }}
         env:
           {{- . | toYaml | trim | nindent 12 }}
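The guard in the patch above can be exercised without root by splitting the comparison from the sysctl call; a sketch:

```shell
#!/bin/sh
# The conditional from the proposed patch, factored so the comparison
# can be tested without privileges.

needs_raise() {
  # true when the current value is below the required minimum
  current="$1"; required="$2"
  [ "$current" -lt "$required" ]
}

raise_max_map_count() {
  required=262144
  # only touch the host setting when it is actually too low
  if needs_raise "$(sysctl -n vm.max_map_count)" "$required"; then
    sysctl -w "vm.max_map_count=$required"
  fi
}
```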

Unable to deploy sonarqube to air gapped cluster

I was trying to use this helm chart to deploy to an air gapped cluster (no internet access), but was unable to do so due to chmod-volume-mounts container being hard coded to use busybox:1.31. I need to get all images from our private repository. I had a look and it seems like this isn't the only hard coded image in the helm chart. Here's the full list:

  • chmod-volume-mounts uses busybox:1.31
  • ca-certs uses adoptopenjdk/openjdk11:alpine
  • test-framework uses dduportal/bats:0.4.0

I think that's all of them. I got stuck on the first one and didn't test if my deployment needs the rest of them, but if we are fixing this, we should fix it for all of them regardless.

From my point of view, there are two ways to fix this. We either provide individual configuration for each of them or we can do what the mongodb chart does: add a global.imageRegistry option. The mongodb approach is a bigger change to implement, but easier and faster to configure for the user. It's also something that could be added later as an improvement. They should be configurable individually regardless, so we might as well start with the first approach.

I don't mind submitting a PR, as I would have to make a custom helm chart anyway. I'd just like to know what naming you'd prefer for these options. You already have plugins.initContainerImage and plugins.initSysctlContainerImage. I don't see how the latter has anything to do with the plugins, but I don't mind using that scope if that's what you'd prefer. Might be the most simplistic option. In that case, here's my proposal:

  • plugins.initVolumesContainerImage
  • plugins.initCertsContainerImage
  • plugins.initTestContainerImage

Looks consistent with your existing options and the names are descriptive of what they do.
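For illustration, a values fragment using the proposed keys. These keys do not exist in the chart yet, and the registry host is a placeholder:

```yaml
# Hypothetical keys from the proposal above -- not yet implemented.
plugins:
  initVolumesContainerImage: registry.internal/busybox:1.31
  initCertsContainerImage: registry.internal/adoptopenjdk/openjdk11:alpine
  initTestContainerImage: registry.internal/dduportal/bats:0.4.0
```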

sonatype-nexus deploymentStrategy expects string

RollingUpdate deadlocks on the persistent volume claim for nexus deployments, so I went to change the deploymentStrategy to Recreate, but this generates an invalid spec template.

templates/deployment-statefulset.yaml

strategy:
{{ toYaml .Values.deploymentStrategy | indent 4 }}
{{- end }}

This is wrong since strategy should actually be

strategy:
  type: <type>

You can replicate this by setting deploymentStrategy to either RollingUpdate or Recreate.
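As a workaround until the template is fixed, the value can be given as a map so `toYaml` renders a valid strategy block. A sketch against this chart's values (how the chart merges a map over its string default is untested here):

```yaml
# deploymentStrategy as a map, not a string; Recreate avoids the
# RollingUpdate deadlock on the ReadWriteOnce PVC.
deploymentStrategy:
  type: Recreate
```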

[Nexus] Nuget packages not showing in repository

Hi, I installed the chart and uploaded a NuGet package. The repository's package list shows nothing, but I can find the package through the search. What is the problem?

helm install nexus --set persistence.storageClass=openebs-jiva-default,persistence.storageSize=8Gi,ingress.enabled=false,nexusProxy.env.nexusDockerHost={domain},nexusProxy.env.nexusHttpHost={domain} oteemo/sonatype-nexus

Repository:
[screenshot]

Search:
[screenshot]

Package:
[screenshot]

[sonatype-nexus] when install, shows warning message

Hello.

There's some warning message in install command output.

# helm install nexus-stage -n nexus-stage -f nexus-stage-values.yaml oteemocharts/sonatype-nexus
coalesce.go:199: warning: destination for data is a table. Ignoring non-table value <nil>

Thanks,

chart/sonarqube - support for SSL

This is my current values.yaml file:

ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: sonar.example.com
      # default paths for "/" and "/*" will be added
      path: /
      # If a different path is defined, that path and {path}/* will be added to the ingress resource
      # path: /sonarqube
      tls: true
      tlsHosts:
        - sonar.example.com
      tlsSecret: example-tls
  annotations:
    kubernetes.io/ingress.class: nginx-internal
    #kubernetes.io/tls-acme: "true"
    #This property allows for reports up to a certain size to be uploaded to SonarQube
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"
But, TLS is not working. Portal is running on port 80.

Default value of ingress

Hello !

In the sonatype-nexus chart's documentation you can read that ingress is enabled by default, but in values.yaml it is actually disabled. This is misleading when you deploy the chart and try to understand why there is no ingress.

[sonarqube] configuration of Ingress with two paths does not work with Traefik V2

Hi,

We use Traefik v2 as our Ingress Controller, and configuring two paths results in a 404 error when we try to access SonarQube.

These paths are defined in the ingress.yml file:

    {{- $path := default "/" .path }}
    - host: {{ .name }}
      http:
        paths:
          - path: {{ $path }}
            backend:
              serviceName: {{ $serviceName }}
              servicePort: {{ $servicePort }}
          - path: {{ printf "%s/*" (trimSuffix "/" $path) }}
            backend:
              serviceName: {{ $serviceName }}
              servicePort: {{ $servicePort }}
    {{- end -}}

It works fine if we delete the second path.
We have had this problem since we migrated to Traefik v2.

Anyway, is it possible to know why there is this double configuration of path?
Is it possible to remove it?

Thanks in advance.

chart/sonarqube - allow tests to be turned off

When the helm chart is rendered as a template and then applied to a cluster, the pods defined in the tests folder are rendered as well. These pods fail because they run before the SonarQube pod is ready, which makes it look like there is a problem with SonarQube in the cluster.

Allow a way for the helm chart (using helm 3) to be rendered to yaml without generating these test resources. i.e.

helm template sonarqube sonarqube --repo https://oteemo.github.io/charts

should not contain the tests.

Other charts have accomplished this by adding an enableTests flag.
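A hypothetical values flag (not currently in the chart) mirroring the pattern other charts use:

```yaml
# Hypothetical flag -- shown only to illustrate the requested behavior.
tests:
  enabled: false
```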

[sonatype-nexus] decouple backup container feature and backup PVC feature

Currently, when nexusBackup.enabled=false both the backup container and the PVC are disabled:

  • no backup container is created
  • backup storage (/nexus-data/backup) is configured as emptyDir

Some users, like me ;-), would appreciate a middle-ground approach that keeps backup data on persistent storage. Kubernetes plus a storageClass lets us use resilient file storage with external backup solutions.

Exception in thread "main" org.sonar.process.MessageException: Unsupported JDBC driver provider: mysql

Trying to spin up a Sonarqube instance and connect to external mysql db, and running into this error message. Wondering if this is related to the proxy issue? I am also behind a corporate proxy.

Exception in thread "main" org.sonar.process.MessageException: Unsupported JDBC driver provider: mysql

values.yaml:

database:
  type: "mysql"
postgresql:
  enabled: false
mysql:
  enabled: false
  mysqlUser: "redacted"
  mysqlServer: "redacted"
  mysqlPassword: "redacted"
  mysqlDatabase: "sonar"
  service:
    port: 3306
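One likely cause worth ruling out: SonarSource removed MySQL support from SonarQube in version 7.9, so any 7.9-or-later image will reject a MySQL JDBC driver regardless of chart configuration or proxy settings. If that applies, a values sketch for switching to the chart's bundled PostgreSQL (key names taken from this chart's values.yaml):

```yaml
database:
  type: "postgresql"
postgresql:
  enabled: true
```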

Customizing postgresqlPassword

I'd like to pass in a custom postgresql.postgresqlPassword. The documentation at https://hub.helm.sh/charts/oteemo/sonarqube has "Customizing the chart" with an old link:
https://docs.helm.sh/using_helm/#customizing-the-chart-before-installing... I found the newer link here: https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing

If I use a command like helm install --values values.yml myNewSonarQube --set postgresql.postgresUser=mySonarUserTest,postgresql.postgresPassword=mySonarPassTest oteemo/sonarqube or helm upgrade --install --values values.yml myNewSonarQube --set postgresql.postgresUser=mySonarUserTest,postgresql.postgresPassword=mySonarPassTest oteemo/sonarqube it doesn't change the default username and password for postgresql.
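One thing to check is the PostgreSQL subchart's value names: recent versions of the bundled chart expect postgresqlUsername/postgresqlPassword rather than postgresUser/postgresPassword, so overrides under the old names are silently ignored. Also note that an existing data PVC keeps the credentials it was initialized with, so a redeploy over old data will not pick up new ones. A hedged sketch (exact key names depend on the subchart version in use):

```yaml
postgresql:
  postgresqlUsername: mySonarUserTest
  postgresqlPassword: mySonarPassTest
```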

[sonatype-nexus]: strategy key breaks the StatefulSet

Expected Behavior

I’m attempting to upgrade the image tag in my sonatype-nexus chart by making the change in the chart values and running helm upgrade --install.

I expect the the image tag to get updated with no issues.

I’m using these versions:

  • Helm version: 3.1.1
  • Kubernetes version: 1.15.7
  • sonatype-nexus chart version: 1.16.3

Current Behavior

When I run the helm upgrade --install command I get this error:

UPGRADE FAILED: error validating "": error validating data: ValidationError(StatefulSet.spec): unknown field "strategy" in io.k8s.api.apps.v1.StatefulSetSpec

Possible Solution

It looks like in Kubernetes 1.7 and later, StatefulSets use .spec.updateStrategy to specify a strategy of either OnDelete or RollingUpdate; RollingUpdate is the default.

In the deployment-statefulset.yaml file, there is a strategy key which seems to be breaking my install: https://github.com/Oteemo/charts/blob/master/charts/sonatype-nexus/templates/deployment-statefulset.yaml#L28-L30

Perhaps this can be changed, or I need to do something differently with my StatefulSet values. Any advice is appreciated.
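For reference, a sketch of what the template should render for a StatefulSet (the field name is per the Kubernetes apps/v1 API; everything else here is illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
spec:
  # StatefulSets use updateStrategy, not strategy
  updateStrategy:
    type: RollingUpdate
```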

[sonarqube] sonarqube 8.3 results in crashloop

Using the chart for sonarqube 8.2-developer works as expected. However when trying 8.3-developer, it results in the following error:

sonarqube /opt/sonarqube/bin/run.sh: line 13: 1: unbound variable
sonarqube stream closed

Running the docker container via docker run sonarqube:8.3-developer appears to work as expected, so I suspect that this may be an incompatibility between the chart itself, and the recent change in SonarSource/docker-sonarqube in using the alpine-jdk base image.

I see that the helm chart has several settings in the values.yml for overiding images at https://github.com/Oteemo/charts/blob/master/charts/sonarqube/values.yaml#L169-L174 but I've yet to find the magic incantation to make this work.

Worth noting, this issue is also documented at SonarSource/docker-sonarqube#401

changes to nexus label causes pvc to be replaced

charts/charts/sonatype-nexus/templates/pvc.yaml

I was using helmfile to apply the nexus chart. Some time ago the nightly apply started failing. A closer check shows it was caused by a replacement of the PVC, which in turn was caused by a change in a label.

Since nexus.label was added to the PVC, any change to the labels, e.g. the chart name or version, causes the PVC to be replaced, which I don't think is expected behaviour:

Failures started on 7 Apr, which I assume was caused by a bump to 1.27.0. I am falling back to 1.26.6.

default, sonatype-nexus-data, PersistentVolumeClaim (v1) has changed:
  # Source: sonatype-nexus/templates/pvc.yaml
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: sonatype-nexus-data
    labels:
      app: sonatype-nexus
      fullname: sonatype-nexus
-     chart: sonatype-nexus-1.27.1
+     chart: sonatype-nexus-2.0.0
      release: sonatype-nexus
      heritage: Helm
  spec:
    accessModes:
      - "ReadWriteOnce"
    resources:
      requests:
        storage: "8Gi"

Upgrading release=sonatype-nexus, chart=oteemocharts/sonatype-nexus
FAILED RELEASES:
  NAME
  sonatype-nexus
in ./helmfile.yaml: failed processing release sonatype-nexus: helm exited with status 1:
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: namespace: default, name: sonatype-nexus-backup, existing_kind: /v1, Kind=PersistentVolumeClaim, new_kind: /v1, Kind=PersistentVolumeClaim

[Sonarqube] - plugins mangement strange behavior

Hello everybody,

helm version : 2
kubectl version : 1.16
sonarqube version : 8.2 developer edition

To deploy SonarQube on Kubernetes I am using your chart, since the previous one is deprecated. During my last maintenance I upgraded SonarQube after building a SonarQube 8.2 Developer Edition Docker image, and I ran into strange behavior. The upgrade also provided a more complete plugin list. After the upgrade I saw that the default plugins had been removed: only the plugins I listed were installed, and they sat in the /opt/sq/extensions/plugins/tmp folder.

I investigated a bit. When I run the Docker image locally, I can see that the default plugins are correctly installed as extensions/plugins/*.jar.
So I looked into copy-plugins.yaml and install-plugins.yaml and saw that even if we set the value deleteDefaultPlugins: false, all the default plugins are deleted because of the two nested "for" loops.

[screenshot]

I ran the script locally with fake plugins: the outer loop iterates over everything in extensions/plugins/*.jar and the inner one does too, so whenever the targeted plugins are the same, they are removed. Because both "for" loops iterate over all plugins, they inevitably match every file one by one, due only to this condition:
[screenshot]

So to conclude: the install-plugins.yaml step wgets all the listed plugins into the extensions/plugins/tmp folder, and then copy-plugins.yaml copies them into plugins/, but without keeping the default ones, even with deleteDefaultPlugins: false.

Question
Was it intentional to use two plugin folders and store installed plugins in plugins/tmp before removing all the default plugins, regardless of what we set in the deleteDefaultPlugins key?

PS: output of the double "for" loop is here: https://pastebin.com/NYf76dxq
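One way the copy step could honor deleteDefaultPlugins: false is to delete a bundled plugin only when a downloaded jar with the same artifact name (version suffix stripped) is about to replace it. A sketch; the directory layout and jar naming convention are assumptions based on the issue above:

```shell
#!/bin/sh
# Hedged sketch of a copy step that keeps default plugins unless a
# downloaded plugin of the same artifact replaces them.

plugin_base() {
  # sonar-java-plugin-6.3.0.21585.jar -> sonar-java-plugin
  basename "$1" .jar | sed 's/-[0-9][0-9.]*$//'
}

copy_downloaded_plugins() {
  plugins_dir="$1" tmp_dir="$2"
  for new in "$tmp_dir"/*.jar; do
    [ -e "$new" ] || continue
    for old in "$plugins_dir"/*.jar; do
      [ -e "$old" ] || continue
      if [ "$(plugin_base "$new")" = "$(plugin_base "$old")" ]; then
        rm -f "$old"   # remove only the matching default plugin
      fi
    done
    mv "$new" "$plugins_dir"/
  done
}
```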

Unable to use corporate mirror

Hi,

In my company we use Artifactory to mirror external repositories, and we have a problem mirroring the Oteemo Charts repo. The problem is the "urls" entries in "index.yaml", which are not prefixed by the repo's URL ("https://oteemo.github.io/charts") but by "https://github.com/Oteemo/charts/releases/download/". As a result, the Helm client tries to download from https://github.com/Oteemo/charts/releases/download/, which fails because the corporate proxy blocks it.

An example of a repo that is working fine is "https://kubernetes-charts.storage.googleapis.com/". All "urls" tags in "index.yaml" are prefixed by "https://kubernetes-charts.storage.googleapis.com/" (eg. https://kubernetes-charts.storage.googleapis.com/acs-engine-autoscaler-2.2.1.tgz) and Artifactory sucessfully rewrites it to our Artifactory's URL (eg. https://internal.artifactory/acs-engine-autoscaler-2.2.1.tgz).

Would it be possible to move the Charts "tgz" files to "https://oteemo.github.io/charts" (eg. https://oteemo.github.io/charts/sonarqube-4.2.0.tgz)?

Regards,

Rodrigo

[sonarqube] Sonarqube does not work with ingress

Hi,

I installed Sonarqube chart with the following command:
helm install sonar stable/sonarqube --set persistence.enabled=true --set persistence.existingClaim=sonar --set ingress.annotations."kubernetes.io/ingress.class"=traefik --set ingress.hosts[0].path=/sonarqube --set ingress.enabled=true

I have a Traefik ingress. When I access the app through the Traefik LB IP, I get 404s on resource loading:
http://IP/js/vendors-main.m.d184ed05.chunk.js
http://IP/js/main.m.115b48b2.js
It seems a configuration is missing somewhere for link generation :(
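When SonarQube is served under a sub-path, it has to be told its context path or it generates asset links against the root. sonar.web.context is a standard SonarQube server property; whether this chart exposes it through a sonarProperties map (as sketched here) is an assumption:

```yaml
# Tell SonarQube its context path so asset URLs include /sonarqube.
sonarProperties:
  sonar.web.context: /sonarqube
```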

[nexus] Forbidden: is immutable after creation except resources.requests for bound claims

helm upgrade sonatype-nexus oteemocharts/sonatype-nexus --set nexus.service.type=LoadBalancer gives me a failed status Error: UPGRADE FAILED: cannot patch "sonatype-nexus-data" with kind PersistentVolumeClaim: PersistentVolumeClaim "sonatype-nexus-data" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims even though the deployment is successful

Also, using a LoadBalancer is awkward with nexusProxy.env.nexusHttpHost, since an external IP must first be allocated before that value can be set.
