
helm3-charts's Introduction


Helm3 Charts for Sonatype Products

Nexus Repository 3 Helm charts are now being published to our Nexus Repository 3 Helm Repository.

Use the new repository to obtain the latest Nexus Repository 3 charts.

These charts are designed to work out of the box with minikube using both the ingress and ingress-dns addons.

The current releases have been tested on minikube v1.12.3 running k8s v1.18.3.

User Documentation

See docs/index.md, which is also published at https://sonatype.github.io/helm3-charts/

Contributing

See the contributing document for details.

For Sonatypers, note that external contributors must sign the CLA and the Dev-Ex team must verify this prior to accepting any PR.

Updating Charts

Charts for Nexus IQ and for NXRM can be updated in charts/ directories. The most common updates will be to use new application images and to bump chart versions for release.

There should likely be no reason to update anything in docs/ by hand.

Test a chart in a local k8s cluster (such as minikube) by installing the local copy from within each chart's directory:

helm install --generate-name ./

Packaging and Indexing for Release

Sonatype CI build will package, commit, and publish to the official helm repository.

After updating charts/, run build.sh from the project root to create .tgz packages of the latest chart changes and to regenerate the index.yaml file in the docs/ directory, which is the root of the repo site.

The build process requires Helm 3.

Manually Testing the Helm Charts

To test the Helm charts locally, follow these steps:

  1. Install docker, helm, kubectl, and minikube if you don't already have them on your local workstation.
    • You can also use Docker with Kubernetes enabled instead of minikube. You don't need both.
  2. Start up minikube: minikube start
  3. Confirm minikube is up and running: minikube status
  4. List the existing pods in the cluster: kubectl get pods (There should not be anything listed at this point.)
  5. Install the helm chart in any of these ways:
    • From a copy of the source: helm install iq {path/to/your/helm3-charts}/charts/nexus-iq --wait
    • From our production online repo: Add our helm repo locally as instructed at https://sonatype.github.io/helm3-charts/
  6. List installed servers with helm: helm list
  7. Watch the server start in kubernetes by running: kubectl get pods
  8. Use the pod name you get from last command to follow the console logs: kubectl logs -f iq-nexus-iq-server-xxx
  9. Confirm expected version numbers in those logs.
  10. Forward a localhost port to a port on the running pod: kubectl port-forward iq-nexus-iq-server-xxx 8070
  11. Connect and check that your fresh new server is successfully running: http://localhost:8070/
  12. Uninstall the server with helm: helm delete iq
  13. Confirm it's gone: helm list && kubectl get pods
  14. Shutdown minikube: minikube stop

Running Lint

Helm's Lint command will highlight formatting problems in the charts that need to be corrected.

helm lint charts/nexus-iq charts/nexus-repository-manager

Running Unit Tests

To unit test the helm charts, follow these steps:

  1. Install the unittest plugin for Helm: https://github.com/quintush/helm-unittest
  2. Run the tests for each individual chart:
    • cd charts/nexus-iq; helm unittest -3 -t junit -o test-output.xml .
    • cd charts/nexus-repository-manager; helm unittest -3 -t junit -o test-output.xml .

Running Integration Tests

You can run the integration tests for the helm charts with the commands below.

Before running the integration tests:

  • Install docker, helm, kubectl, and minikube if you don't already have them on your local workstation.
    • You can also use Docker with Kubernetes enabled instead of minikube.
  • The integration tests are executed against a running cluster. Each test creates a new pod that connects to the server installed by our helm chart.

Running integration tests for Nexus IQ:

  1. From source code: helm install iq ./charts/nexus-iq --wait
  2. Run the tests: helm test iq

Running integration tests for Nexus Repository Manager:

  1. From source code: helm install nxrm ./charts/nexus-repository-manager --wait
  2. Run the tests: helm test nxrm

Further Notes on Usage

Resolver File and Ingress-DNS

Get the default values.yaml for each chart.

  • Nexus Repository: helm show values sonatype/nexus-repository-manager > repo-values.yaml
  • Nexus IQ: helm show values sonatype/nexus-iq-server > iq-values.yaml

Edit the values file you just downloaded to enable ingress support, and install the chart with those values:

  • Nexus Repository: helm install nexus-repo sonatype/nexus-repository-manager -f repo-values.yaml
  • Nexus IQ: helm install nexus-iq sonatype/nexus-iq-server -f iq-values.yaml

To use the custom values file for the demo environment, which exposes the apps on a local *.demo domain, create a resolver file. On a Mac it's /etc/resolver/minikube-minikube-demo with the following entries:

domain demo
nameserver 192.168.64.8
search_order 1
timeout 5

You'll need to update the IP address to match the running instance's IP address. Use minikube ip to get the address.
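The two steps above can be scripted; a minimal sketch, assuming the file layout shown earlier (here it writes to ./minikube-minikube-demo for safety — on a real Mac the target is /etc/resolver/minikube-minikube-demo, which needs sudo; the 192.168.64.8 fallback is just the example address from above):

```shell
# Regenerate the resolver file with the current minikube IP.
MINIKUBE_IP="$(minikube ip 2>/dev/null || echo 192.168.64.8)"
cat > ./minikube-minikube-demo <<EOF
domain demo
nameserver ${MINIKUBE_IP}
search_order 1
timeout 5
EOF
```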

Docs for ingress-dns are at https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress-dns

helm3-charts's People

Contributors

adrianpowell, amoreno-sonatype, andresortiz28, bigspotteddog, bobotimi, cmyanko, collinpeters, eduard-tita, ethompsy, fblancosona, hectorhuol, jflinchbaugh, kakumara, kamaradeivanov, kellyrob99, kentfrazier, koraytugay, lisadurant, mpuglin, mykyta, philoserf, pittolive, ruckc, scherzhaft, sonatype-ci, sonatype-zion, srini-hdp, tneer, tpokki, userseprid


helm3-charts's Issues

Add namespace support

Hi!

It would be a nice feature for the chart (NXRM at least; I did not try IQ yet) to support namespaces via Helm's --namespace option. It seems that it can only be deployed to the default namespace for now.

Maybe I'm not doing things right with this chart; I'm a total beginner with Helm :)

wrong NOTES.txt in chart

The chart's NOTES.txt, shown after chart installation, tells you to forward port 8081 to port 80 on the pod:

1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nexus-repository-manager,app.kubernetes.io/instance=nexus" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 8081:80
  Your application is available at http://127.0.0.1

but the pod created by the chart is listening on port 8081.

This line

kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8081:80
should be replaced by:

  kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8081:{{ $.Values.nexus.nexusPort }}

and this line

Your application is available at http://127.0.0.1
to:

  Your application is available at http://127.0.0.1:8081

Http to Https

How can I make the URL secure, so that the browser shows the connection as secure (HTTPS) when I open it?
Currently I get a "not secure" warning. Any help with this would be greatly appreciated.

Ingress annotations are shared between web ingress and docker registry ingress

Hello,

Context:

  • We use the helm chart to deploy Nexus on EKS using an ALB (Application Load Balancer).
  • The ALB uses specific ingress annotations for health checks
    (full list here)
alb.ingress.kubernetes.io/healthcheck-port
alb.ingress.kubernetes.io/healthcheck-protocol
alb.ingress.kubernetes.io/healthcheck-path
alb.ingress.kubernetes.io/success-codes

Problem:
The health check URL for the nexus web service is "/" with a return code of 200.
The health check URL for the docker registry service is "/v2/" with a return code of 200 or 401.

Currently the chart only makes it possible to customise annotations for all ingresses at once.

Idea of solution:

Add a nexus.docker.registries.ingress.annotations or an ingress.registries.annotations value and add it to https://github.com/sonatype/helm3-charts/blob/main/charts/nexus-repository-manager/templates/ingress.yaml#L54 as extra annotations.
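A hypothetical values layout for the proposed per-registry annotations might look like the fragment below (the key names and annotation values are illustrative, not part of the current chart):

```yaml
nexus:
  docker:
    registries:
      - host: docker.example.com
        port: 5000
        ingress:
          annotations:
            alb.ingress.kubernetes.io/healthcheck-path: /v2/
            alb.ingress.kubernetes.io/success-codes: "200,401"
```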

[nexus-repository-manager] properties override results on permissions issues on nexus-data

In the values file I enabled the properties override as follows:

properties:
  override: true
  data: 
    nexus.scripts.allowCreation: true

Then the install failed with the following error:

2021-03-19 11:11:31,229+0000 INFO  [jetty-main-1] *SYSTEM org.eclipse.jetty.server.session - node0 Stopped scavenging
2021-03-19 11:11:31,231+0000 ERROR [jetty-main-1] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer - Failed to start
com.google.inject.ProvisionException: Unable to provision, see the following errors:

1) Error injecting constructor, java.lang.RuntimeException: java.nio.file.AccessDeniedException: /nexus-data/etc/logback
  at org.sonatype.nexus.internal.log.LogbackLoggerOverrides.<init>(LogbackLoggerOverrides.java:67)
  at / (via modules: org.sonatype.nexus.extender.modules.NexusBundleModule -> org.eclipse.sisu.space.SpaceModule)
  while locating org.sonatype.nexus.internal.log.LogbackLoggerOverrides
  while locating java.lang.Object annotated with *
  at org.eclipse.sisu.wire.LocatorWiring
  while locating org.sonatype.nexus.internal.log.LoggerOverrides
    for the 3rd parameter of org.sonatype.nexus.internal.log.LogbackLogManager.<init>(LogbackLogManager.java:86)
  at / (via modules: org.sonatype.nexus.extender.modules.NexusBundleModule -> org.eclipse.sisu.space.SpaceModule)
  while locating org.sonatype.nexus.internal.log.LogbackLogManager
  while locating java.lang.Object annotated with *

1 error
        at com.google.inject.internal.InternalProvisionException.toProvisionException(InternalProvisionException.java:226)
        at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1097)
        at org.eclipse.sisu.inject.LazyBeanEntry.getValue(LazyBeanEntry.java:81)
        at org.sonatype.nexus.extender.NexusLifecycleManager.to(NexusLifecycleManager.java:111)
        at org.sonatype.nexus.extender.NexusContextListener.moveToPhase(NexusContextListener.java:321)
        at org.sonatype.nexus.extender.NexusContextListener.contextInitialized(NexusContextListener.java:181)
        at org.sonatype.nexus.bootstrap.osgi.ListenerTracker.addingService(ListenerTracker.java:47)
        at org.sonatype.nexus.bootstrap.osgi.ListenerTracker.addingService(ListenerTracker.java:1)
        at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:941)
        at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:870)
        at org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:256)
        at org.osgi.util.tracker.AbstractTracked.trackInitial(AbstractTracked.java:183)
        at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:318)
        at org.osgi.util.tracker.ServiceTracker.open(ServiceTracker.java:261)
        at org.sonatype.nexus.bootstrap.osgi.BootstrapListener.contextInitialized(BootstrapListener.java:129)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:1068)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:572)
        at org.eclipse.jetty.server.handler.ContextHandler.contextInitialized(ContextHandler.java:997)
        at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:754)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:379)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1457)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1422)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:911)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:288)
        at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at com.codahale.metrics.jetty9.InstrumentedHandler.doStart(InstrumentedHandler.java:101)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
        at org.eclipse.jetty.server.Server.start(Server.java:423)
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
        at org.eclipse.jetty.server.Server.doStart(Server.java:387)
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
        at org.sonatype.nexus.bootstrap.jetty.JettyServer$JettyMainThread.run(JettyServer.java:274)
Caused by: java.lang.RuntimeException: java.nio.file.AccessDeniedException: /nexus-data/etc/logback
        at org.sonatype.nexus.internal.app.ApplicationDirectoriesImpl.mkdir(ApplicationDirectoriesImpl.java:116)
        at org.sonatype.nexus.internal.app.ApplicationDirectoriesImpl.resolve(ApplicationDirectoriesImpl.java:134)
        at org.sonatype.nexus.internal.app.ApplicationDirectoriesImpl.getWorkDirectory(ApplicationDirectoriesImpl.java:95)
        at org.sonatype.nexus.internal.app.ApplicationDirectoriesImpl.getWorkDirectory(ApplicationDirectoriesImpl.java:100)
        at org.sonatype.nexus.internal.log.LogbackLoggerOverrides.<init>(LogbackLoggerOverrides.java:69)
        at org.sonatype.nexus.internal.log.LogbackLoggerOverrides$$FastClassByGuice$$d577229d.newInstance(<generated>)
        at com.google.inject.internal.DefaultConstructionProxyFactory$FastClassProxy.newInstance(DefaultConstructionProxyFactory.java:89)
        at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:114)
        at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
        at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:306)
        at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
        at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:168)
        at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39)
        at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:62)
        at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1094)
        at org.eclipse.sisu.inject.LazyBeanEntry.getValue(LazyBeanEntry.java:81)
        at org.eclipse.sisu.wire.BeanProviders.firstOf(BeanProviders.java:179)
        at org.eclipse.sisu.wire.BeanProviders$7.get(BeanProviders.java:160)
        at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:85)
        at com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:57)
        at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:59)
        at com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:47)
        at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:42)
        at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:65)
        at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
        at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
        at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:306)
        at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
        at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:168)
        at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:39)
        at com.google.inject.internal.FactoryProxy.get(FactoryProxy.java:62)
        at com.google.inject.internal.InjectorImpl$1.get(InjectorImpl.java:1094)
        ... 40 common frames omitted
Caused by: java.nio.file.AccessDeniedException: /nexus-data/etc/logback
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
        at java.nio.file.Files.createDirectory(Files.java:674)
        at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
        at java.nio.file.Files.createDirectories(Files.java:767)
        at org.sonatype.nexus.common.io.DirectoryHelper.mkdir(DirectoryHelper.java:143)
        at org.sonatype.nexus.internal.app.ApplicationDirectoriesImpl.mkdir(ApplicationDirectoriesImpl.java:110)
        ... 71 common frames omitted

I also tried the following init container to fix the permissions, but that did not help:

  deployment:
    initContainers:
    - name: fmp-volume-permission
      image: busybox
      imagePullPolicy: IfNotPresent
      command: ['chmod','-R', '777', '/nexus-data']
      volumeMounts:
        - name: nexus-repository-manager-data
          mountPath: /nexus-data

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-gke.7800", GitCommit:"cef3156c566a1d1a4b23ee360a760f45bfbaaac1", GitTreeState:"clean", BuildDate:"2020-12-14T09:12:37Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}

Ingress Issue

The ingress hosts are hard-coded in the ingress template.
template/ingress.yaml
rules:
  - host: iq-server.demo
    http:
      paths:
        - path: {{ $ingressPath }}
          backend:
            serviceName: {{ $fullName }}
            servicePort: 8070
  - host: admin.iq-server.demo
    http:
      paths:
        - path: {{ $ingressPath }}
          backend:
            serviceName: {{ $fullName }}
            servicePort: 8071

A solution follows:
template/ingress.yaml
rules:
  - host: {{ (index .Values.ingress.hosts 0).host }}
    http:
      paths:
        - path: {{ $ingressPath }}
          backend:
            serviceName: {{ $fullName }}
            servicePort: 8070
  - host: {{ (index .Values.ingress.hosts 1).host }}
    http:
      paths:
        - path: {{ $ingressPath }}
          backend:
            serviceName: {{ $fullName }}
            servicePort: 8071

nexus - volume error

Hi,

There are several volume issues.

Item 1: volumeMounts: was recently added:

volumeMounts:
  - mountPath: /nexus-data
    name: {{ template "nexus.fullname" . }}-data
  - mountPath: /nexus-data/backup
    name: {{ template "nexus.fullname" . }}-backup

But there is no matching volumes: section, so this error occurs:

coalesce.go:199: warning: destination for data is a table. Ignoring non-table value []
Error: Deployment.apps "nexus-stage-nexus-repository-manager" is invalid: [spec.template.spec.containers[0].volumeMounts[0].name: Not found: "nexus-stage-nexus-repository-manager-data", spec.template.spec.containers[0].volumeMounts[1].name: Not found: "nexus-stage-nexus-repository-manager-backup"]

So you need to add something like the following (example below):

        - name: {{ template "nexus.fullname" . }}-data
          {{- if .Values.persistence.enabled }}
          persistentVolumeClaim:
            claimName: {{ .Values.persistence.existingClaim | default (printf "%s-%s" (include "nexus.fullname" .) "data") }}
          {{- else }}
          emptyDir: {}
          {{- end }}

Item 2: I think the backup feature is not supported now. Remove:

            - mountPath: /nexus-data/backup	
              name: {{ template "nexus.fullname" . }}-backup

Item 3: volumes and volumeMounts are needed for the config. Add something like:

          volumeMounts:
            {{- if .Values.config.enabled }}
            - mountPath: {{ .Values.config.mountPath }}
              name: {{ template "nexus.name" . }}-conf
            {{- end }}
      volumes:
        {{- if .Values.config.enabled }}
        - name: {{ template "nexus.name" . }}-conf
          configMap:
            name: {{ template "nexus.name" . }}-conf
        {{- end }}

Incorrect indentation

The annotations field indentation appears to be incorrect here:

While installing the nexus iq chart with annotations enabled for pvc, it throws an error:
error validating "": error validating data: ValidationError(PersistentVolumeClaim): unknown field "annotations" in io.k8s.api.core.v1.PersistentVolumeClaim

A quick glance at the API (kubectl explain PersistentVolumeClaim.metadata | less) suggests that the annotations field should come under the metadata resource, at the same level as name and labels. The following fixes the issue for me.

metadata:
  name: {{ template "iqserver.fullname" . }}-data
  ...
  annotations:
  {{ toYaml .Values.persistence.annotations | indent 2 }}
  ...
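The corrected placement can be sanity-checked mechanically; a small sketch of the intended PVC shape (the name, labels, and annotation values here are illustrative):

```python
# Sketch of the corrected PVC metadata shape: annotations belong under
# metadata, alongside name and labels (values are illustrative).
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "iq-nexus-iq-server-data",
        "labels": {"app": "nexus-iq-server"},
        "annotations": {"helm.sh/resource-policy": "keep"},
    },
    "spec": {"accessModes": ["ReadWriteOnce"]},
}

# annotations must not appear at the top level of the object,
# which is what the mis-indented template produced
assert "annotations" not in pvc
assert "annotations" in pvc["metadata"]
```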

Nexus Repository manager doesn't create emptyDir when persistence.enabled=false

I'm running chart version 28.1.1 with persistence.enabled=false, but the PVC is still created and the pod hangs in the Pending state.

helm -n nexus install nexus sonatype/nexus-repository-manager --set-string persistence.enabled=false

kubectl describe pod output:

  Type    Reason         Age                     From                         Message
  ----    ------         ----                    ----                         -------
  Normal  FailedBinding  <invalid> (x2 over 0s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

kubectl -n nexus get pvc:

NAME                                  STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nexus-nexus-repository-manager-data   Pending                                                     5m27s
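With persistence disabled, the data volume would be expected to render as an emptyDir rather than a PVC — a sketch of the intended output (the volume name is taken from the release above; this is not what the chart currently produces):

```yaml
# Expected rendered volume when persistence.enabled=false (sketch):
volumes:
  - name: nexus-nexus-repository-manager-data
    emptyDir: {}
```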

Conflicting README Information Regarding Use of StatefulSets

This README says that, in order to use a persistent volume, you should enable statefulset instead of the default deployment. However, values.yaml says it's not supported, and there are no templates relating to a StatefulSet.

How are you supposed to guarantee that a pod reconnects to its PVC when it's recreated, without a StatefulSet?

Unable to execute HTTP request: Timeout waiting for connection from pool

Hello team,

We are running OSS Nexus Sonatype on Kubernetes via helm and, from time to time, we receive the following error: Unable to execute HTTP request: Timeout waiting for connection from pool
The only fix is to restart the pod.

Could you please tell us what the problem might be?
nexsus-logs.txt

Attached you can find all the logs. We also increased the Java memory for the process, but no luck.

Thank you,
-Ionut

Configmap name mismatch

iqserver.fullname is used for the configmap in the deployment:

  - name: config-volume
    configMap:
      name: {{ template "iqserver.fullname" . }}

But the configmap itself is generated as iqserver.fullname-conf:

  name: {{ template "iqserver.fullname" . }}-conf

Causing: MountVolume.SetUp failed for volume "config-volume" : configmap "nexus-iq-nexus-iq-server" not found

In my case the configmap was installed to nexus-iq-nexus-iq-server-conf which did not equal nexus-iq-nexus-iq-server and caused the error above.

[nexus-repository-manager] Bug in imagePullSecret

Hi,

the values file defines imagePullSecrets, but the template uses nexus.imagePullSecret:

{{- if .Values.nexus.imagePullSecret }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}

And please remove the image-pull-secret.yaml file; you don't need to create the secret for the user.

In most use cases, I think the syntax below is what's expected:

imagePullSecrets:
  - name: secret-name

Helm Repository not working

I followed the instructions provided at https://github.com/sonatype/helm3-charts/blob/main/docs/index.md

Adding the helm repository does not work:

$ helm repo add sonatype https://sonatype.github.io/helm3-charts/
Error: looks like "https://sonatype.github.io/helm3-charts/" is not a valid chart repository or cannot be reached: failed to fetch https://sonatype.github.io/helm3-charts/index.yaml : 404 Not Found

Here's my helm version:

$ helm version
version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"clean", GoVersion:"go1.16.5"}

several suggestions

Hi,

First of all, I was really looking forward to an officially provided helm chart. Now it exists.

But I think this chart is at an early stage.

  1. When I use statefulset.enabled: true, this chart does not work.
    I would also like to know whether nexus really supports (or needs) a statefulset.

  2. config is not supported now.
    Please refer to https://github.com/Oteemo/charts/blob/8f81bbda4913893fddeec7e6f9c033ce32e2a0a6/charts/sonatype-nexus/values.yaml#L207-L210
    Or, how can I use a secret instead of a configmap when creating a file?

data:
  keycloak.json: |
    {
      "realm": "test",
      "auth-server-url": "https://company.net/keycloak/",
      "ssl-required": "external",
      "resource": "nexus",
      "credentials": {
        "secret": "blah"
      },
      "confidential-port": 0
    }
  3. Some users cannot use the docker repo, so I think ingress.yaml needs something like:

{{ if .Values.ingress.hostDocker }}
  - host: <snip>
{{ end }}

Thanks,

[nexus-repository-manager] Bug in NOTES.txt

A stray {{ . }} exists in the templates/NOTES.txt file:
1. Get the application URL by running these commands: {{- if .Values.ingress.enabled }} http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $.Values.ingress.hostRepo }}{{ . }}

It causes the toString() output of the spec yaml to be printed on deployment...
[...id.commap[Capabilities:0xc0004cbb90 Chart:0xc00095c480 Files:map[.helmignore:[35 32 80 97 116 116 101 114 110 115 32 116 111 32 105 103 110 111 114 101 32 119 104 101 110 32 98 117 105 108 100 105 110 103 32 112 97 99 107 97 103 101 115 46 10 35...]

Image pull secrets not working as expected

{{- if .Values.nexus.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}

should be something like :

      imagePullSecrets:
        {{- toYaml  .Values.nexus.imagePullSecrets  | nindent 8 }}
      {{- end }}

or :

      imagePullSecrets:
        {{- with .Values.nexus.imagePullSecrets }}
          {{ toYaml . | nindent 8 }}
        {{- end }}
      {{- end }}

The imagePullSecret value in the default values file is at the top level, not under the nexus section (but it should be on the nexus level).

Nexus Behind Reverse Proxy Nginx

Hey Folks,
I've tried to follow this guide https://help.sonatype.com/repomanager3/installation/run-behind-a-reverse-proxy (and others) to access my nexus3 container.
However, when I try to access it, I just see broken links because files can't be found.
I assume my nginx config is not working properly, but I don't know which knob needs to be turned here.
Is there anything special to consider with nexus in particular? Any base path to be adjusted?

All hints highly appreciated.

My current nginx config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: confnginx
data:
  nginx.conf: |

    upstream nexus-upstream {
      server nexus:8081;
    }
    server {
      listen 80;
      location /nexus/ {
        rewrite ^/nexus(.*) $1 break;
        proxy_pass http://nexus-upstream;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }

What I see in any browser is a broken page (screenshot attached to the original issue).

Logs from nginx showing that the required CSS files can't be found/accessed:

192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /nexus/ HTTP/1.1" 200 8810 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *1 open() "/etc/nginx/html/static/rapture/resources/loading-prod.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/resources/loading-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/rapture/resources/loading-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *1 open() "/etc/nginx/html/static/rapture/resources/nexus-proui-plugin-prod.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/resources/nexus-proui-plugin-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/rapture/resources/nexus-proui-plugin-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/nexus-proui-bundle.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *3 open() "/etc/nginx/html/static/nexus-proui-bundle.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/nexus-proui-bundle.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
2021/09/02 12:32:42 [error] 23#23: *5 open() "/etc/nginx/html/static/rapture/resources/baseapp-prod.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/resources/baseapp-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/rapture/resources/baseapp-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *4 open() "/etc/nginx/html/static/rapture/resources/nexus-rapture-prod.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/resources/nexus-rapture-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/rapture/resources/nexus-rapture-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *7 open() "/etc/nginx/html/static/rapture/resources/nexus-coreui-plugin-prod.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/resources/nexus-coreui-plugin-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/rapture/resources/nexus-coreui-plugin-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *6 open() "/etc/nginx/html/static/rapture/resources/nexus-proximanova-plugin-prod.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/resources/nexus-proximanova-plugin-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/rapture/resources/nexus-proximanova-plugin-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/rapture/resources/nexus-onboarding-plugin-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *1 open() "/etc/nginx/html/static/rapture/resources/nexus-onboarding-plugin-prod.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/resources/nexus-onboarding-plugin-prod.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
2021/09/02 12:32:42 [error] 23#23: *3 open() "/etc/nginx/html/static/nexus-rapture-bundle.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/nexus-rapture-bundle.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/nexus-rapture-bundle.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/nexus-coreui-bundle.css?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *5 open() "/etc/nginx/html/static/nexus-coreui-bundle.css" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/nexus-coreui-bundle.css?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
2021/09/02 12:32:42 [error] 23#23: *3 open() "/etc/nginx/html/static/rapture/baseapp-prod.js" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/baseapp-prod.js?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/rapture/baseapp-prod.js?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *5 open() "/etc/nginx/html/static/rapture/extdirect-prod.js" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/extdirect-prod.js?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/rapture/extdirect-prod.js?_v=3.33.1-01&_e=OSS HTTP/1.1" 404 555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-"
2021/09/02 12:32:42 [error] 23#23: *1 open() "/etc/nginx/html/static/nexus-coreui-bundle.js" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/nexus-coreui-bundle.js?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"
192.168.1.50 - - [02/Sep/2021:12:32:42 +0000] "GET /static/nexus-coreui-bundle.js?_v=3.33.1-01&_e=OSS HTTP/1.1" 404
555 "http://192.168.1.51/nexus/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" "-" 2021/09/02 12:32:42 [error] 23#23: *6 open() "/etc/nginx/html/static/rapture/bootstrap.js" failed (2: No such file or directory), client: 192.168.1.50, server: , request: "GET /static/rapture/bootstrap.js?_v=3.33.1-01&_e=OSS HTTP/1.1", host: "192.168.1.51", referrer: "http://192.168.1.51/nexus/"

container repository ingress configuration

It would be nice if the container registry ingress configuration were better documented; we only got it working after significant strife reverse engineering how templates/ingress.yaml is read.

Also, to get cert-manager to generate certificates for both our Nexus and container registry domains, we ended up having to create a separate Ingress rule, because cert-manager did not add both ingress.tls.hosts entries to the secret.

Separately, what about multiple container repositories? Can they share the same port, or do we need a separate FQDN/certificate/port for each one?
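For reference, a standalone Ingress per extra host is one way to get cert-manager to issue a certificate per host. The sketch below is illustrative only: the hostname, secret name, issuer, service name, and port 5000 are assumptions, not chart defaults.

```yaml
# Hypothetical standalone Ingress for the container registry host,
# created alongside the chart release so cert-manager issues its
# certificate independently of the main Nexus Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nexus-docker-registry
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer name
spec:
  tls:
    - hosts:
        - docker.example.com          # assumed registry FQDN
      secretName: nexus-docker-tls    # cert-manager writes the cert here
  rules:
    - host: docker.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nexus-repository-manager   # assumed service name
                port:
                  number: 5000                   # assumed docker connector port
```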

Accessing Nexus repository manager password in a kubernetes pod

I have installed Sonatype Nexus Repository Manager in my Kubernetes cluster using the Helm chart.

I am using Kyma installation.

Nexus Repository Manager installed properly, and I can access the application.

But it seems the login password file lives on a PersistentVolumeClaim mounted at /nexus-data in the pod.

Now, whenever I try to access the pod with a kubectl exec command:

kubectl exec -i -t $POD_NAME -n dev -- /bin/sh
I get the following error:

OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
I understand this happens because the image does not provide a shell. Is there any other way I can access the password file on the PVC?
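Two options that may work, sketched under the assumption that the password file is at /nexus-data/admin.password (the Nexus 3 default) and that the main container is named `nexus-repository-manager` (verify with `kubectl get pod`):

```shell
# Option 1: kubectl exec runs the binary directly, so if the image ships
# `cat` even without /bin/sh, this can read the file without a shell:
kubectl exec -n dev "$POD_NAME" -- cat /nexus-data/admin.password

# Option 2: attach an ephemeral debug container that shares the pod's
# process namespace (requires a cluster/kubectl supporting `kubectl debug`):
kubectl debug -n dev -it "$POD_NAME" --image=busybox \
  --target=nexus-repository-manager -- sh
# inside the debug shell, the target container's filesystem is reachable
# through /proc of its PID 1:
cat /proc/1/root/nexus-data/admin.password
```

Both commands need a live cluster, so treat them as a starting point rather than a verified recipe.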

Provide option to use non-beta API, some beta APIs are now deprecated

Eventually, Kubernetes 1.22 will arrive and these beta APIs will be removed:
$ helm install nexus -f nexus-values.yaml sonatype/nexus-repository-manager --namespace nexus-project
W0501 22:30:31.490411 5728 warnings.go:67] networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
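Until the chart supports this natively, a common Helm pattern is to branch on cluster capabilities so the same template renders either API version. This is a sketch: the `nexus-repository-manager.fullname` helper and port 8081 are assumptions and may not match the chart's actual helpers.

```yaml
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else }}
apiVersion: networking.k8s.io/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ include "nexus-repository-manager.fullname" . }}
spec:
  rules:
    - host: {{ .Values.ingress.hostRepo }}
      http:
        paths:
          {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
          # v1 requires pathType and uses the nested service/port shape
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ include "nexus-repository-manager.fullname" . }}
                port:
                  number: 8081
          {{- else }}
          # v1beta1 keeps the flat serviceName/servicePort shape
          - path: /
            backend:
              serviceName: {{ include "nexus-repository-manager.fullname" . }}
              servicePort: 8081
          {{- end }}
```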

Support to block /swagger-ui

Hi,

This is basically a feature request: it would be nice to have support in the ingress template for blocking /swagger-ui. Other people have asked for this as well: https://community.sonatype.com/t/is-it-possible-to-protect-or-disable-the-swagger-ui/6077

For the AWS Load Balancer Controller, a flag in the values file to block swagger-ui could trigger a conditional block that adds an additional rule:

- host: {{ .Values.ingress.hostRepo }}
  http:
    paths:
      - backend:
          serviceName: response-404
          servicePort: use-annotation
        path: /swagger-ui*

And the annotation to the ingress:

alb.ingress.kubernetes.io/actions.response-404: >
  {"Type":"fixed-response","FixedResponseConfig":{"ContentType":"text/plain","StatusCode":"404","MessageBody":"404 error text"}}

Chart fails when `nexus.extraLabels` is used

Adding a nexus.extraLabels mapping causes templating to fail with the following message:

Error: YAML parse error on nexus-repository-manager/templates/deployment.yaml: error converting YAML to JSON: yaml: line 11: mapping values are not allowed in this context

Looking at the output with --debug, it appears that the first entry in the extraLabels map is indented too far, which causes the problem.

The template files seem to use the syntax:

  {{- if .Values.nexus.extraLabels }}
    {{- with .Values.nexus.extraLabels }}
      {{ toYaml . | indent 4 }}
    {{- end }}
  {{- end }}

I believe switching from indent to nindent should fix the issue.
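For illustration, the corrected template could look like the sketch below: `nindent` first emits a newline and then indents every rendered line uniformly, so the first label no longer inherits the template's own leading whitespace (the surrounding `labels:` context and indent width are assumptions, not the chart's exact file; the outer `if` is also redundant because `with` already skips empty values).

```yaml
  labels:
    {{- with .Values.nexus.extraLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
```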

In NOTES.txt, $.Values.ingress.hostRepo.host can't evaluate field host

With 23.1.5 this evening, I get this:

Error: Failed to render chart: exit status 1: Error: template: nexus-repository-manager/templates/NOTES.txt:3:52: executing "nexus-repository-manager/templates/NOTES.txt" at <$.Values.ingress.hostRepo.host>: can't evaluate field host in type interface {}
Use --debug flag to render out invalid YAML
Error: plugin "diff" exited with error
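The error suggests NOTES.txt indexes a `.host` field into `ingress.hostRepo`, which the values file defines as a plain string. A guarded sketch of what that NOTES.txt line could look like instead (assuming `hostRepo` stays a string and `ingress.enabled` exists in values):

```yaml
{{- if .Values.ingress.enabled }}
  http://{{ .Values.ingress.hostRepo }}/
{{- end }}
```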

nexus authentication configuration

It would be beneficial if nexus-repository-manager supported configuring SAML and/or LDAP through the Helm chart deployment, along with licenses. Also, what values does the ConfigMap support: JSON, YAML, or properties?

Use existing secrets

I use helmfile to deploy this chart, and I need to be able to do so without checking in the licence and other sensitive information. Being able to reference an existing Secret for the licence and config.yml would help very much.
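To make the request concrete, here is a hypothetical values shape; none of these keys necessarily exist in the chart today, and all names are illustrative:

```yaml
nexus:
  # reference a pre-created Secret instead of embedding the licence inline
  licenseSecret:
    existingSecret: nexus-licence   # hypothetical: Secret holding the .lic file
    key: licence.lic
config:
  existingSecret: nexus-config      # hypothetical: Secret holding config.yml
```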

[sonatype/nexus-repository-manager] warning message when install

Hi,

as you know, there is a warning message during install (shown below). Please update the NOTES message as well.

One more thing: please make this Helm chart a first-class deployment method for Nexus, fully supported by Sonatype. This chart still seems less active than Oteemo's.

$ k create ns nexus-test
namespace/nexus-test created
$ helm install -n nexus-test nexus-test -f nexus-test-values.yaml sonatype/nexus-repository-manager
coalesce.go:199: warning: destination for data is a table. Ignoring non-table value []
NAME: nexus-test
LAST DEPLOYED: Tue Dec  1 15:29:36 2020
NAMESPACE: nexus-test
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
