
charts's Introduction

  • 👋 Hi, I'm a father of a 4-year-old baby 👨🏼‍🍼

charts's People

Contributors

7onn, alekhrycaiko, ashish1099, brianwawok, cablespaghetti, cbarton, cvirus, dlay42, imaemo, junkiebev, kjake, klavsklavsen, kongz, lkeijser, luizjr, paulopontesm, pparthesh, romulus-ai, rsvalerio, scottrigby, sekka1, strainovic, titansmc, tklovett, tmeneau, velkovb, vlinevych, weixcloud, yevgeny-z, yokhahn



charts's Issues

Ability to create multiple TCP inputs with different configurations

What happened:
This isn't a bug; I need the ability to create multiple TCP inputs with different configurations.

For example, I need to create one TCP input without a certificate and another TCP input with a certificate.
At present this isn't possible; I have to split service.yaml out manually and apply it by hand.

What you expected to happen:
For example, values.yaml could look like this.

input:
  tcp:
    # First input: plain TCP, no certificate
    - service:
        type: LoadBalancer
        loadBalancerIP:
      ports:
        - name: beats
          port: 5044
        - name: gelf
          port: 12201
    # Second input: TLS terminated at the AWS load balancer
    - service:
        type: LoadBalancer
        loadBalancerIP:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:XXXXXXXs:certificate/XXXX-XXXX--XXX-XXXX
      ports:
        - name: gelfhttp
          port: 12202

This should create two different Services of type LoadBalancer. (Note: the original snippet repeated the tcp: key, which is invalid in a YAML mapping, so a list form is shown above.)

Pulling kubectl from a container rather than a URL

I've hit the same issue as documented (and fixed) in #25, but my proposed fix is slightly different (and in my opinion, better 😄)

Rather than pulling from a URL, I switched the init container image to bitnami/kubectl:<version> - a vaguely-official upstream image containing the latest kubectl binaries, and updated the init script to copy the binary from there.
This completely removes the need to pull the kubectl binary from an external service (removing any risk of potentially transient internet issues, handling proxy configuration in restricted networks, etc.)

I'm happy to submit a PR to implement the above, if it sounds like something you'd be happy with.

The Bitnami images contain 'just enough' operating system for the other init commands to function as expected, as they're based on minideb - there's no significant size/traffic increase compared to the existing setup, and caching docker images locally is much easier in large scale environments than needing a static internal webserver (in my experience)!
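A sketch of the proposed init container, assuming the chart keeps copying the binary into the shared /k8s volume that entrypoint.sh expects (the volume name and the copy command are assumptions on my part, not chart code):

```yaml
initContainers:
  - name: get-kubectl
    image: bitnami/kubectl:1.21   # pin roughly to the cluster version
    # Copy the bundled kubectl into the volume shared with the main container
    command: ["sh", "-c", "cp $(command -v kubectl) /k8s/kubectl"]
    volumeMounts:
      - name: kubectl
        mountPath: /k8s
```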

Input configuration

I am trying to use the Graylog Helm chart to get an instance running in AKS. I am stuck defining a simple input. The deployment works without errors, but when I check the inputs in the Graylog UI there is no input.
The input should only be reachable from services inside the cluster.

That's my input definition:

…
  input:
    tcp:
      service:
        name: mms-input
        type: ClusterIP
      ports:
        - name: gelfhttp
          port: 12201
…

Is there anything I forgot or got wrong?

Cannot migrate from stable 1.6.3 to kongz 1.7.0 version

Describe the bug

Error upgrading from stable/graylog to kongz/graylog. Reason:

Upgrade "graylog" failed: cannot patch "graylog" with kind StatefulSet: StatefulSet.apps "graylog" is invalid: spec.updateStrategy: Invalid value: apps.StatefulSetUpdateStrategy{Type:"Recreate", RollingUpdate:(*apps.RollingUpdateStatefulSetStrategy)(nil)}: must be 'RollingUpdate' or 'OnDelete' 

and

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ServiceAccount "graylog-mongodb" in namespace "graylog" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "graylog"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "graylog"

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.5.0", GitCommit:"32c22239423b3b4ba6706d450bd044baffdcf9e6", GitTreeState:"clean", GoVersion:"go1.15.6"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.14-gke.1600", GitCommit:"7c407f5cc8632f9af5a2657f220963aa7f1c46e7", GitTreeState:"clean", BuildDate:"2020-12-07T09:22:27Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:

1.7.0

What happened:

Upgrade "graylog" failed: cannot patch "graylog" with kind StatefulSet: StatefulSet.apps "graylog" is invalid: spec.updateStrategy: Invalid value: apps.StatefulSetUpdateStrategy{Type:"Recreate", RollingUpdate:(*apps.RollingUpdateStatefulSetStrategy)(nil)}: must be 'RollingUpdate' or 'OnDelete' 

and

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ServiceAccount "graylog-mongodb" in namespace "graylog" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "graylog"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "graylog"

What you expected to happen:

Upgrade passed.

How to reproduce it (as minimally and precisely as possible):

helm install graylog stable/graylog --version 1.6.3
helm upgrade graylog kongz/graylog --version 1.7.0

Anything else we need to know:

N/A
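For readers hitting the same pair of errors, a commonly used workaround is to let Helm 3 adopt the pre-existing resources and to recreate the StatefulSet without touching its pods. This is a sketch, not an official chart procedure (release, namespace, and resource names must match your installation; --cascade=orphan needs kubectl >= 1.20, older clients use --cascade=false):

```shell
# Mark the old ServiceAccount so Helm 3 accepts ownership of it
kubectl -n graylog annotate serviceaccount graylog-mongodb meta.helm.sh/release-name=graylog
kubectl -n graylog annotate serviceaccount graylog-mongodb meta.helm.sh/release-namespace=graylog
kubectl -n graylog label serviceaccount graylog-mongodb app.kubernetes.io/managed-by=Helm

# Delete the StatefulSet object but keep its pods running, so the
# invalid updateStrategy can be replaced on the next upgrade
kubectl -n graylog delete statefulset graylog --cascade=orphan

helm upgrade graylog kongz/graylog --version 1.7.0
```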

Helm3 selector label for statefulset

Describe the bug
Currently, the StatefulSet template in our chart defines the selector labels like this:

    matchLabels:
      app.kubernetes.io/instance: graylog
      app.kubernetes.io/managed-by: Tiller
      app.kubernetes.io/name: graylog

It looks fine, but when we migrate the chart to Helm 3, the label changes slightly:

    selector:
      matchLabels:
        app.kubernetes.io/name: graylog
        app.kubernetes.io/instance: "hf-graylog"
-       app.kubernetes.io/managed-by: "Tiller"
+       app.kubernetes.io/managed-by: "Helm"

But this is impossible because the selector field is immutable, so we cannot patch the StatefulSet the way helm diff shows it.
The well-known error appears:

Error: UPGRADE FAILED: cannot patch "graylog" with kind StatefulSet: StatefulSet.apps "graylog" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden

Version of Helm and Kubernetes:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T21:51:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
helm3 version
version.BuildInfo{Version:"v3.4.1", GitCommit:"c4e74854886b2efe3321e185578e6db9be0a6e29", GitTreeState:"clean", GoVersion:"go1.14.11"}

Which version of the chart:
graylog-1.7.0

What happened:
Cannot patch/update for graylog statefulset

What you expected to happen:
The selector should not include app.kubernetes.io/managed-by: "Helm", so that the StatefulSet patch succeeds.

How to reproduce it (as minimally and precisely as possible):

Just update the Helm chart to v3 and run helm-diff:

helm3 diff upgrade graylog test/graylog --allow-unreleased
Anything else we need to know:
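A workaround often used for the immutable-selector problem (not an official chart fix; adjust the namespace to your installation): delete the StatefulSet while orphaning its pods, then let Helm 3 recreate it with the new selector labels:

```shell
# Keep the Graylog pods running while the StatefulSet object is recreated
kubectl -n <namespace> delete statefulset graylog --cascade=orphan
helm3 upgrade graylog test/graylog
```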

Upgrade to 4.0.2 image fails. Entrypoint and ConfigMap need adjustments.

Describe the bug
The chart is unable to deploy using the newest docker image 4.0.2-1.

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.5.0", GitCommit:"32c22239423b3b4ba6706d450bd044baffdcf9e6", GitTreeState:"clean", GoVersion:"go1.15.6"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"archive", BuildDate:"2020-11-25T13:19:56Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.5", GitCommit:"e338cf2c6d297aa603b50ad3a301f761b4173aa6", GitTreeState:"clean", BuildDate:"2020-12-09T11:10:32Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart: 1.7.2

What happened:
Just tried to upgrade to the new docker image using this chart and the release fails with missing plugins. The Graylog pod crashes with the error message:

2021-01-28 18:17:40,213 ERROR   [CmdLineTool] - Guice error (more detail on log level debug): No implementation for java.util.Map<org.graylog2.plugin.Version, javax.inject.Provider<org.graylog.events.search.MoreSearchAdapter>> was bound.
  Did you mean?
    org.graylog2.plugin.Version annotated with @com.google.inject.name.Named(value=elasticsearch_version) bound  at com.github.joschi.jadconfig.guice.NamedConfigParametersModule.registerParameters(NamedConfigParametersModule.java:80)

    org.graylog.events.search.MoreSearchAdapter bound  at org.graylog2.storage.VersionAwareStorageModule.configure(VersionAwareStorageModule.java:54)

    org.graylog2.plugin.Version annotated with interface org.graylog2.storage.ElasticsearchVersion bound  at org.graylog2.bindings.ElasticsearchModule.configure(ElasticsearchModule.java:28)
 - {}

I raised the issue with the Docker image project (Graylog2/graylog-docker#150). But it turns out the official image changed its entrypoint; it handles plugin directories in a different way.

Since the chart injects a custom entrypoint, it is unaware of the new directory structure and fails to copy the necessary plugins in order to start Graylog correctly with basic Elasticsearch support.

What you expected to happen: The chart needs to support the official docker image as is.

How to reproduce it (as minimally and precisely as possible):
Manually update values.yaml with:

graylog:
  image:
    repository: "graylog/graylog:4.0.2-1"

Using TLS Let's Encrypt certificates from cert-manager

I am looking for a way to get cert-manager keys from a Secret into the installation. Currently, if I understand correctly, I need to manually update serverFiles in the values every 3 months.

Am I missing something?
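Since cert-manager refreshes the certificate Secret automatically on renewal, one way to avoid the recurring edit would be to mount that Secret into the pod instead of pasting key material into serverFiles. A sketch, assuming the chart exposed generic extra volumes (the value names extraVolumes/extraVolumeMounts are hypothetical here):

```yaml
graylog:
  extraVolumes:
    - name: graylog-tls
      secret:
        secretName: graylog-tls        # Secret kept current by cert-manager
  extraVolumeMounts:
    - name: graylog-tls
      mountPath: /usr/share/graylog/tls
      readOnly: true
```

The Graylog server config would then point its TLS certificate and key settings at files under /usr/share/graylog/tls, which cert-manager rotates in place.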

Upgrade Chart to most recent GrayLog Version

Please upgrade the default Image to 4.1.2.
Images are available: https://hub.docker.com/r/graylog/graylog/tags?page=1&ordering=last_updated

Why don't you use the Major.Minor tag, like 4.0 or 4.1? Then you wouldn't need to update the chart so frequently.
I've seen that it's documented in the source:

## Important note: Official Graylog Docker image may replace the existing Docker image tags and cause some corrupt when starting the pod.

But I don't understand why. Are there any specific upgrade routines needed?

Thanks

gh action sync-readme workflow fails because gh-pages branch is protected

See https://github.com/KongZ/charts/runs/1364678753

remote: error: GH006: Protected branch update failed for refs/heads/gh-pages.
remote: error: At least 1 approving review is required by reviewers with write access.
To https://github.com/KongZ/charts
! [remote rejected] gh-pages -> gh-pages (protected branch hook declined)
error: failed to push some refs to 'https://github.com/KongZ/charts'
Error: Process completed with exit code 1.

Parent directory /usr/share/graylog/data/journal for Node ID file at /usr/share/graylog/data/journal/node-id is not writable

Hello everyone and thanks for your work.
I have a little issue when installing this chart.

I have already deployed mongo and elastic on my cluster.
The cluster is composed of 3 nodes (1 master, 2 workers) running with ubuntu 20.04.1 and all nodes use the same NFS server to create PVC with the help of the nfs-subdir-external-provisioner as storage class.

If I set persistence to false everything works well, but when I set it to true I get the following error:

/entrypoint.sh: line 10: /k8s/kubectl: No such file or directory
/entrypoint.sh: line 11: /k8s/kubectl: No such file or directory
Current master is
Self IP is
Launching graylog-0 as master
/entrypoint.sh: line 17: /k8s/kubectl: No such file or directory
Starting graylog
Graylog Home /usr/share/graylog
Graylog Plugin Dir /usr/share/graylog/plugin
Graylog Elasticsearch Version 7
2021-09-17 14:40:36,942 ERROR [CmdLineTool] - Invalid configuration - {}
com.github.joschi.jadconfig.ValidationException: Parent directory /usr/share/graylog/data/journal for Node ID file at /usr/share/graylog/data/journal/node-id is not writable
	at org.graylog2.Configuration$NodeIdFileValidator.validate(Configuration.java:377) ~[graylog.jar:?]
	at org.graylog2.Configuration$NodeIdFileValidator.validate(Configuration.java:359) ~[graylog.jar:?]
	at com.github.joschi.jadconfig.JadConfig.validateParameter(JadConfig.java:215) ~[graylog.jar:?]
	at com.github.joschi.jadconfig.JadConfig.processClassFields(JadConfig.java:148) ~[graylog.jar:?]
	at com.github.joschi.jadconfig.JadConfig.process(JadConfig.java:99) ~[graylog.jar:?]
	at org.graylog2.bootstrap.CmdLineTool.processConfiguration(CmdLineTool.java:420) [graylog.jar:?]
	at org.graylog2.bootstrap.CmdLineTool.run(CmdLineTool.java:236) [graylog.jar:?]
	at org.graylog2.bootstrap.Main.main(Main.java:45) [graylog.jar:?]

I tried to chown the folder of the created PVC "journal-graylog-0" to 1100, but it does not help.
I also tried the privileged option, but without success.
Do you have an idea of what I am doing wrong here?

Thanks

init container

Hello,

Could we have an option to enable/disable the init container?

Thx
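A sketch of what such a toggle could look like (both the value name and the template guard are hypothetical):

```yaml
# values.yaml
graylog:
  init:
    enabled: true   # set to false to skip the kubectl-download init container
```

The statefulset template would then wrap the whole initContainers entry in {{- if .Values.graylog.init.enabled }} … {{- end }}.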

Graylog - IndexFieldTypePollerPeriodical Errors after graylog restart

Describe the bug
Installed latest version of Graylog chart, without built in Elasticsearch subchart. The Elasticsearch cluster is installed using official chart, version 7.13.2. After Graylog statefulset instance is restarted, logs are flooded with E11000 errors
Helm Version:v3.5.4
Kubernetes Version: 1.21.2
Chart version: 9.3.1

2021-07-12 18:26:46,126 ERROR   [IndexFieldTypePollerPeriodical] - Couldn't update field types for index set <Default index set/60ec872f122d06319a7f9c2d> - {}
com.mongodb.DuplicateKeyException: Write failed with error code 11000 and error message 'E11000 duplicate key error collection: graylog.index_field_types index: index_name_1 dup key: { index_name: "graylog_0" }'
	at com.mongodb.operation.BaseWriteOperation.convertBulkWriteException(BaseWriteOperation.java:191) ~[graylog.jar:?]
	at com.mongodb.operation.BaseWriteOperation.execute(BaseWriteOperation.java:155) ~[graylog.jar:?]
	at com.mongodb.operation.BaseWriteOperation.execute(BaseWriteOperation.java:52) ~[graylog.jar:?]
	at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:213) ~[graylog.jar:?]
	at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:182) ~[graylog.jar:?]
	at com.mongodb.DBCollection.executeWriteOperation(DBCollection.java:356) ~[graylog.jar:?]
	at com.mongodb.DBCollection.update(DBCollection.java:588) ~[graylog.jar:?]
	at com.mongodb.DBCollection.update(DBCollection.java:507) ~[graylog.jar:?]
	at com.mongodb.DBCollection.update(DBCollection.java:482) ~[graylog.jar:?]
	at com.mongodb.DBCollection.update(DBCollection.java:459) ~[graylog.jar:?]
	at org.mongojack.JacksonDBCollection.update(JacksonDBCollection.java:516) ~[graylog.jar:?]
	at org.mongojack.JacksonDBCollection.update(JacksonDBCollection.java:592) ~[graylog.jar:?]
	at org.graylog2.indexer.fieldtypes.IndexFieldTypesService.lambda$upsert$0(IndexFieldTypesService.java:82) ~[graylog.jar:?]
	at com.github.rholder.retry.AttemptTimeLimiters$NoAttemptTimeLimit.call(AttemptTimeLimiters.java:78) ~[graylog.jar:?]
	at com.github.rholder.retry.Retryer.call(Retryer.java:160) ~[graylog.jar:?]
	at org.graylog2.database.MongoDBUpsertRetryer.run(MongoDBUpsertRetryer.java:56) ~[graylog.jar:?]
	at org.graylog2.indexer.fieldtypes.IndexFieldTypesService.upsert(IndexFieldTypesService.java:82) ~[graylog.jar:?]
	at java.util.Optional.ifPresent(Optional.java:159) ~[?:1.8.0_282]
	at org.graylog2.indexer.fieldtypes.IndexFieldTypePollerPeriodical.lambda$schedule$4(IndexFieldTypePollerPeriodical.java:220) ~[graylog.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_282]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_282]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_282]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_282]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_282]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_282]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]

Stateful set pulls kubectl binaries from public urls

Sorry to bother you again.
The init container defined in statefulset.yaml tries to pull the kubectl binary from a public URL. This unfortunately fails for our internal (offline) cluster. Do you think it would be legitimate to make this download optional (an enable flag, something like graylog.init.downloadKubectl)? I'd then use an image for the setup container that already contains kubectl.

wget https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl -O /k8s/kubectl

Thanks!
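The init script quoted in other issues already honors graylog.init.kubectlLocation, so an offline cluster can redirect the download to an internal mirror without a new flag (the mirror URL below is a placeholder):

```yaml
graylog:
  init:
    kubectlVersion: "v1.19.7"
    kubectlLocation: "http://mirror.internal.example/kubectl/v1.19.7/kubectl"
```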

Run as non-root

Hello again,

Could you add a security option for Kubernetes?

Like this in values.yaml:

containerSecurityContext:
  enabled: true
  runAsUser: 1001
  runAsNonRoot: true

And in statefulset.yaml:

resources:
{{ toYaml .Values.graylog.init.resources | indent 12 }}
{{- end }}
{{- if .Values.graylog.extraInitContainers }}
{{ toYaml .Values.graylog.extraInitContainers | indent 8 }}
{{- end }}
{{- if .Values.containerSecurityContext.enabled }}
securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }}
{{- end }}

thx

Chart behind on versions

The latest Graylog version is 4.2.3 (and 4.1.9), both with a very important Log4j security fix.
Unfortunately this chart still ships 4.1.3 :(

Are there plans to upgrade it? Would you accept a PR? Or is it not being maintained anymore?

chart assumes client uses http if tls is not enabled directly on graylog

The problem is, if you for example have Traefik in front and it manages certificates (and terminates the TLS connection there), then I would like to set external_uri to https://mygraylogurl instead of the chart forcing http:// on everything :)

This is unfortunately deeply embedded in the chart. I'll try to do a PR, if you'll accept it?
The plan would be to add an externalUriTLS boolean setting; if it's true, override URLs to use https no matter what tls is set to.

Using existing secret for elastisearch hosts results in calls to https://https://<host>

Storing the Elasticsearch hosts in an existing secret results in _helpers.tpl adding a scheme, which results in Graylog calling https://https://:9200.

{{- if .Values.graylog.elasticsearch.uriSecretKey }}
{{- if .Values.graylog.elasticsearch.uriSSL }}
{{- printf "https://${GRAYLOG_ELASTICSEARCH_HOST}" -}}
{{- else }}
{{- printf "http://${GRAYLOG_ELASTICSEARCH_HOST}" -}}
{{- end }}
{{- else if .Values.graylog.elasticsearch.hosts }}

This is not a problem when you work with a single Elasticsearch endpoint, but Graylog allows you to supply a comma-separated list of Elasticsearch hosts, each of which has to include a scheme (otherwise Graylog dies on startup).
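A minimal sketch of a possible fix in _helpers.tpl: when the hosts come from a secret, emit the environment variable without prepending a scheme and require each entry in the secret to carry its own scheme (this is a proposal, not the current chart code):

```
{{- if .Values.graylog.elasticsearch.uriSecretKey }}
{{- /* The secret may hold a comma-separated host list; each entry must
       already include http:// or https://, so add nothing here. */ -}}
{{- printf "${GRAYLOG_ELASTICSEARCH_HOST}" -}}
{{- else if .Values.graylog.elasticsearch.hosts }}
```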

/entrypoint.sh: line 17: /k8s/kubectl: No such file or directory

Describe the bug
Entrypoint points to a static kubectl file inside the pod and fails:
/entrypoint.sh: line 17: /k8s/kubectl: No such file or directory

Version of Helm and Kubernetes:

Helm Version:

$ helm version
flux helm-controller 0.12.1

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:15:20Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
1.8.10

What happened:
graylog fails to determine the MASTER IP when running with multiple replicas because the entrypoint.sh ConfigMap points to a missing kubectl file:

MASTER_IP=`/k8s/kubectl --namespace graylog get pod -o jsonpath='{range .items[*]}{.metadata.name} {.status.podIP}{"\n"}{end}' -l graylog-role=master --field-selector=status.phase=Running|awk '{print $2}'`
SELF_IP=`/k8s/kubectl --namespace graylog get pod $HOSTNAME -o jsonpath='{.status.podIP}'`

inside the pod:

graylog@graylog-0:~$ ls -la /k8s/
total 0
drwxrwxrwx. 2 root root  6 Nov 15 17:00 .
drwxr-xr-x. 1 root root 71 Nov 15 17:03 ..

What you expected to happen:
Graylog cannot run with multiple replicas because the master IP lookup fails.

How to reproduce it (as minimally and precisely as possible):
see above

Anything else we need to know:

[graylog] Add priorityClass definitions

Describe the bug
There is no way to configure priority classes for the Graylog Helm Chart pods.

Version of Helm and Kubernetes:

Helm Version: 3.5.0

Kubernetes Version: v1.18.20

Which version of the chart: 1.7.10

What happened: Need to set priority to the Graylog pods using K8s priority classes.

What you expected to happen: A way to set the priorityClassName for the pods in values.yaml

Describe the solution you'd like (as minimally and precisely as possible):

Add the graylog.priorityClassName parameter in values.yaml.
This parameter should be defined in the statefulset spec:

 template:
    ...
    spec:
      ...
      containers:
      ...
      priorityClassName: {{ .Values.graylog.priorityClassName }}
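For completeness, the class referenced from the pod spec would be defined cluster-side like this (name and value are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: graylog-high
value: 100000                 # larger values are scheduled first / evicted last
globalDefault: false
description: "Priority for Graylog log-ingestion pods"
```

values.yaml would then set graylog.priorityClassName: graylog-high.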

init container with support for https proxy

Describe the bug

When in an environment that requires an http/s proxy wget from the init container fails to download kubectl binary.

Connecting to 172.28.33.135:3128 (172.28.33.135:3128)
wget: error getting response
chmod: /k8s/kubectl: No such file or directory
/entrypoint.sh: line 10: /k8s/kubectl: No such file or directory
/entrypoint.sh: line 11: /k8s/kubectl: No such file or directory
Current master is 
Self IP is 
Launching graylog-0 as master
/entrypoint.sh: line 17: /k8s/kubectl: No such file or directory

https://gitlab.alpinelinux.org/alpine/aports/-/issues/10446

I've temporarily hacked together another image which includes a 'real' version of wget, but it's less than ideal for several reasons.

Any chance we can switch the default to another image that has a real version of wget already installed? Or use curl?

Unable to get elasticsearch working

Installed using the helm chart, stuck at this:

{
  "cluster_name" : "elasticsearch",
  "status" : "red",
  "timed_out" : false,
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : "NaN"
}

Graylog shows Elasticsearch cluster is red. Shards: 0 active, 0 initializing, 0 relocating, 0 unassigned

Any ideas how to get this fixed or where to start debugging?

Source IP is logged with internal cluster

This might not be related to graylog at all but I am new to kube so it's still a giant mess for me.

I have an Ingress Controller as a load balancer with the TCP forwarding rule 12222: graylog/graylog-tcp:12222
I have a Graylog input configured as ClusterIP with port 12222

It works fine and I am able to connect via telnet. But all logs show IP 10.244.2.200, which I assume is some local cluster IP in my Kubernetes cluster.

How should I resolve this? Is this an issue with the Ingress Controller, or is it some config in the Graylog service?

Thanks

my helm values:

graylog:
  input:
    tcp:
      service:
        type: ClusterIP
      ports:
        - name: rawtcp
          port: 12222
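The default externalTrafficPolicy: Cluster makes kube-proxy SNAT incoming traffic, which is usually why an internal 10.x address shows up as the source. The standard fix goes on the Service that actually receives the external traffic, here the ingress controller's LoadBalancer Service rather than the ClusterIP input. A sketch (the field is standard Kubernetes; the Service name is a hypothetical example, and where your controller's chart exposes it varies):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # hypothetical controller Service name
spec:
  type: LoadBalancer
  # Route external traffic only to node-local endpoints so the client
  # source IP is preserved instead of being SNATed by kube-proxy
  externalTrafficPolicy: Local
  ports:
    - name: graylog-rawtcp
      port: 12222
      targetPort: 12222
```

Alternatively, enable PROXY protocol end to end where the load balancer and controller support it.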

Error connecting to mongodb from graylog

Describe the bug
When deploying the chart in a Kubernetes cluster whose domain name is not "cluster.local", the graylog pod fails to connect to mongodb because the service name is forced to ".cluster.local":

2020-11-20 10:27:01,112 INFO [cluster] - Exception in monitor thread while connecting to server graylog-mongodb.msp-graylog.svc.cluster.local:27017 - {}
com.mongodb.MongoSocketException: graylog-mongodb.msp-graylog.svc.cluster.local: Name or service not known

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.4.1", GitCommit:"c4e74854886b2efe3321e185578e6db9be0a6e29", GitTreeState:"clean", GoVersion:"go1.14.11"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
1.7

What happened:
Graylog-0 pod not connecting to mongodb
Graylog is looking for service "graylog-mongodb.msp-graylog.svc.cluster.local" instead of "graylog-mongodb.msp-graylog.svc.MY-CLUSTER-DOMAIN.local"

What you expected to happen:
Connection to MongoDB

How to reproduce it (as minimally and precisely as possible):

kubernetes domain is not "cluster.local"
helm -n msp-graylog install graylog kongz/graylog

Anything else we need to know:
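Until the chart parameterizes the cluster domain, one workaround is to override the MongoDB connection string entirely, assuming your chart version exposes graylog.mongodb.uri (check values.yaml for your version; the domain below mirrors the placeholder used in this report):

```yaml
graylog:
  mongodb:
    uri: "mongodb://graylog-mongodb.msp-graylog.svc.MY-CLUSTER-DOMAIN.local:27017/graylog?replicaSet=rs0"
```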

Journal Size not configurable

Describe the bug
The Helm chart does not allow configuring the size of the journal. By default this is 5 GB; however, there are cases, especially in K8s (e.g. when all Filebeats are restarted at once), where massive peaks need to be handled. Here a bigger journal size is a must-have.

Version of Helm and Kubernetes:

Helm Version:
version.BuildInfo{Version:"v3.3.0-rc.1", GitCommit:"5c2dfaad847df2ac8f289d278186d048f446c70c", GitTreeState:"dirty", GoVersion:"go1.14.4"}

Kubernetes Version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
graylog-1.7.11

What happened:
See above; it's not a bug, it's a feature request.

What you expected to happen:
Make journal size configurable

How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
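On the Graylog server side the journal size is controlled by the documented message_journal_max_size setting; if the chart allows appending server configuration, the request boils down to something like this (the graylog.config value name is an assumption):

```yaml
graylog:
  config: |
    message_journal_max_size = 20gb
```

Note that the journal volume (and any PVC backing it) must be at least as large as the configured size.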

Graylog self-address as master or coordinating node not secure

The current way to elect master or slave nodes is poorly implemented.

As it stands, the init container pulls in an external dependency, kubectl:

{{- if .Values.graylog.init.kubectlLocation }}
wget {{ .Values.graylog.init.kubectlLocation }} -O /k8s/kubectl
{{- else }}
wget https://storage.googleapis.com/kubernetes-release/release/{{ .Values.graylog.init.kubectlVersion | default .Capabilities.KubeVersion.Version }}/bin/linux/amd64/kubectl -O /k8s/kubectl
{{- end }}
chmod +x /k8s/kubectl

Which is then used to query the Kubernetes API to decide whether the current pod should run as the Graylog master or as a coordinating node:

# Looking for Master IP
MASTER_IP=`/k8s/kubectl --namespace {{ .Release.Namespace }} get pod -o jsonpath='{range .items[*]}{.metadata.name} {.status.podIP}{"\n"}{end}' -l graylog-role=master --field-selector=status.phase=Running|awk '{print $2}'`
SELF_IP=`/k8s/kubectl --namespace {{ .Release.Namespace }} get pod $HOSTNAME -o jsonpath='{.status.podIP}'`
echo "Current master is $MASTER_IP"
echo "Self IP is $SELF_IP"
if [[ -z "$MASTER_IP" ]]; then
  echo "Launching $HOSTNAME as master"
  export GRAYLOG_IS_MASTER="true"
  /k8s/kubectl --namespace {{ .Release.Namespace }} label --overwrite pod $HOSTNAME graylog-role="master"
else
  # When the container is recreated or restarted, MASTER_IP == SELF_IP: it is already running as master and there is no need to change the graylog-role="master" label
  if [ "$SELF_IP" == "$MASTER_IP" ]; then
    export GRAYLOG_IS_MASTER="true"
  else
    # MASTER_IP != SELF_IP, running as coordinating
    echo "Launching $HOSTNAME as coordinating"
    export GRAYLOG_IS_MASTER="false"
    /k8s/kubectl --namespace {{ .Release.Namespace }} label --overwrite pod $HOSTNAME graylog-role="coordinating"
  fi
fi

This has two main problems:

  1. Introduces a dependency on having a way to download kubectl locally (on-prem deployments or bare-metal ones may not have access to the internet)
  2. This breaks the k8s abstraction of making sure that workloads do not need to interface with the Kubernetes API.

I propose abandoning this method in favor of a more sensible way to signal each pod what to do. (I'm not too familiar with the product, but I'm more than happy to work on a Helm-side solution.)
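One kubectl-free alternative (my suggestion, not current chart behavior): StatefulSet pods get stable ordinal hostnames, so the entrypoint can derive the role from the ordinal and let pod 0 always be the master, with no API-server access at all:

```shell
#!/bin/sh
# Sketch: derive the Graylog role from the StatefulSet pod ordinal.
# In a real pod HOSTNAME is injected by Kubernetes (e.g. graylog-0);
# it is hard-coded here only to keep the sketch self-contained.
HOSTNAME="graylog-0"

ordinal="${HOSTNAME##*-}"            # "graylog-0" -> "0"
if [ "$ordinal" = "0" ]; then
  export GRAYLOG_IS_MASTER="true"    # pod 0 always takes the master role
else
  export GRAYLOG_IS_MASTER="false"
fi
echo "GRAYLOG_IS_MASTER=$GRAYLOG_IS_MASTER"
```

The trade-off is that the master role is pinned to pod 0 instead of failing over, but it removes both the kubectl download and the Kubernetes API dependency.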

Graylog not running due to MongoDB "cannot resolve host" / "Name or service not known" errors

Hi,
I am running the Helm chart version https://artifacthub.io/packages/helm/kong-z/graylog/1.7.4

When the MongoDB pod starts, it gives me this error:

2021-04-30T21:15:39.280496009Z mongodb 21:15:39.27 INFO  ==> ** Starting MongoDB setup **
2021-04-30T21:15:39.295730088Z mongodb 21:15:39.29 INFO  ==> Validating settings in MONGODB_* env vars...
2021-04-30T21:15:39.297866239Z mongodb 21:15:39.29 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
2021-04-30T21:15:39.316485058Z mongodb 21:15:39.31 INFO  ==> Initializing MongoDB...
2021-04-30T21:15:39.338779337Z mongodb 21:15:39.33 INFO  ==> Deploying MongoDB from scratch...
2021-04-30T21:15:40.282615079Z mongodb 21:15:40.28 INFO  ==> Creating users...
2021-04-30T21:15:40.284855437Z mongodb 21:15:40.28 INFO  ==> Users created
2021-04-30T21:15:40.310032821Z mongodb 21:15:40.30 INFO  ==> Configuring MongoDB replica set...
2021-04-30T21:15:40.317475548Z mongodb 21:15:40.31 INFO  ==> Stopping MongoDB...
2021-04-30T21:15:42.834718518Z mongodb 21:15:42.83 INFO  ==> Trying to connect to MongoDB server graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local...
2021-04-30T21:15:42.913794118Z cannot resolve host "graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local": lookup graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local: no such host
2021-04-30T21:15:47.932019406Z cannot resolve host "graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local": lookup graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local: no such host
2021-04-30T21:15:52.951088854Z cannot resolve host "graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local": lookup graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local: no such host
2021-04-30T21:15:58.029791824Z cannot resolve host "graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local": lookup graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local: no such host
2021-04-30T21:16:03.110453834Z cannot resolve host "graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local": lookup graylog-mongodb-0.graylog-mongodb-headless.svfb.svc.cluster.local: no such host

Anyone have an idea on what I can do to solve this issue ?

In the meantime on the Graylog side

2021-04-30T21:16:48.425748931Z 2021-04-30 21:16:48,425 INFO    [LogManager] - Logs loading complete. - {}
2021-04-30T21:16:48.428776245Z 2021-04-30 21:16:48,428 INFO    [KafkaJournal] - Initialized Kafka based journal at /usr/share/graylog/data/journal - {}
2021-04-30T21:16:48.454634976Z 2021-04-30 21:16:48,454 INFO    [cluster] - Cluster created with settings {hosts=[graylog-mongodb-headless.svfb.svc.cluster.local:27017], mode=MULTIPLE, requiredClusterType=REPLICA_SET, serverSelectionTimeout='30000 ms', maxWaitQueueSize=5000, requiredReplicaSetName='rs0'} - {}
2021-04-30T21:16:48.454684180Z 2021-04-30 21:16:48,454 INFO    [cluster] - Adding discovered server graylog-mongodb-headless.svfb.svc.cluster.local:27017 to client view of cluster - {}
2021-04-30T21:16:48.540199336Z 2021-04-30 21:16:48,539 INFO    [cluster] - No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@6dded900 from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=graylog-mongodb-headless.svfb.svc.cluster.local:27017, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out - {}
2021-04-30T21:16:48.546681095Z 2021-04-30 21:16:48,539 INFO    [cluster] - Exception in monitor thread while connecting to server graylog-mongodb-headless.svfb.svc.cluster.local:27017 - {}
2021-04-30T21:16:48.546699096Z com.mongodb.MongoSocketException: graylog-mongodb-headless.svfb.svc.cluster.local: Name or service not known
2021-04-30T21:16:48.546704297Z 	at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:211) ~[graylog.jar:?]
2021-04-30T21:16:48.546708997Z 	at com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:75) ~[graylog.jar:?]
2021-04-30T21:16:48.546713597Z 	at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[graylog.jar:?]
2021-04-30T21:16:48.546717898Z 	at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:128) ~[graylog.jar:?]
2021-04-30T21:16:48.546722298Z 	at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) [graylog.jar:?]
2021-04-30T21:16:48.546727598Z 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_282]
2021-04-30T21:16:48.546732199Z Caused by: java.net.UnknownHostException: graylog-mongodb-headless.svfb.svc.cluster.local: Name or service not known
2021-04-30T21:16:48.546736499Z 	at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[?:1.8.0_282]
2021-04-30T21:16:48.546740599Z 	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) ~[?:1.8.0_282]
2021-04-30T21:16:48.546744800Z 	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) ~[?:1.8.0_282]
2021-04-30T21:16:48.546749000Z 	at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[?:1.8.0_282]
2021-04-30T21:16:48.546753200Z 	at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[?:1.8.0_282]
2021-04-30T21:16:48.546757400Z 	at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[?:1.8.0_282]
2021-04-30T21:16:48.546761401Z 	at com.mongodb.ServerAddress.getSocketAddresses(ServerAddress.java:203) ~[graylog.jar:?]
2021-04-30T21:16:48.546765701Z 	... 5 more

Helm Chart with little Modifications

# Default values for Graylog.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

rbac:
  # Specifies whether RBAC resources should be created
  ##
  create: true

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  ##
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  ##
  name:

graylog:

  persistence:
    ## If true, Graylog will create/use a Persistent Volume Claim
    ## If false, use emptyDir
    ##
    enabled: true
    ## Graylog data Persistent Volume access modes
    ## Must match those of existing PV or dynamic provisioner
    ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
    ##
    accessMode: ReadWriteOnce
    ## Graylog data Persistent Volume size
    ##
    size: "20Gi"
    ## Graylog data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
    ##   GKE, AWS & OpenStack)
    ##
    # storageClass: "ssd"

  ## Additional plugins you need to install on Graylog.
  ##
  plugins: []
    # - name: graylog-plugin-slack-2.7.1.jar
    #   url: https://github.com/omise/graylog-plugin-slack/releases/download/2.7.1/graylog-plugin-slack-2.7.1.jar
    # - name: graylog-plugin-function-check-diff-1.0.0.jar
    #   url: https://github.com/omise/graylog-plugin-function-check-diff/releases/download/1.0.0/graylog-plugin-function-check-diff-1.0.0.jar
    # - name: graylog-plugin-custom-alert-condition-1.0.0.jar
    #   url: https://github.com/omise/graylog-plugin-custom-alert-condition/releases/download/v1.0.0/graylog-plugin-custom-alert-condition-1.0.0.jar
    # - name: graylog-plugin-auth-sso-3.0.0.jar
    #   url: https://github.com/Graylog2/graylog-plugin-auth-sso/releases/download/3.0.0/graylog-plugin-auth-sso-3.0.0.jar

  ## A service for Graylog web interface
  ##
  service:
    type: ClusterIP
    port: 9000

    master:
      ## Graylog master service Ingress annotations
      ##
      annotations:
        service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      ## Graylog master service port.
      ##

  ## Additional input ports for receiving logs from servers
  ## Note: Name must be in IANA_SVC_NAME (at most 15 characters, matching regex [a-z0-9]([a-z0-9-]*[a-z0-9])*; it must contain at least one letter [a-z], and hyphens cannot be adjacent to other hyphens)
  ## Note: Array must be sorted by port order
  ##
  input:
    tcp:
      service:
        type: LoadBalancer
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      ports:
        - name: raw-tcp-6655
          port: 6655
        - name: gelf-tcp-12201
          port: 12201
        - name: filebeat-5044
          port: 5044

  metrics:
    ## If true, prometheus annotations will be attached
    ##
    enabled: true

  ingress:
    ## If true, Graylog server Ingress will be created
    ##
    enabled: true
    ## Graylog server Ingress annotations
    ##
    annotations:
      kubernetes.io/ingress.class: nginx
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    ## Graylog server Ingress hostnames with optional path
    ## Must be provided if Ingress is enabled
    ## Note: Graylog does not support two URL. You can specify only single URL
    ##
    hosts:
       - graylog-test.${config_DOMAIN}

    ## Graylog server Ingress TLS configuration
    ## Secrets must be manually created in the namespace
    ##
    tls:
    #   - secretName: graylog-server-tls
       - hosts:
           - graylog.${config_DOMAIN}
  provisioner:
      enabled: true
      script: |
        json='{
          "title":"Raw TCP",
          "type":"org.graylog2.inputs.raw.tcp.RawTCPInput",
          "configuration":{
            "bind_address":"0.0.0.0",
            "port":6655
            },
          "global":true
        }'
        printf "Launching Raw TCP..."
        # 5 min for the first request to wait for graylog-web init
        curl --output /dev/null --silent \\
          -u "admin:$GRAYLOG_PASSWORD_SECRET" \\
          -H "Content-Type: application/json" -H "x-requested-by: localhost" \\
          -d "$json" http://graylog-web:9000/api/system/inputs \\
          --connect-timeout 10 \\
          --retry 20 \\
          --max-time 20 \\
          --retry-max-time 300 \\
          --retry-connrefused

        if [[ $? -ne 0 ]]; then
          echo " Creation failed."
        else
          echo
        fi

        json='{
          "title":"GELF TCP",
          "type":"org.graylog2.inputs.gelf.tcp.GELFTCPInput",
          "configuration":{
            "bind_address":"0.0.0.0",
            "port":12201
          },
          "global":true
        }'
        printf "Launching GELF TCP..."
        curl --output /dev/null --silent \\
          -u "admin:$GRAYLOG_PASSWORD_SECRET" \\
          -H "Content-Type: application/json" -H "x-requested-by: localhost" \\
          -d "$json" http://graylog-web:9000/api/system/inputs \\
          --connect-timeout 10 \\
          --retry 20 \\
          --max-time 20 \\
          --retry-max-time 120 \\
          --retry-connrefused

        if [[ $? -ne 0 ]]; then
          echo " Creation failed."
        else
          echo
        fi

        json='{
          "title":"Raw UDP",
          "type":"org.graylog2.inputs.raw.udp.RawUDPInput",
          "configuration":{
            "bind_address":"0.0.0.0",
            "port":6556
          },
          "global":true
        }'
        printf "Launching Raw UDP..."
        curl --output /dev/null --silent \\
          -u "admin:$GRAYLOG_PASSWORD_SECRET" \\
          -H "Content-Type: application/json" -H "x-requested-by: localhost" \\
          -d "$json" http://graylog-web:9000/api/system/inputs \\
          --connect-timeout 10 \\
          --retry 20 \\
          --max-time 20 \\
          --retry-max-time 120 \\
          --retry-connrefused

        if [[ $? -ne 0 ]]; then
          echo " Creation failed."
        else
          echo
        fi

        json='{
          "title":"GELF UDP",
          "type":"org.graylog2.inputs.gelf.udp.GELFUDPInput",
          "configuration":{
            "bind_address":"0.0.0.0",
            "port":6601
          },
          "global":true
        }'
        printf "Launching GELF UDP..."
        curl --output /dev/null --silent \\
          -u "admin:$GRAYLOG_PASSWORD_SECRET" \\
          -H "Content-Type: application/json" -H "x-requested-by: localhost" \\
          -d "$json" http://graylog-web:9000/api/system/inputs \\
          --connect-timeout 10 \\
          --retry 20 \\
          --max-time 20 \\
          --retry-max-time 120 \\
          --retry-connrefused

        if [[ $? -ne 0 ]]; then
          echo " Creation failed."
        else
          echo
        fi

        json='{
          "title":"Filebeats",
          "type":"org.graylog.plugins.beats.Beats2Input",
          "configuration":{
            "bind_address":"0.0.0.0",
            "port":5044,
            "no_beats_prefix":false
          },
          "global":true
        }'
        printf "Launching Beats2Input..."
        curl --output /dev/null --silent \\
          -u "admin:$GRAYLOG_PASSWORD_SECRET" \\
          -H "Content-Type: application/json" -H "x-requested-by: localhost" \\
          -d "$json" http://graylog-web:9000/api/system/inputs \\
          --connect-timeout 10 \\
          --retry 20 \\
          --max-time 20 \\
          --retry-max-time 120 \\
          --retry-connrefused

        if [[ $? -ne 0 ]]; then
          echo " Creation failed."
        else
          echo
        fi

## Specify Elasticsearch version from requirement dependencies. Ignore this section if you install Elasticsearch manually.
## Note: Graylog 2.4 requires Elasticsearch version <= 5.6
elasticsearch:
  image:
    repository: "docker.elastic.co/elasticsearch/elasticsearch-oss"
    tag: "6.8.13"
  client:
    replicas: 2
  master:
    replicas: 3
  data:
    replicas: 2
  cluster:
    env:
      MINIMUM_MASTER_NODES: 1
    xpackEnable: false

mongodb:
  architecture: "replicaset"
  useStatefulSet: true
  replicaCount: 3
  auth:
    enabled: false

Please let me know if additional information is needed.
The original plan is to migrate from stable chart version 1.6.10.
Not sure what I am doing wrong here.

Not able to see the results in UI

When deploying Graylog using the Helm chart, I am not able to see the logs in the UI.


Version of Helm and Kubernetes:

Helm Version:v3.4.0+g7090a89
kubernetes: 1.19

$ helm version

v3.4.0+g7090a89

Kubernetes Version:

$ kubectl version

v1.19.7
Which version of the chart:

graylog graylog 1 2021-06-08 14:03:13.269769401 +0000 UTC deployed graylog-1.7.10 4.0.6
What happened:

Unable to see the Logs in the ui

What you expected to happen:

I should able to see the logs in UI

How to reproduce it (as minimally and precisely as possible):

Deploy the Helm chart and expose it using a LoadBalancer.

Anything else we need to know:

Remove deprecated Elasticsearch dependency

Describe the bug
Chart should use the official Elasticsearch chart instead of deprecated stable/elasticsearch.

Which version of the chart:
1.7.0

Up for debate:
I'm not sure if a migration tool should be looked into, if needed, to make it easier to update for users running the deprecated Elasticsearch version.

web-UI of graylog throwing an error for version less than current of 4.1.0 ( current in chart is 4.0.6 )

Describe the bug
web-UI of graylog throwing an error for version less than current of 4.1.0 ( current in chart is 4.0.6 )

" You are running an outdated Graylog version. (triggered an hour ago)
The most recent stable Graylog version is 4.1.0 (Noir) released at 2021-06-23T00:00:00.000Z. Get it from https://www.graylog.org/."

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
both version: 1.7.10 and version: 1.7.11

What happened:
The alerts bell had an alert so I went and looked at it.

What you expected to happen:
No alert bell?

How to reproduce it (as minimally and precisely as possible):
Reinstall a fresh instance.

Anything else we need to know:

Add ability to create AWS load balancer with internal scheme.

Describe the problem
This chart creates an AWS load balancer with scheme "internet-facing"; please allow creating a load balancer with scheme "internal".

What you expected to happen:
Please add a field to values.yaml to allow or deny creating an internal load balancer.

Maybe something like this:

input:
  tcp:
    service:
      type: LoadBalancer
      scheme: internal   # proposed new field
      loadBalancerIP:
    ports:
      - name: gelf
        port: 12222

and add this annotation to service-tcp.yaml:

annotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"

not able to get indices from elasticsearch

Hi,

Thanks for giving us this chart. I am facing a few problems getting Graylog up.

I have deployed the chart using Helm 3 in a Kubernetes cluster. All pods are up and running, but Graylog gives an error.


Caused by: org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
        at org.elasticsearch.cluster.block.ClusterBlocks.indicesBlockedException(ClusterBlocks.java:229) ~[elasticsearch-6.8.13.jar:6.8.13]
        at org.elasticsearch.action.admin.indices.alias.get.TransportGetAliasesAction.checkBlock(TransportGetAliasesAction.java:57) ~[elasticsearch-6.8.13.jar:6.8.13]
        at org.elasticsearch.action.admin.indices.alias.get.TransportGetAliasesAction.checkBlock(TransportGetAliasesAction.java:39) ~[elasticsearch-6.8.13.jar:6.8.13]
...... omitted

The Elasticsearch cluster status is red, and I am not able to view any indices; it shows the error below.

{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"}],"type":"cluster_block_exception","reason":"blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"},"status":503}

Can you please help me figure out how to solve this?

Thank you.

Regards,
James

Increase the ES index "total_fields_limit" to 3000 or make it adjustable

Hi, recently we faced an issue where Graylog kept reporting that the index field limit of 1000 had been reached.
Since the ES default is 1000 and that doesn't appear to be enough, the chart should update its index template (graylog-internal) so that every newly created index is created with the higher field limit.

Either it should be set to 3000, or there should be a setting where the user can choose whatever limit they want.
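For reference, the relevant Elasticsearch setting is index.mapping.total_fields.limit, a standard ES index setting; a sketch of the settings fragment that would need to end up in the graylog-internal index template (the index_patterns value and the limit of 3000 are illustrative, taken from this request):

```json
{
  "index_patterns": ["graylog_*"],
  "settings": {
    "index.mapping.total_fields.limit": 3000
  }
}
```

Every index created from the template after the change would pick up the new limit; existing indices would need the setting applied directly.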

same loadBalancerIP reused in both web and master templates

I'm unsure what's supposed to happen if you set the service.type to LoadBalancer and set the loadBalancerIP.

The templates appear set up to support this, however web-service.yaml and master-service.yaml re-use the same IP address (graylog.service.loadBalancerIP) for both services in this case.

Was the master service supposed to have a separate address, or did I misunderstand the intent of the configuration?

Graylog cluster launches all nodes as master or several nodes as master

Describe the bug
While performing upgrades of the helm chart/graylog version, multiple masters come up instead of one.

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.3.3", GitCommit:"55e3ca022e40fe200fbc855938995f40b2a68ce0", GitTreeState:"clean", GoVersion:"go1.14.9"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
graylog-1.7.10

What happened:
After installing a multiple node cluster, all Graylog nodes come up as masters, with a warning message displayed.

What you expected to happen:
I expected the init script to initialize the first node as master and all subsequent nodes as workers.

How to reproduce it (as minimally and precisely as possible):

image:
    repository: "graylog/graylog:4.0.6"
persistence:
    enabled: true
    accessMode: ReadWriteOnce
    size: "50Gi"
    storageClass: "ebs-sc"
helm -n graylog upgrade --install -f values.yml graylog kongz/graylog --version 1.7.10

Anything else we need to know:
Rolling back to version 1.7.5 does bring up one master and N workers correctly.

Troubles installing chart with ingress

Hello everyone, I'm trying to install the Graylog chart with ingress enabled and I'm running into:

Error: Ingress.extensions "graylog-web" is invalid: spec: Invalid value: []networking.IngressRule(nil): either 'backend' or 'rules' must be specified

My values file only contains:

graylog:
  ingress:
    enabled: true

I believe it's a minor thing but not sure what I'm missing.

EDIT:

Sorry I must've misunderstood that other things default to something

  ingress:
    enabled: true
    annotations: {}
    labels: {}
    hosts:
      - myhost.com
    extraPaths: 
      - path: /graylog
        backend:
          serviceName: graylog-web
          servicePort: graylog
    tls: []

Configuration for GELF HTTP input

Hello.

I apologize in advance if this is a noob question, I'm fairly new at this.

Long story short, I have managed to deploy Graylog in a self-managed K8s cluster, and everything works pretty well.
I am facing an issue right now: per requirements, I need to send logs via HTTP from outside the cluster on port 443.
My idea was to use an Ingress that routes incoming traffic from myIngressHost port 443 to my GELF HTTP input on, say, port 12201.
I tried configuring the Ingress to use a secret and set the endpoint to myGraylogMasterService port 12201. I have started the input in the UI. Alas, I get Connection Refused.
I looked into configuring the input via the Helm chart, as required, but I only found examples of TCP or UDP inputs. Using those together with an Ingress does not seem (easily) possible.
How should the Helm chart be deployed so I can use a GELF HTTP input?
Thanks for the responses and sorry again for the noob question.
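One option, following the provisioner pattern shown in the values pasted earlier in these issues: declare the GELF HTTP input via the provisioner script so it exists on startup, then point the Ingress at the service port that exposes it. A sketch of the input definition that would be POSTed to /api/system/inputs the same way the other inputs are (the title and port here are illustrative; the type string is the standard Graylog GELF HTTP input class):

```json
{
  "title": "GELF HTTP",
  "type": "org.graylog2.inputs.gelf.http.GELFHttpInput",
  "configuration": {
    "bind_address": "0.0.0.0",
    "port": 12201
  },
  "global": true
}
```

The Connection Refused above is consistent with the Ingress targeting a port no input is actually listening on yet.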

Hello, thanks, but you need to change the syntax of the ingress as well, this is changed in the new api:


Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0]): missing required field "pathType" in io.k8s.api.networking.v1.HTTPIngressPath]

Originally posted by @ws-prive in #63 (comment)
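For reference, the error above is the networking.k8s.io/v1 Ingress API rejecting the old v1beta1 backend shape: in v1, the backend is a map with a service object, and every path needs a pathType. A sketch of the v1 form (host, service name, and port below are illustrative):

```yaml
rules:
  - host: graylog.example.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: graylog-web
              port:
                number: 9000
```

The chart's ingress template would need to emit this shape when the cluster serves networking.k8s.io/v1.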

Ingress is not working

Describe the bug

Unable to install graylog with enabling ingress

Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:41:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
1.7.8

What happened:

$helm install --debug --dry-run -n graylog graylog -f values.yaml .

install.go:172: [debug] Original chart version: ""
install.go:189: [debug] CHART PATH: /root/graylog

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0]): invalid type for io.k8s.api.networking.v1beta1.HTTPIngressPath: got "string", expected "map"
helm.go:94: [debug] error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0]): invalid type for io.k8s.api.networking.v1beta1.HTTPIngressPath: got "string", expected "map"
helm.sh/helm/v3/pkg/kube.scrubValidationError
/home/circleci/helm.sh/helm/pkg/kube/client.go:566
helm.sh/helm/v3/pkg/kube.(*Client).Build
/home/circleci/helm.sh/helm/pkg/kube/client.go:159
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/circleci/helm.sh/helm/pkg/action/install.go:255
main.runInstall
/home/circleci/helm.sh/helm/cmd/helm/install.go:242
main.newInstallCmd.func2
/home/circleci/helm.sh/helm/cmd/helm/install.go:120
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:93
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373
unable to build kubernetes objects from release manifest
helm.sh/helm/v3/pkg/action.(*Install).Run
/home/circleci/helm.sh/helm/pkg/action/install.go:257
main.runInstall
/home/circleci/helm.sh/helm/cmd/helm/install.go:242
main.newInstallCmd.func2
/home/circleci/helm.sh/helm/cmd/helm/install.go:120
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/[email protected]/command.go:950
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/[email protected]/command.go:887
main.main
/home/circleci/helm.sh/helm/cmd/helm/helm.go:93
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1373

What you expected to happen:
Should install graylog with proper externalUri

How to reproduce it (as minimally and precisely as possible):
Change the value of ingress.enabled, hosts and extraPath

values.yaml (only put values which differ from the defaults)
graylog:
  ingress:
    enabled: true
    hosts: [graylog-0]
    extraPaths: [/graylog]

(i.e. graylog.ingress.enabled: true, graylog.ingress.hosts: [graylog-0], graylog.ingress.extraPaths: [/graylog])
helm install --debug --dry-run -n graylog graylog -f values.yaml .

Anything else we need to know:
If I create the Graylog Ingress independently, the web interface does not get a proper externalUri. I think that to get a proper externalUri, the Ingress should be created by the Graylog Helm chart.

API Browser not using externalUri configuration


Version of Helm and Kubernetes:

Helm Version:

$ helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}

Kubernetes Version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"14f897abdc7b57f0850da68bd5959c9ee14ce2fe", GitTreeState:"clean", BuildDate:"2021-01-22T17:29:38Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Which version of the chart:
4.0.2

What happened:

API Browser not using externalUri configuration

What you expected to happen:

If I use externalUri: 20.73.112.187:9000
the api browser should use the url, 20.73.112.187:9000/api/api-browser but is being exposed as
http://graylog-0.graylog.graylog.svc.cluster.local:9000/api/api-browser

Or maybe expose another option to configure the API external URI?

How to reproduce it (as minimally and precisely as possible):

Use the following values.yaml

tags:
  install-mongodb : false
  install-elasticsearch: false
graylog: 
  replicas: 1
  externalUri: 20.73.112.187:9000
  mongodb:
    uri: mongodb://graylog:graylogdev2021!@mongodb:27017/graylog
  elasticsearch:
    hosts: http://elasticsearch-master:9200
    version: 7
  input:
    tcp:
      service:
        type: LoadBalancer
        externalTrafficPolicy: Local
        loadBalancerIP:
      ports:
        - name: gelf1
          port: 12222
        - name: gelf2
          port: 12223
    udp:
      service:
        type: ClusterIP
      ports:
        - name: syslog
          port: 5410

  service:
    type: LoadBalancer
  persistence:
    enabled: true
    storageClass: default-retain

Anything else we need to know:

Graylog 4.2.1 won't start

It seems like Graylog 4.2.1 won't start.

Repro steps:

  1. Helm install with values.graylog.image set to 'graylog/graylog:4.2.1-1-jre11'.

Logs for me show the following:

Current master is 10.230.182.92
Self IP is 10.230.182.92
Downloading https://github.com/graylog-labs/graylog-plugin-metrics-reporter/releases/download/3.0.0/metrics-reporter-prometheus-3.0.0.jar ...
Starting graylog
sh: 1: message_journal_dir: not found
Graylog Home /usr/share/graylog
Graylog Plugin Dir /usr/share/graylog/plugin
Graylog Elasticsearch Version 7
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Unrecognized VM option 'UseParNewGC'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

Happy to provide additional info as needed, but, I don't think I changed very much from stock.

Thanks!
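For context, UseParNewGC was removed from the JVM after Java 8, so the GC flags baked into the startup options fail on a jre11 image. A workaround sketch, assuming the chart exposes a javaOpts-style value for overriding the JVM flags (the javaOpts key name is an assumption here; check the chart's values for the exact name):

```yaml
graylog:
  image:
    repository: "graylog/graylog:4.2.1-1-jre11"
  # Drop the Java 8-only CMS/ParNew flags in favor of G1, which JRE 11 accepts.
  javaOpts: "-XX:+UseG1GC"
```

A proper fix would be for the chart to stop hardcoding Java 8-only GC flags when a JRE 11 image is in use.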

[graylog] TLS in "external" Ingress breaks selflinks

Using TLS in the Ingress requires us to be able to use https in selflinks, which is easily done by setting externalUri. However, the only way to make selflinks use https is to enable internal TLS; otherwise the scheme is hardcoded to http.

{{- if $env.Values.graylog.tls.enabled }}
{{- printf "https://%s" $url }}
{{- else }}
{{- printf "http://%s" $url }}
{{- end -}}

Enabling the tls setting requires us to pass key and cert files, which isn't what we want when moving TLS responsibility to the Ingress.

{{- if .Values.graylog.tls.enabled }}
http_enable_tls = true
http_tls_cert_file = {{ .Values.graylog.tls.certFile }}
http_tls_key_file = {{ .Values.graylog.tls.keyFile }}
{{- end }}

This forces us to add ingress definitions to the Graylog chart values, making an external definition hard (requiring manual editing of the ConfigMap). For more fine-grained functionality, like serving Graylog on a subpage of a root address (e.g. https://example.com/graylog/), which requires us to redefine the main ingress path, this becomes a problem.

Suggestion:
Require setting externalUri.protocol: http | https for a declarative approach with a foreseeable effect.
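A rough sketch of how the suggested value could slot into the template quoted above (externalUri.protocol is the proposal here, not an existing chart value):

```yaml
{{- if eq $env.Values.graylog.externalUri.protocol "https" }}
{{- printf "https://%s" $url }}
{{- else }}
{{- printf "http://%s" $url }}
{{- end -}}
```

This would decouple the selflink scheme from graylog.tls.enabled, so TLS termination can live entirely in the Ingress.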

MongoDB frequent restarts

Hello.

Sorry to disturb you, maybe someone can help me.
I have deployed the Graylog chart with an external MongoDB, using the replicaset architecture.
The problem I have is that MongoDB ends up using a lot of resources, making the whole K8s cluster unstable. Given that we are limited in that regard, I have chosen to limit the resources consumed by Mongo. Now I get frequent restarts (5 per day, every time it tries to go over the limit, set to 256Mi).
Is there any way of fixing that? It seems strange, as I only have one replica of Graylog and one of MongoDB running, and it should be pretty light on resources.
Would letting the chart deploy Mongo itself be better? Maybe it has something to do with graylog.persistence.enabled=false (I'm fuzzy on this one, i.e. which data is not persisted, as I have both Mongo and Elasticsearch persisted separately)?

The command used for Graylog deployment:
helm upgrade --install graylog kongz/graylog --namespace test --set graylog.replicas=1 --set graylog.persistence.enabled=false --set graylog.image.repository=graylog/graylog:3.3 --set tags.install-elasticsearch=false --set tags.install-mongodb=false --set graylog.elasticsearch.hosts=http://elasticsearch-master.test.svc.cluster.local:9200 --set graylog.mongodb.uri=mongodb://mongodb-mongodb-replicaset.test.svc.cluster.local:27017/graylog?replicaSet=rs0 --set graylog.ingress.enabled=true --set graylog.ingress.port=9000 --set graylog.ingress.hosts.0=graylogtest.org --set graylog.externalUri=graylogtest.org --set graylog.input.tcp.service.type=ClusterIP --set graylog.input.tcp.ports[0].name=gelf1 --set graylog.input.tcp.ports[0].port=12201

Thanks for the help!
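If the subchart-managed MongoDB were used instead of the external one, resource requests and limits could be passed through its values; a sketch assuming the bitnami mongodb subchart shown in values elsewhere in these issues (the key names follow that chart, not this one, and the numbers are illustrative):

```yaml
mongodb:
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
    limits:
      memory: 512Mi
```

A 256Mi hard limit is tight for MongoDB's default WiredTiger cache sizing, which may explain the repeated OOM-style restarts; raising the limit or capping the cache is usually the first thing to try.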

Dynamic provisioning for standalone

Hi everyone,

I'm trying to use this chart on K8s with NFS dynamic provisioning for PVs.
I found that dynamic PV provisioning works fine for Graylog itself, but what about the Elasticsearch and MongoDB instances built into the chart?
Do they have values to configure the storageClass, like Graylog does?
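For the subcharts, the storage class is typically set through their own values, passed via the parent chart; a sketch assuming the stable/elasticsearch and bitnami mongodb subcharts referenced elsewhere in these issues (the exact key names depend on the subchart versions in use, and the nfs-client class name is illustrative):

```yaml
elasticsearch:
  master:
    persistentVolume:
      storageClass: nfs-client
  data:
    persistentVolume:
      storageClass: nfs-client
mongodb:
  persistence:
    storageClass: nfs-client
```

Running helm show values on each subchart is the reliable way to confirm the key paths.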
