
charts's Introduction

The Bitnami Library for Kubernetes

Popular applications, provided by Bitnami, ready to launch on Kubernetes using Kubernetes Helm.

Looking to use our applications in production? Try VMware Tanzu Application Catalog, the enterprise edition of Bitnami Application Catalog.

TL;DR

helm install my-release oci://registry-1.docker.io/bitnamicharts/<chart>

Vulnerabilities scanner

Each Helm chart contains one or more containers. Those containers use images provided by Bitnami through its test & release pipeline and whose source code can be found at bitnami/containers.

As part of the container releases, the images are scanned for vulnerabilities; you can find more information about this topic here.

Since the container image is an immutable artifact that has already been analyzed, the Helm chart release process does not scan the containers for vulnerabilities again; instead, it runs different verifications to ensure the Helm charts work as expected. See the testing strategy defined in TESTING.md.

Before you begin

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+

Setup a Kubernetes Cluster

The quickest way to set up a Kubernetes cluster to install Bitnami Charts is by following the "Bitnami Get Started" guides for the different services.

For setting up Kubernetes on other cloud platforms or bare-metal servers refer to the Kubernetes getting started guide.

Install Helm

Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.

To install Helm, refer to the Helm install guide and ensure that the helm binary is in the PATH of your shell.

Using Helm

Once you have installed the Helm client, you can deploy a Bitnami Helm Chart into a Kubernetes cluster.

Please refer to the Quick Start guide if you wish to get running in just a few commands; otherwise, the Using Helm guide provides detailed instructions on how to use the Helm client to manage packages on your Kubernetes cluster.

Useful Helm Client Commands:

  • Install a chart: helm install my-release oci://registry-1.docker.io/bitnamicharts/<chart>
  • Upgrade your application: helm upgrade my-release oci://registry-1.docker.io/bitnamicharts/<chart>
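For example, a typical end-to-end session with a Bitnami chart might look like the following (the chart name nginx and the service.type value are illustrative; check the chart's own values.yaml for the keys it actually supports):

helm install my-release oci://registry-1.docker.io/bitnamicharts/nginx --set service.type=LoadBalancer
helm upgrade my-release oci://registry-1.docker.io/bitnamicharts/nginx --reuse-values
helm uninstall my-release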

License

Copyright © 2024 Broadcom. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


charts's Issues

MongoDB Chart fails on windows host

The MongoDB chart continues to crash and restart on a Windows 10 host with mongodb.persistence.enabled=true (with persistence disabled it works without issue).
Running on Docker for Windows version 18.06.1-ce-win73 (19507)
with Kubernetes enabled.

kubectl describe

kubectl describe pod dev-mongodb-6584dd75f5-vc9rp
Name:           dev-mongodb-6584dd75f5-vc9rp
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Wed, 26 Sep 2018 21:20:55 -0700
Labels:         app=mongodb
                pod-template-hash=2140883191
                release=dev
Annotations:    <none>
Status:         Running
IP:             10.1.0.62
Controlled By:  ReplicaSet/dev-mongodb-6584dd75f5
Containers:
  dev-mongodb:
    Container ID:   docker://4e8bdefbc5d8727d82bba976e5552ae4dd5b92e1bcee6c13c8f985aa12b5f1ab
    Image:          docker.io/bitnami/mongodb:3.6
    Image ID:       docker-pullable://bitnami/mongodb@sha256:a3b85168bcc94a329b96729683edd3ec731a8aac902c664ee6a3aeba0b5f5293
    Port:           27017/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    100
      Started:      Wed, 26 Sep 2018 21:21:50 -0700
      Finished:     Wed, 26 Sep 2018 21:21:59 -0700
    Ready:          False
    Restart Count:  1
    Liveness:       exec [mongo --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:      exec [mongo --eval db.adminCommand('ping')] delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      MONGODB_ROOT_PASSWORD:  <set to the key 'mongodb-root-password' in secret 'dev-mongodb'>  Optional: false
      MONGODB_USERNAME:
      MONGODB_DATABASE:
      MONGODB_ENABLE_IPV6:    yes
      MONGODB_EXTRA_FLAGS:    --smallfiles --logpath=/dev/null
    Mounts:
      /bitnami/mongodb from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-phmll (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  dev-mongodb
    ReadOnly:   false
  default-token-phmll:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-phmll
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age   From                         Message
  ----     ------                 ----  ----                         -------
  Normal   Scheduled              1m    default-scheduler            Successfully assigned dev-mongodb-6584dd75f5-vc9rp to docker-for-desktop
  Normal   SuccessfulMountVolume  1m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "pvc-b8423bfb-c20c-11e8-bb65-00155d016f01"
  Normal   SuccessfulMountVolume  1m    kubelet, docker-for-desktop  MountVolume.SetUp succeeded for volume "default-token-phmll"
  Warning  Unhealthy              44s   kubelet, docker-for-desktop  Readiness probe failed: MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
2018-09-27T04:21:23.949+0000 I NETWORK  [thread1] Socket recv() Connection reset by peer 127.0.0.1:27017
2018-09-27T04:21:24.042+0000 I NETWORK  [thread1] SocketException: remote: (NONE):0 error: SocketException socket exception [RECV_ERROR] server [127.0.0.1:27017]
2018-09-27T04:21:24.058+0000 E QUERY    [thread1] Error: network error while attempting to run command 'isMaster' on host '127.0.0.1:27017'  :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
  Warning  Unhealthy  34s  kubelet, docker-for-desktop  Readiness probe failed: MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
2018-09-27T04:21:33.631+0000 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2018-09-27T04:21:34.401+0000 E QUERY    [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
  Warning  Unhealthy  28s  kubelet, docker-for-desktop  Liveness probe failed: MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
2018-09-27T04:21:40.145+0000 W NETWORK  [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused
2018-09-27T04:21:40.145+0000 E QUERY    [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed

mongodb logs

Welcome to the Bitnami mongodb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
nami    INFO  Initializing mongodb
mongodb INFO  ==> Deploying MongoDB with persisted data...
mongodb INFO  ==> No injected configuration files found. Creating default config files...
mongodb INFO
mongodb INFO  ########################################################################
mongodb INFO   Installation parameters for mongodb:
mongodb INFO     Persisted data and properties have been restored.
mongodb INFO     Any input specified will not take effect.
mongodb INFO   This installation requires no credentials.
mongodb INFO  ########################################################################
mongodb INFO
nami    INFO  mongodb successfully initialized
INFO  ==> Starting mongodb...
INFO  ==> Starting mongod...
2018-09-27T04:34:00.223+0000 I CONTROL  [initandlisten] MongoDB starting : pid=34 port=27017 dbpath=/opt/bitnami/mongodb/data/db 64-bit host=dev-mongodb-7b9d7bbdd9-bglkr
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] db version v3.6.8
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] git version: 6bc9ed599c3fa164703346a22bad17e33fa913e4
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.0f  25 May 2017
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] modules: none
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] build environment:
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten]     distmod: debian92
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten]     distarch: x86_64
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2018-09-27T04:34:00.224+0000 I CONTROL  [initandlisten] options: { config: "/opt/bitnami/mongodb/conf/mongodb.conf", net: { bindIpAll: true, ipv6: true, port: 27017, unixDomainSocket: { enabled: true, pathPrefix: "/opt/bitnami/mongodb/tmp" } }, processManagement: { fork: false, pidFilePath: "/opt/bitnami/mongodb/tmp/mongodb.pid" }, security: { authorization: "disabled" }, setParameter: { enableLocalhostAuthBypass: "true" }, storage: { dbPath: "/opt/bitnami/mongodb/data/db", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, logRotate: "reopen", path: true } }
2018-09-27T04:34:00.234+0000 I -        [initandlisten] Detected data files in /opt/bitnami/mongodb/data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2018-09-27T04:34:00.237+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=478M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),cache_cursors=false,compatibility=(release="3.0",require_max="3.0"),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2018-09-27T04:34:00.732+0000 E STORAGE  [initandlisten] WiredTiger error (1) [1538022840:732507][34:0x7fdef0bb8580], file:WiredTiger.wt, connection: /opt/bitnami/mongodb/data/db/WiredTiger.wt: handle-open: open: Operation not permitted
2018-09-27T04:34:00.733+0000 E -        [initandlisten] Assertion: 28595:1: Operation not permitted src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 421
2018-09-27T04:34:00.856+0000 I STORAGE  [initandlisten] exception in initAndListen: Location28595: 1: Operation not permitted, terminating
2018-09-27T04:34:00.857+0000 I NETWORK  [initandlisten] shutdown: going to close listening sockets...
2018-09-27T04:34:00.857+0000 I NETWORK  [initandlisten] removing socket file: /opt/bitnami/mongodb/tmp/mongodb-27017.sock
2018-09-27T04:34:00.857+0000 I CONTROL  [initandlisten] now exiting
2018-09-27T04:34:00.857+0000 I CONTROL  [initandlisten] shutting down with code:100

[bitnami/mysql] Slave data consistency

I just installed the latest chart version and discovered that, in a deployment of 1 master + 2 slaves, the slaves do not contain the same amount of data (I log in to the slave service with Adminer or phpMyAdmin). The same can be seen when entering the pods and listing the /bitnami/mysql/data directory.

The first slave has all the data; the second slave has only some of it.

It seems that data does not get replicated consistently to the slaves for some reason.
Can somebody test and confirm the issue? This is a really critical issue.

Some upstream charts are broken due to old templates

The logic we have that syncs upstream Bitnami charts (developed out of https://github.com/helm/charts) to the upstream folder in this repo appears to only sync new changes and not delete files that are no longer used.

I found that the Redis chart is affected by this, a helm install bitnami/redis will fail with the following error:

render error in \"redis/templates/svc.yaml\": template: redis/templates/svc.yaml:11:14: executing \"redis/templates/svc.yaml\" at \u003c.Values.service.anno...\u003e: can't evaluate field annotations in type interface {}

When looking for this template in the upstream repo, I found that templates/svc.yaml no longer exists (removed in https://github.com/helm/charts/pull/4662/files); however, the Redis chart in this repo still has the svc.yaml and deployment.yaml that were removed upstream (https://github.com/bitnami/charts/tree/master/upstreamed/redis/templates). This results in a currently broken chart.

We need to fix the sync logic to ensure we delete any files removed upstream (e.g. rsync with --delete option), and we need to manually resync every chart.
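A minimal sketch of the kind of sync step that would also remove files deleted upstream (the paths are illustrative, not the actual sync tooling):

rsync -av --delete upstream-checkout/stable/redis/ upstreamed/redis/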

Timeout when trying to connect to Rabbitmq service

Trying to connect using the Bunny client for Ruby:

> c = Bunny.new "amqp://user:bitnami@peeking-scorpion-rabbitm:15672"
> c.start
E, [2016-08-09T23:18:08.991953 #21] ERROR -- #<Bunny::Session:0x7fe7520d6a10 user@peeking-scorpion-rabbitm:15672, vhost=/, addresses=[peeking-scorpion-rabbitm:15672]>: Got an exception when receiving data: IO timeout when reading 7 bytes (Timeout::Error)

wordpress chart pod logs

The pod logs give me awesome information. These installation parameters, however, are different from the ones that I set. Should they match my input in values.yaml?

wordpre INFO  ########################################################################
wordpre INFO   Installation parameters for wordpress:
wordpre INFO     First Name: FirstName
wordpre INFO     Last Name: LastName
wordpre INFO     Username: user
wordpre INFO     Password: **********
wordpre INFO     Email: [email protected]
wordpre INFO     Blog Name: User's Blog!
wordpre INFO   (Passwords are not shown for security reasons)
wordpre INFO  ########################################################################

mariadb chart naming convention

$ helm install mariadb-cluster/
kindly-bobcat

$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gangly-gorilla-wildfly 10.227.243.59 146.148.61.7 80/TCP,9990/TCP 6m
hazy-anaconda-apache 10.227.244.60 104.197.103.69 80/TCP,443/TCP 16m
kindly-bobcat-master 10.227.247.187 3306/TCP 20s
kindly-bobcat-slave 10.227.255.50 3306/TCP 20s
kubernetes 10.227.240.1 443/TCP 1h

Names should reflect the application launched:
e.g. kindly-bobcat-mariadb-master, kindly-bobcat-mariadb-slave

Default values representation in values.yaml

We decided to comment out the optional keys in the values.yaml, but I do not think that this should affect the default values.

For example, in the code below it is not obvious that the commented-out jenkinsUser/jenkinsPassword lines contain the default values.

We should uncomment those lines, since they are the values that are actually applied to the installed chart.

Summarizing: commenting out optional values is fine, except when they act as default values that the user needs to know about in order to use the application.

## Bitnami Jenkins image version
## ref: https://hub.docker.com/r/bitnami/jenkins/tags/
##
imageTag: 2.17-r0

## Specify a imagePullPolicy
## Defaults to 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
# imagePullPolicy:

## User of the application
## ref: https://github.com/bitnami/bitnami-docker-jenkins#configuration
##
# jenkinsUser: user

## Application password
## ref: https://github.com/bitnami/bitnami-docker-jenkins#configuration
##
# jenkinsPassword: bitnami
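For comparison, a sketch of how the same snippet could look with the effective defaults left uncommented (using the defaults quoted above, not newly invented values):

## User of the application
## ref: https://github.com/bitnami/bitnami-docker-jenkins#configuration
##
jenkinsUser: user

## Application password
## ref: https://github.com/bitnami/bitnami-docker-jenkins#configuration
##
jenkinsPassword: bitnami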

MySQL and Elasticsearch Templates Conflict

The MySQL and Elasticsearch charts have conflicting templates. Both define a "master.fullname" template, but each passes a different number of arguments to printf. This is an issue if you're using the charts as subcharts, since elasticsearch's template expects 3 arguments to printf while mysql's expects only 2. The elasticsearch one seemed to take precedence for the chart I was building, yielding invalid names for the MySQL YAML files.

The templates should have namespaces (such as the PostgreSQL templates).
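A minimal sketch of what a namespaced helper could look like, assuming the usual _helpers.tpl layout (the exact name format is illustrative):

{{- define "mysql.master.fullname" -}}
{{- printf "%s-%s" .Release.Name "mysql-master" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

With chart-prefixed names such as mysql.master.fullname and elasticsearch.master.fullname, the two subcharts can no longer clobber each other's definitions.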

Pod fails to start when elasticsearch_custom.yml is mounted with read-only permissions

ConfigMap volumes have been read-only since Kubernetes 1.9.4.
kubernetes/kubernetes#58720

The Pod then outputs the following error and exits.

Welcome to the Bitnami elasticsearch container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-elasticsearch
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-elasticsearch/issues
Send us your feedback at [email protected]

nami    INFO  Initializing elasticsearch
Error executing 'postInstallation': EROFS: read-only file system, chown '/bitnami/elasticsearch/conf/elasticsearch_custom.yml'

Is there any solution to this?

gh-pages for bitnami incubator chart repository

It would be great if the incubator charts were available as a maintained public repository that users could add as a dependency. I think it would help increase adoption and move them to stable status. Here are the steps to do it.

For Maintainers

Setup gh-pages

Add gh-pages branch

git clone --depth=1 git@github.com:bitnami/charts.git
cd charts
git branch gh-pages && git checkout gh-pages
rm -fr * && rm .gitignore
git commit -am 'initial gh-pages commit' && git push --set-upstream origin gh-pages

From within GitHub's settings, find the section on "GitHub Pages". From the source drop-down, select gh-pages and save.

To Package

For each chart:

git clone --depth=1 https://github.com/bitnami/charts.git --branch gh-pages tmp_chart
helm package -d ./tmp_chart/ . 
helm repo index tmp_chart --url https://bitnami.github.io/charts
git -C tmp_chart add *.tgz index.yaml &&  git -C tmp_chart commit -am 'new package xyz' && git -C tmp_chart push
rm -fr tmp_chart

For Users

Add Bitnami Repo

First add the repo:

helm repo add bitnami-incubator https://bitnami.github.io/charts

Add Dependency

Create requirements.yaml file with the following content:

#requirements.yaml
dependencies:
- name: tomcat
  version: 0.4.11
  repository: https://bitnami.github.io/charts
  alias: bitnami-tomcat

Override values

At the root level of values.yaml, add the alias from the dependency (bitnami-tomcat) as a node.

#values.yaml
[...]
bitnami-tomcat:
  tomcatUsername: override-user
[...]

mysql chart: crashloopbackoff after node restart

I installed the mysql chart in minikube (win64) with the following YAML settings:

#bitnami mysql:
service:
  type: NodePort
  port: 3306
master:
  persistence:
    enabled: false
slave:
  persistence:
    enabled: false

Chart starts up fine.

I stop and start minikube, and the chart doesn't run anymore. Both master and slave pods are getting a CrashLoopBackOff (Terminated: Error).

The master pod has the following in the log:

Welcome to the Bitnami mysql container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mysql
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mysql/issues
nami    INFO  Initializing mysql
mysql   INFO
mysql   INFO  ########################################################################
mysql   INFO   Installation parameters for mysql:
mysql   INFO     Persisted data and properties have been restored.
mysql   INFO     Any input specified will not take effect.
mysql   INFO   This installation requires no credentials.
mysql   INFO  ########################################################################
mysql   INFO
nami    INFO  mysql successfully initialized
INFO  ==> Starting mysql...
INFO  ==> Starting mysqld_safe...
2018-08-03T13:27:55.114541Z mysqld_safe error: log-error set to '/opt/bitnami/mysql/logs/mysqld.log', however file don't exists. Create writable for user 'mysql'.

I think the root password is also different from the one on the previous run. That feels like strange behavior, but I don't know if it's related to the crashing issue.

Wildfly is broken in helm-2.0 branch

$ helm install wildfly
Error: parse error in "wildfly/templates/wildfly-secrets.yaml": template: wildfly/templates/wildfly-secrets.yaml:11: unexpected "{" in command

Fix below.

$ git diff
diff --git a/wildfly/templates/wildfly-secrets.yaml b/wildfly/templates/wildfly-secrets.yaml
index 2223e8b..a369f9a 100644
--- a/wildfly/templates/wildfly-secrets.yaml
+++ b/wildfly/templates/wildfly-secrets.yaml
@@ -8,4 +8,4 @@ metadata:
heritage: bitnami
type: Opaque
data:

-  wildfly-password: {{ {{ default "" .Values.wildflyPassword | b64enc | quote }}
+  wildfly-password: {{ default "" .Values.wildflyPassword | b64enc | quote }}

consul ui ingress resource only works if set to /

I tried many combinations of annotations, but basically consul-ui always redirects to /ui/, so the only way I have found to get the ingress to work is to match path / (it won't work, for example, if you make it /consul/).

This is due to a long-standing issue with Consul; it seems it was fixed but broke again with the new UI - hashicorp/consul#1930

It might also be possible to work around this by adding a custom config snippet (https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#configuration-snippet), but I don't know enough about nginx to suggest a solution.
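For reference, the only shape of Ingress reported to work here is a plain / path; a minimal sketch follows (host, service name and port are illustrative, and the API version reflects clusters of that era):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: consul-ui
spec:
  rules:
  - host: consul.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-release-consul-ui
          servicePort: 8500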

Posting in phpbb broken

$ helm install phpbb/
Then login, select "Your first forum"
Then "New Topic"
Subject: Test post
Submit

User is prompted to login again, cannot post.

Can't connect to mongo in helm-2.0

$ kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
enervated-koala-mongodb 10.227.250.19 27017/TCP 3m

$ kubectl run ouch --tty -i --rm --image bitnami/mongodb --command -- /bin/bash

mongo 10.227.250.19

MongoDB shell version: 3.2.7
connecting to: 10.227.250.19/test

2016-08-09T22:40:37.598+0000 W NETWORK [thread1] Failed to connect to 10.227.250.19:27017 after 5000 milliseconds, giving up.
2016-08-09T22:40:37.598+0000 E QUERY [thread1] Error: couldn't connect to server 10.227.250.19:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:231:14
@(connect):1:6

exception: connect failed

[bitnami/postgresql] Config parameter persistence.accessMode is ignored in Helm chart

Hi,

I tried to install bitnami/postgresql on Azure using the helm charts:

helm install --name bitnami-postgres \
  --set persistence.storageClass=azurefile,persistence.accessMode=ReadWriteMany \
    bitnami/postgresql

The PVC is created with the correct storage class, but with accessMode ReadWriteOnce. Therefore the pods crash with a permission error:

EACCES permission denied, mkdir "/bitnami/postgresql/data"

Is this parameter ignored?

Thanks for your help in advance!

Cheers

wordpress chart blog name isn't being set

I set the wordpressBlogName to "Michelle's Blog". It gets set in the manifest after the values have been computed; however, the actual blog site is still titled "User's Blog" and I was expecting it to be "Michelle's Blog".

[bitnami/postgres] svc name truncates to 24

can someone confirm if this is still necessary?

Deploying the chart as a subchart means that the svc name gets completely mangled: it picks up the release name, appends "postgres", and then truncates to 24 characters.

e.g.
hhs-feature-devops-64-en when it was supposed to be hhs-feature-devops-64-env-postgres.

The only workaround I have (which is a horrible one) is that all my charts have to implement the same truncation.

original issue: kubernetes/kubernetes#25041
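For reference, the kind of truncation helper the parent charts end up duplicating looks roughly like this (a sketch of the 24-character behavior described above, not the chart's actual template):

{{- define "postgresql.fullname" -}}
{{- printf "%s-%s" .Release.Name "postgres" | trunc 24 | trimSuffix "-" -}}
{{- end -}}

With a release name of hhs-feature-devops-64-env, this yields exactly the hhs-feature-devops-64-en name shown above.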

is our `index.yaml` correct?

It seems to me the index.yaml is incorrect, with a wrong urls field. It lacks the actual URL and only references the tarball.

wget https://charts.bitnami.com/incubator/index.yaml
more index.yaml 
apiVersion: v1
entries:
  apache:
  - created: 2017-11-30T18:40:04.582855774Z
    description: Chart for Apache HTTP Server
    digest: 190a7bfc84d8b5bcee1ae76f6fb12226fda1d7bcefb171989a909beb67271c42
    engine: gotpl
    home: https://httpd.apache.org
    keywords:
    - apache
    - http
    - https
    - www
    - web
    - reverse proxy
    maintainers:
    - email: [email protected]
      name: Bitnami
    name: apache
    sources:
    - https://github.com/bitnami/bitnami-docker-apache
    urls:
    - apache-0.3.7.tgz
    version: 0.3.7

You end up being able to search the index, but you cannot install the charts.

cc/ @sameersbn @prydonius

Node chart: refactor to initialize through init-containers

Right now, the node chart uses init-containers to just clone the repo, and then it does the rest of the initialization during the final container boot.

The best approach would be to move all initialization steps into init-containers, letting the final container boot by executing only one command: npm start
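A rough sketch of the shape this refactor could take in the deployment spec (the image names, repository URL and volume names are illustrative, not the chart's actual manifest):

initContainers:
  - name: clone-repo
    image: bitnami/git
    command: ['git', 'clone', 'https://github.com/example/app.git', '/app']
    volumeMounts:
      - name: app
        mountPath: /app
  - name: install-deps
    image: bitnami/node
    command: ['npm', 'install', '--prefix', '/app']
    volumeMounts:
      - name: app
        mountPath: /app
containers:
  - name: node
    image: bitnami/node
    command: ['npm', 'start', '--prefix', '/app']
    volumeMounts:
      - name: app
        mountPath: /app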

PVC not working on Minikube 0.9.0

Trying to run the Redmine chart, which comes with persistence enabled by default, I get:


kubectl get pods
NAME                                   READY     STATUS    RESTARTS   AGE
giddy-skunk-mariadb-4195510591-cwdid   0/1       Pending   0          31s
giddy-skunk-redmine-3930201193-68jv7   0/1       Pending   0          31s
migmartri ~/work/bitnami/charts/redmine master $ kubectl get pvc
NAME                  STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
giddy-skunk-mariadb   Pending                                      3m
giddy-skunk-redmine   Pending                                      3m

Provisioner plugin not found.

migmartri ~/work/bitnami/charts/redmine master $ kubectl describe pvc giddy-skunk-mariadb
Name:       giddy-skunk-mariadb
Namespace:  default
Status:     Pending
Volume:     
Labels:     <none>
Capacity:   
Access Modes:   
Events:
  FirstSeen LastSeen    Count   From                SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----                -------------   --------    ------          -------
  3m        11s     16  {persistentvolume-controller }          Warning     ProvisioningFailed  No provisioner plugin found for the claim!

And this is the error message shown in the Pod

kubectl describe pods giddy-skunk-redmine-3930201193-68jv7
Name:       giddy-skunk-redmine-3930201193-68jv7
Namespace:  default
Node:       /
Labels:     app=giddy-skunk-redmine
        chart=redmine-redmine
        heritage=Tiller
        pod-template-hash=3930201193
        release=giddy-skunk
Status:     Pending
IP:     
Controllers:    ReplicaSet/giddy-skunk-redmine-3930201193
Containers:
  giddy-skunk-redmine:
    Image:  bitnami/redmine:3.3.0-r2
...
...
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason          Message
  --------- --------    -----   ----            -------------   --------    ------          -------
  6m        55s     22  {default-scheduler }            Warning     FailedScheduling    PersistentVolumeClaim is not bound: "giddy-skunk-redmine"

Using Minikube 0.9.0 in a brand new VM and cluster.

minikube version
minikube version: v0.9.0
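Until dynamic provisioning is available in that Minikube version, one workaround is to pre-create hostPath PersistentVolumes so the pending claims can bind (a sketch; the name, size and path are illustrative, and one volume is needed per claim):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redmine-data
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/redmine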

MySQL chart existing volume claim flag missing

I just noticed that even though the end of the README says an existing volume claim can be used, I do not see a master.persistence.existingClaim option the way there is in the MariaDB chart. Is this an issue with the chart description, or has this functionality not been implemented yet?
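For reference, the MariaDB-style usage being asked about would look something like this in values.yaml (the key mirrors what the README implies; per this issue it may not actually be implemented in the MySQL chart yet):

master:
  persistence:
    enabled: true
    existingClaim: my-existing-pvc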

tomcat missing help files (host manager)

Description: When clicking on this link: http:///docs/html-host-manager-howto.html?org.apache.catalina.filters.CSRF_NONCE=F3DAEEADB91D1517391ECF6F8ECD0782 the resource is not available (404).

Steps to Reproduce:

  1. Login to Tomcat through External-IP
  2. Click on "Host Manager"
  3. Click on "HTML Host Manager Help (TODO)" or "Host Manager Help (TODO)"
    See error

[bitnami/postgresql] feature: override configmaps

Use case

There is a chart that requires bitnami/postgres as a dependency. I would like to pass in initialization scripts and override postgresql.conf. The directory structure is:

mychart
|- charts
|-- postgresql-2.1.0.tgz 

Problem

The only way I found to do this was to clone the chart into the charts folder and insert the files directly:

mychart
|- charts
|-- postgresql
|--- files
|---- postgresql.conf
|---- docker-entrypoint-initdb.d
|----- setup.sql

This, however, breaks the Helm workflow for chart management.

Solution

A potential solution is to override the init configmaps with my own, which would include files/templates taken from the top level. Thoughts?
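One way this could surface from the parent chart, assuming the postgresql chart grew hypothetical values for pointing at user-supplied ConfigMaps (these key names are illustrative, not the chart's current API):

#values.yaml
postgresql:
  existingConfigMap: mychart-postgresql-conf
  initdbScriptsConfigMap: mychart-postgresql-initdb

The parent chart would then ship those ConfigMaps itself, templating them from files kept at the top level, which keeps the dependency as a plain .tgz.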

End to end chart example

As a user I want to deploy a Helm chart that will

  1. Provision a Service broker instance
  2. Deploy an application
  3. Deploy a kubeless function

And link all of the deployed components together

Kafka chart describes Postgres

"Kafka is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance."

I think this headline is more appropriate for Postgres?

ETCD statefulset failing to scale - error #0: dial tcp: lookup lumpy-tarsier-etcd-0 on 10.96.0.10:53: server misbehaving

When trying to scale the statefulset from 1 to 2, the container log reports:

==> The ID of the host is 1
==> Creating data dir...
==> Adding member to existing cluster.
==> Adding new member
client: etcd cluster is unavailable or misconfigured; error #0: dial tcp: lookup lumpy-tarsier-etcd-0 on 10.96.0.10:53: server misbehaving

10.96.0.10:53 is my Kube DNS server

When I do an nslookup with lumpy-tarsier-etcd-0 it doesn't resolve but when I do an nslookup with lumpy-tarsier-etcd-0.lumpy-tarsier-etcd-headless.default.svc.cluster.local it works fine.

Using sprig functions in templates

I am super excited to see this repository!

Have you guys looked at the Sprig functions that are exposed to the template engine? I was looking at the RedMine chart, and thinking it might make sense to do something like {{ default 22 .smtpPort}} or things like that.

I think Helm might be a version or so back from the Sprig head. But most of the functions are the same between versions.

Thanks!
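As a concrete illustration, an environment variable in a template could fall back to a default when the value is unset (written against the .Values convention; the variable and value names are illustrative):

- name: SMTP_PORT
  value: {{ default 22 .Values.smtpPort | quote }}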

Kafka inter-node communication breaks in spite of providing keystore & truststore files.

I'm trying to use Kafka with SSL enabled and generated the truststore and keystore files using my Kubernetes domain names (kafka-{0,1,2,3,4}.kafka-headless.namespace.svc.cluster.local) as subject alternative names. However, the inter-broker communication fails with the following exception.

org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
	at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1529)
	at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535)
	at sun.security.ssl.SSLEngineImpl.writeAppRecord(SSLEngineImpl.java:1214)
	at sun.security.ssl.SSLEngineImpl.wrap(SSLEngineImpl.java:1186)
	at javax.net.ssl.SSLEngine.wrap(SSLEngine.java:469)
	at org.apache.kafka.common.network.SslTransportLayer.handshakeWrap(SslTransportLayer.java:439)
	at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:304)
	at org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:258)
	at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:125)
	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:487)
	at org.apache.kafka.common.network.Selector.poll(Selector.java:425)
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
	at java.lang.Thread.run(Thread.java:748)
Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem
	at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
	at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1728)
	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:330)
	at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:322)
	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1614)
	at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
	at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1052)
	at sun.security.ssl.Handshaker$1.run(Handshaker.java:992)
	at sun.security.ssl.Handshaker$1.run(Handshaker.java:989)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1467)
	at org.apache.kafka.common.network.SslTransportLayer.runDelegatedTasks(SslTransportLayer.java:393)
	at org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:473)
	at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:331)
	... 8 more
Caused by: java.security.cert.CertificateException: No name matching kafka-kafka-2.kafka-kafka-headless.kafka.svc.cluster.local found
	at sun.security.util.HostnameChecker.matchDNS(HostnameChecker.java:231)
	at sun.security.util.HostnameChecker.match(HostnameChecker.java:96)
	at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455)
	at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436)
	at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252)
	at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136)
	at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1601)
	... 17 more
[2018-09-04 16:46:34,085] ERROR [Producer clientId=console-producer] Connection to node -1 failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2018-09-04 16:46:34,192] ERROR [Producer clientId=console-producer] Connection to node -1 failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
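For comparison, a keystore whose certificate carries the broker FQDNs as subject alternative names could be generated roughly like this (the alias, distinguished name and exact DNS entries are illustrative; note that the exception above names kafka-kafka-2.kafka-kafka-headless.kafka.svc.cluster.local, which differs from the hostnames listed in the report):

keytool -genkeypair -alias kafka -keyalg RSA -keystore kafka.keystore.jks \
  -dname "CN=kafka" \
  -ext SAN=DNS:kafka-kafka-0.kafka-kafka-headless.kafka.svc.cluster.local,DNS:kafka-kafka-1.kafka-kafka-headless.kafka.svc.cluster.local,DNS:kafka-kafka-2.kafka-kafka-headless.kafka.svc.cluster.local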

Zookeeper StatefulSets do not work with a custom namespace.

Zookeeper StatefulSets do not work with custom namespaces; service discovery fails.

In statefulset.yaml, the following needs to be rectified (the current value is shown first, the corrected value second):

  • name: Zookeeper
    value: zk-zookeeper-0.zk-zookeeper-headless.default.svc.cluster.local:2888:3888

  • name: Zookeeper
    value: zk-zookeeper-0.zk-zookeeper-headless.my_namespace.svc.cluster.local:2888:3888

This might also be breaking Kafka when installing with a custom namespace.
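One way to make this namespace-agnostic is to template the namespace with the standard Helm built-in instead of hard-coding default (a sketch based on the value quoted above):

- name: Zookeeper
  value: "zk-zookeeper-0.zk-zookeeper-headless.{{ .Release.Namespace }}.svc.cluster.local:2888:3888"

In the real template the zk-zookeeper prefix would itself come from the chart's fullname helper rather than being hard-coded.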

Kafka

Hi, is there a provision to change the number of ZooKeeper nodes?

etcd: endpoints error

ETCDCTL_ENDPOINTS="{{ $etcdClientProtocol }}://{{ $etcdFullname }}-0.{{ $etcdHeadlessServiceName }}.default.svc.cluster.local:{{ $clientPort }}

should be

ETCDCTL_ENDPOINTS="{{ $etcdClientProtocol }}://{{ $etcdFullname }}-0.{{ $etcdHeadlessServiceName }}.{{ .Release.Namespace }}.svc.cluster.local:.......

Drupal outdated

Running the current version of Drupal I get:

"There is a security update available for your version of Drupal. To ensure the security of your server, you should update immediately! See the available updates page for more information and to install your missing updates."
