jp-gouin / helm-openldap
Helm chart of OpenLDAP in high availability with multi-master replication, PhpLdapAdmin and Ltb-Passwd
License: Apache License 2.0
Add backup & restore capabilities with a cronjob
If I am reading the chart correctly, the StatefulSet appears to set the certificate directory described in the Docker image to the data volume.
However, I am having difficulty finding where the TLS and CA secrets described in the values.yaml get copied into the data volume.
Where do the TLS and CA secrets get copied into the data volume?
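For context, a hedged sketch of the customTLS values the question refers to (the same block appears in values pasted further down this page); the secret name and its expected keys are assumptions, not verified against the chart templates:

customTLS:
  enabled: true
  secret: "openldap-tls"   # assumed to hold tls.crt, tls.key and ca.crt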
Is your feature request related to a problem? Please describe.
When using this Helm chart to set up a multi-master replication LDAP setup, data is synced at intervals according to replication.interval. When using this setup in combination with other tooling (e.g. Keycloak), written data is expected to be instantly available, which cannot be guaranteed because the write and the read query do not have to go to the same pod.
Describe the solution you'd like
According to this post it seems possible to configure Session Affinity.
Describe alternatives you've considered
Running LDAP with a single replica, which defeats the purpose of the replication functionality.
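For illustration, the chart's service block already exposes a sessionAffinity value (see the default values quoted in a later issue on this page); switching it to ClientIP is the standard Kubernetes way to pin a given client to one pod. A minimal sketch, assuming the chart passes the value straight through to the Service:

service:
  type: ClusterIP
  sessionAffinity: ClientIP   # standard Kubernetes Service field: route all connections from one client IP to the same pod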
Hello,
I want to use Longhorn for persistent data, but I can't find where to set existingClaim.
Am I missing something?
Add a value to the chart to make it possible to prioritize the restart of openldap:
https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
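A rough sketch of what this could look like, assuming a hypothetical priorityClassName chart value (not currently in values.yaml) plus a standard PriorityClass object:

# Standard Kubernetes PriorityClass (scheduling.k8s.io/v1)
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: openldap-critical
value: 1000000
globalDefault: false
description: "Schedule the OpenLDAP pods ahead of less critical workloads"
---
# Hypothetical chart value the feature request asks for
priorityClassName: "openldap-critical"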
Describe the bug
After deploying with the provided Helm chart, the two OpenLDAP pods (openldap-0 and openldap-1) fail with the stated error.
The values.yaml file used:
global:
  imageRegistry: ""
  imagePullSecrets: []
  storageClass: "longhorn"
  ldapDomain: "{{ traefik_domain }}"
  adminPassword: Not@SecurePassw0rd
  configPassword: Not@SecurePassw0rd
  clusterDomain: "{{ traefik_domain }}"
image:
  repository: osixia/openldap
  tag: 1.5.0
  pullPolicy: Always
  pullSecrets: []
logLevel: debug
customTLS:
  enabled: false
service:
  annotations: {}
  ldapPort: 389
  sslLdapPort: 636
  externalIPs: []
  type: ClusterIP
  sessionAffinity: None
env:
  LDAP_LOG_LEVEL: "256"
  LDAP_ORGANISATION: "Moerman"
  LDAP_READONLY_USER: "false"
  LDAP_READONLY_USER_USERNAME: "readonly"
  LDAP_READONLY_USER_PASSWORD: "readonly"
  LDAP_RFC2307BIS_SCHEMA: "false"
  LDAP_BACKEND: "mdb"
  LDAP_TLS: "true"
  LDAP_TLS_CRT_FILENAME: "tls.crt"
  LDAP_TLS_KEY_FILENAME: "tls.key"
  LDAP_TLS_DH_PARAM_FILENAME: "dhparam.pem"
  LDAP_TLS_CA_CRT_FILENAME: "ca.crt"
  LDAP_TLS_ENFORCE: "false"
  LDAP_TLS_REQCERT: "never"
  KEEP_EXISTING_CONFIG: "false"
  LDAP_REMOVE_CONFIG_AFTER_SETUP: "true"
  LDAP_SSL_HELPER_PREFIX: "ldap"
  LDAP_TLS_VERIFY_CLIENT: "never"
  LDAP_TLS_PROTOCOL_MIN: "3.0"
  LDAP_TLS_CIPHER_SUITE: "NORMAL"
pdb:
  enabled: false
  minAvailable: 1
  maxUnavailable: ""
customFileSets: []
replication:
  enabled: true
  clusterName: "{{ traefik_domain }}"
  retry: 60
  timeout: 1
  interval: 00:00:00:10
  starttls: "critical"
  tls_reqcert: "never"
persistence:
  enabled: true
  storageClass: "longhorn"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
podSecurityContext:
  enabled: true
  fsGroup: 1001
containerSecurityContext:
  enabled: false
  runAsUser: 1001
  runAsNonRoot: true
serviceAccount:
  create: true
  name: ""
volumePermissions:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: 10-debian-10
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  command: {}
  resources:
    limits: {}
    requests: {}
  containerSecurityContext:
    runAsUser: 0
To Reproduce
Steps to reproduce the behavior:
Is your feature request related to a problem? Please describe.
When I try to use the latest docker image docker.io/bitnami/openldap for the installation, the pod fails to come up with an error in the log.
Describe the solution you'd like
Some comments in the README or in the values.yaml, or an example of how the installation needs to be configured to be able to use the Bitnami image.
Describe alternatives you've considered
Describe why the bitnami image can't be supported.
Additional context
The bitnami image seems to be based on a more recent openldap version than the osixia image.
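For illustration, a minimal sketch of the override being asked about; whether the chart's osixia-oriented env variables and startup logic work with this image is exactly the open question here:

image:
  repository: bitnami/openldap
  tag: latest            # pinning an explicit tag would be safer
  pullPolicy: IfNotPresent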
Hi,
is there a reason why affinity is missing/is not configurable?
BR
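For context, a sketch of the kind of value being asked for, assuming a hypothetical affinity key passed through to the StatefulSet; standard Kubernetes podAntiAffinity to spread the LDAP replicas across nodes, with an illustrative label selector:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: openldap   # illustrative; match whatever labels the chart sets on its pods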
Thanks for your work on this, basing it on a proven, solid Docker container.
However: it would be nice if you could add a GitHub workflow to automatically create a Helm repo and push new versions there.
Also: please update the README, because it still points to the old upstream setup instructions.
Describe the bug
When installing the chart with the default values, an error is shown when describing the openldap StatefulSet, which suggests a secret is missing, and no openldap pods are created.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Three ready and functional openldap pods resulting in a working deployment of OpenLDAP.
Screenshots
Warning FailedCreate 6s (x6 over 57s) statefulset-controller create Pod openldap-0 in StatefulSet openldap failed error: Pod "openldap-0" is invalid: [spec.volumes[1].secret.secretName: Required value, spec.initContainers[0].volumeMounts[1].name: Not found: "secret-certs"]
On deploy, servername is set as:
openldap-openldap-stack-ha.my-namespace.svc.cluster.local:389
How can I set servername to:
openldap.my-namespace.svc.cluster.local:389
see #14
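A hedged option, assuming the chart follows the common Helm fullnameOverride convention (not verified for this chart): overriding the full name shortens the generated service name.

fullnameOverride: "openldap"   # assumed convention; would yield openldap.my-namespace.svc.cluster.local:389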
I was working with a customized values.yaml and was running into an issue, so I tried plain defaults and received the same error:
*** INFO | 2021-05-05 19:43:14 | CONTAINER_LOG_LEVEL = 3 (info)
*** INFO | 2021-05-05 19:43:14 | Search service in CONTAINER_SERVICE_DIR = /container/service :
*** INFO | 2021-05-05 19:43:14 | link /container/service/:ssl-tools/startup.sh to /container/run/startup/:ssl-tools
*** INFO | 2021-05-05 19:43:14 | link /container/service/slapd/startup.sh to /container/run/startup/slapd
*** INFO | 2021-05-05 19:43:14 | link /container/service/slapd/process.sh to /container/run/process/slapd/run
*** INFO | 2021-05-05 19:43:14 | Environment files will be proccessed in this order :
Caution: previously defined variables will not be overriden.
/container/environment/99-default/default.startup.yaml
/container/environment/99-default/default.yaml
To see how this files are processed and environment variables values,
run this container with '--loglevel debug'
*** INFO | 2021-05-05 19:43:14 | Running /container/run/startup/:ssl-tools...
*** INFO | 2021-05-05 19:43:14 | Running /container/run/startup/slapd...
*** INFO | 2021-05-05 19:43:14 | openldap user and group adjustments
*** INFO | 2021-05-05 19:43:14 | get current openldap uid/gid info inside container
*** INFO | 2021-05-05 19:43:14 | -------------------------------------
*** INFO | 2021-05-05 19:43:14 | openldap GID/UID
*** INFO | 2021-05-05 19:43:14 | -------------------------------------
*** INFO | 2021-05-05 19:43:14 | User uid: 911
*** INFO | 2021-05-05 19:43:14 | User gid: 911
*** INFO | 2021-05-05 19:43:14 | uid/gid changed: false
*** INFO | 2021-05-05 19:43:14 | -------------------------------------
*** INFO | 2021-05-05 19:43:14 | updating file uid/gid ownership
*** INFO | 2021-05-05 19:43:14 | No certificate file and certificate key provided, generate:
*** INFO | 2021-05-05 19:43:14 | /container/run/service/slapd/assets/certs/tls.crt and /container/run/service/slapd/assets/certs/tls.key
2021/05/05 19:43:14 [INFO] generate received request
2021/05/05 19:43:14 [INFO] received CSR
2021/05/05 19:43:14 [INFO] generating key: ecdsa-384
2021/05/05 19:43:14 [INFO] encoded CSR
2021/05/05 19:43:14 [INFO] signed certificate with serial number 116630929021868969892101848881681016104120383985
mv: cannot move '/tmp/cert.pem' to '/container/run/service/slapd/assets/certs/tls.crt': No such file or directory
mv: cannot move '/tmp/cert-key.pem' to '/container/run/service/slapd/assets/certs/tls.key': No such file or directory
*** INFO | 2021-05-05 19:43:14 | Link /container/service/:ssl-tools/assets/default-ca/default-ca.pem to /container/run/service/slapd/assets/certs/ca.crt
ln: failed to create symbolic link '/container/run/service/slapd/assets/certs/ca.crt': No such file or directory
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
**** CERT GENERATES ***
*** WARNING | 2021-05-05 19:45:02 | An error occurred. Aborting.
*** INFO | 2021-05-05 19:45:02 | Shutting down /container/run/startup/slapd (PID 11)...
*** WARNING | 2021-05-05 19:45:02 | Init system aborted.
*** INFO | 2021-05-05 19:45:02 | Killing all processes...
Can be recreated by
helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
helm install openldap helm-openldap/openldap-stack-ha -n my-namespace
Hi,
thanks for this helm chart and the work which went into it!
We've been running phpLDAPadmin & LDAP containers for a while and would like to switch to Helm now.
We've read the instructions on how to get phpLDAPadmin working, but I cannot get the connection between phpldapadmin and ldap to work.
Here's how we do it currently:
set {
name = "phpldapadmin.env.PHPLDAPADMIN_LDAP_HOSTS"
value = "<namespace>.<subdomain>.example.org"
}
set {
name = "phpldapadmin.ingress.hosts[0]"
value = "<subdomain>.example.org"
}
set {
name = "adminPassword"
value = var.OPENLDAP_PASS
}
set {
name = "phpldapadmin.ingress.enabled"
value = true
}
We can reach the service via https://<subdomain>.example.org.
When logging in using the admin user (is that correct?) and the password which is passed as a secret, we see
Unable to connect to LDAP server openldap
--
Error: Can't contact LDAP server (-1) for user
So it seems something is still broken in the connection between the phpldapadmin pod and the ldap pods. Is there a way in k8s to check which value would be the correct one for PHPLDAPADMIN_LDAP_HOSTS? Or is there something else wrong in our setup?
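A hedged sketch of what is usually expected here: PHPLDAPADMIN_LDAP_HOSTS should point at the in-cluster LDAP service rather than the external domain. The exact service name depends on the release name and namespace; the pattern below follows the name shown in another issue on this page:

phpldapadmin:
  env:
    PHPLDAPADMIN_LDAP_HOSTS: "openldap-openldap-stack-ha.<namespace>.svc.cluster.local"   # assumed pattern: <release>-openldap-stack-ha.<namespace>.svc.cluster.local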
Hi,
Can you release the Helm chart with all the work you have done around the probes?
I'm currently facing some problems with the readinessProbe, which is a little bit too low...
Thank you for the work you do.
k8s 1.6.2:
helm install --name ldap helm-openldap/ --tls
Error: YAML parse error on openldap/charts/ltb-passwd/templates/ingress.yaml: error converting YAML to JSON: yaml: line 5: mapping values are not allowed in this context
I only changed storage-class and size in values.
Also setting both phpldapadmin and ltb-passwd to false:
Error: validation failed: unable to recognize "": no matches for kind "StatefulSet" in version "apps/v1beta2" ( It's apps/v1 in k8s 1.6.2)
Having done this, the pods crash (interestingly enough, only two are visible) with
read_config: no serverID / URL match found. Check slapd -h arguments
Is your feature request related to a problem? Please describe.
I am exposing LDAP within the cluster and don't want SSL. The ltb-passwd chart defaults to using ldaps.
Describe the solution you'd like
There should be an option to select the ldap or ldaps port (see the sketch below).
Describe alternatives you've considered
Creating a separate chart for ltb-passwd
Additional context
Add any other context or screenshots about the feature request here.
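A sketch of the kind of option being requested, using a hypothetical ldapPort value for the ltb-passwd subchart (not an existing key):

ltb-passwd:
  ldapPort: 389   # hypothetical: point the password UI at the plain ldap port instead of ldaps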
Hello,
I am unable to get OpenLDAP running on a 1.21.1 Kubernetes cluster; I get this error:
2021-06-30T10:25:49.549204376+02:00 60dc2a8d @(#) $OpenLDAP: slapd 2.4.57+dfsg-1~bpo10+1 (Jan 30 2021 06:59:51) $
2021-06-30T10:25:49.549230976+02:00 Debian OpenLDAP Maintainers <[email protected]>
2021-06-30T10:25:49.598702423+02:00 60dc2a8d olcMirrorMode: value #0: <olcMirrorMode> database is not a shadow
2021-06-30T10:25:49.598734095+02:00 60dc2a8d config error processing olcDatabase={0}config,cn=config: <olcMirrorMode> database is not a shadow
2021-06-30T10:25:49.598738830+02:00 60dc2a8d slapd stopped.
2021-06-30T10:25:49.598743525+02:00 60dc2a8d connections_destroy: nothing to destroy.
I am using this helm install:
helm install openldap helm-openldap/openldap-stack-ha \
--namespace openldap \
--create-namespace \
--set replicaCount=1 \
--set replication.enabled=false \
--set image.tag=1.5.0 \
--set-string logLevel="trace" \
--set-string env.LDAP_ORGANISATION="Test LDAP" \
--set-string env.LDAP_DOMAIN="ldap.internal.xxxxxxx.com" \
--set-string env.LDAP_BACKEND="mdb" \
--set-string env.LDAP_TLS="true" \
--set-string env.LDAP_TLS_ENFORCE="false" \
--set-string env.LDAP_REMOVE_CONFIG_AFTER_SETUP="true" \
--set-string env.LDAP_ADMIN_PASSWORD="admin" \
--set-string env.LDAP_CONFIG_PASSWORD="config" \
--set-string env.LDAP_READONLY_USER="true" \
--set-string env.LDAP_READONLY_USER_USERNAME="readonly" \
--set-string env.LDAP_READONLY_USER_PASSWORD="password"
Any help would be appreciated. Thanks!
I was able to get the OpenLDAP backup to work using the slapd service, but the restore part didn't seem to work. Is there any workaround for backup and restore? It is a very important feature.
Found issue #42 related to this, which seems to have been marked won't-fix and closed.
Is your feature request related to a problem? Please describe.
The actual CI/CD is run outside GitHub.
Use GitHub Actions to install the chart, perform chaos tests and LDAP actions
Use Selenium to test phpldapadmin and self-service password integration with LDAP
Trigger on PR
Make Ltb-passwd use the secret created with openldap.
Edit the deployment.yaml of ltb-passwd to parameterize BINDDN and BINDPW.
Update the note.txt: rename the chart name in the note.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Everything works
Screenshots
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ldap-phpldapadmin-5cbd44ffc5-c55t6 1/1 Running 0 2m20s
ldap-0 0/1 CrashLoopBackOff 4 (51s ago) 2m20s
Additional context
There is this error in the logs:
$ kubectl logs ldap-0
sed: -e expression #1, char 30: unknown option to `s'
Describe the bug
Error in stateful set:
create Pod openldap-0 in StatefulSet openldap failed error: Pod "openldap-0" is invalid: [spec.volumes[1].secret.secretName: Required value, spec.initContainers[0].volumeMounts[1].name: Not found: "secret-certs"]
Any suggestions on what I'm doing wrong, or whether something in the Helm chart is wrong? :) Thank you!
To Reproduce
Steps to reproduce the behavior:
helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
values-openldap.yaml
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: "fast-disks"
  ldapDomain: "test.local
helm install openldap helm-openldap/openldap-stack-ha -f values-openldap.yaml
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:49:13Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.6", GitCommit:"ad3338546da947756e8a88aa6822e9c11e7eac22", GitTreeState:"clean", BuildDate:"2022-04-14T08:43:11Z", GoVersion:"go1.17.9", Compiler:"gc", Platform:"linux/amd64"}
Additional context
Add any other context about the problem here.
Hello Jean Philippe,
Release tags in this repo are a bit confusing. Usually they represent Helm chart versions.
What do you think about updating them to align with the chart versions?
Btw, the page on Artifact Hub is a bit outdated and confusing. I found that the chart version is wrong and the configuration part belongs to an old release.
Please let me know if I can help!
Regards,
Roman
For me it was convenient to set the LDAP_BASE_DN via the env file, since the image supports it (see https://github.com/EugenMayer/docker-rancher-extra-catalogs/blob/master/templates/openldap/26/docker-compose.yml#L14).
Are there any particular reasons you do not support setting this out of the box, or are there technical issues you are trying to avoid?
Great to have your insight / design decision here, thanks!
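For reference, a minimal sketch of passing it through the chart's env map (present in the default values quoted earlier on this page), assuming the osixia image honours LDAP_BASE_DN as the linked compose file suggests:

env:
  LDAP_BASE_DN: "dc=example,dc=org"   # assumed to be consumed by the osixia image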
Describe the bug
ldap-ltb-passwd keeps crashing because of a missing nginx file
To reproduce
Steps to reproduce the behavior:
Expected behavior
$ kubectl get pod | grep ldap-ltb-passwd
ldap-ltb-passwd-5494c456c-nkhf4 1/1 Running 0 1m
Screenshots
$ kubectl logs deployment/ldap-ltb-passwd
2022-06-24.09:24:20 [STARTING] ** [nginx] [24] Starting nginx 1.23.0
nginx: [emerg] open() "/etc/nginx/nginx.conf.d/php-fpm.conf" failed (2: No such file or directory) in /etc/nginx/sites.available/ssp.conf:11
Describe the bug
Installation fails on K8s v1.19.7
To Reproduce
Steps to reproduce the behavior:
helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
helm --namespace iam upgrade --install openldap helm-openldap/openldap-stack-ha -f values-openldap.yaml
Error message
kubectl describe statefulset ...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 45s statefulset-controller create Claim data-openldap-openldap-stack-ha-0 Pod openldap-openldap-stack-ha-0 in StatefulSet openldap-openldap-stack-ha success
Warning FailedCreate 4s (x14 over 45s) statefulset-controller create Pod openldap-openldap-stack-ha-0 in StatefulSet openldap-openldap-stack-ha failed error: Pod "openldap-openldap-stack-ha-0" is invalid: [spec.volumes[1].secret.secretName: Required value, spec.initContainers[0].volumeMounts[1].name: Not found: "secret-certs"]
Additional context
values-openldap.yaml
:
env:
  LDAP_ORGANISATION: "redacted"
  LDAP_DOMAIN: "redacted"
  LDAP_READONLY_USER_PASSWORD: "redacted"
service:
  annotations: {} # TODO: set dns record in internal dns
  type: LoadBalancer
persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 8Gi
adminPassword: redacted
configPassword: redacted
ltb-passwd:
  enabled: false
phpldapadmin:
  enabled: false
Describe the bug
The ldif file placed at /container/service/slapd/assets/config/bootstrap/ldif/custom is not getting applied with ldapmodify.
To Reproduce
Steps to reproduce the behavior:
I added the below in my values.yaml according to the documentation, targeting /container/service/slapd/assets/config/bootstrap/ldif/custom:
customFileSets:
but it's not taking effect.
It's not modifying the LDAP configuration with ldapmodify.
Is there any reason the chart is no longer updated on https://artifacthub.io/packages/helm/helm-openldap/openldap ?
I find the platform fairly handy to follow / watch releases and search for existing charts across chart repositories.
Any way to help you onboard it once again, or is it by design and you do not like that to happen? Happy to hear your thoughts on that. Thanks!
Hi,
I've configured the OpenLDAP Audit Logging overlay (see [OpenLDAP Software 2.4 Administrator's Guide, 12.2. Audit Logging](https://www.openldap.org/doc/admin24/overlays.html)) so that all my audit events are now written into a specific file. Now I would like to export the content of this audit log file to a remote destination (in my case my ELK stack).
Moreover, since the main slapd process is launched with a non-root user (which is fine), the process has no permission to write into the /var/log folder.
It would be very convenient if I could add some extra containers to the Pod template of the StatefulSet.
For example, Helm Charts provided by Bitnami always have the ability to declare some (extra) sidecars and initContainers next to the default ones.
With such a feature I could declare an initContainer to set the right permissions to write the auditlog file and also a sidecar to run the necessary logic to export its content at a remote destination.
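A rough sketch of the requested feature, assuming hypothetical initContainers, sidecars and extraVolumes values mirroring the Bitnami convention; names, images and paths are illustrative only:

extraVolumes:
  - name: auditlog
    emptyDir: {}
initContainers:
  - name: fix-auditlog-permissions      # illustrative: make /var/log writable for the non-root slapd user
    image: busybox
    command: ["sh", "-c", "chown 911:911 /var/log && chmod 0775 /var/log"]
    volumeMounts:
      - name: auditlog
        mountPath: /var/log
sidecars:
  - name: auditlog-shipper              # illustrative: forward the audit log to a remote destination
    image: docker.elastic.co/beats/filebeat:8.5.1
    volumeMounts:
      - name: auditlog
        mountPath: /var/log
        readOnly: true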
Hi,
I'm trying to configure customLdifFiles and noticed that the bootstrap section in which they're normally loaded during container startup was skipped because of the presence of the files below.
Further digging showed the files below were added shortly after container startup, but before startup.sh was executed.
I think these files may be generated by slapadd or slapd, but I can't figure out when those would be invoked during container startup.
Where are these files coming from, and how can I prevent them from being created and causing my customLdifFiles to be ignored?
Thank you!
/var/lib/ldap/DUMMY
/etc/ldap/slapd.d/cn=config/olcDatabase={0}config.ldif
/etc/ldap/slapd.d/cn=config/cn=module{0}.ldif
/etc/ldap/slapd.d/cn=config/cn=schema.ldif
/etc/ldap/slapd.d/cn=config/olcDatabase={-1}frontend.ldif
/etc/ldap/slapd.d/cn=config/cn=schema/cn={3}inetorgperson.ldif
/etc/ldap/slapd.d/cn=config/cn=schema/cn={2}nis.ldif
/etc/ldap/slapd.d/cn=config/cn=schema/cn={1}cosine.ldif
/etc/ldap/slapd.d/cn=config/cn=schema/cn={0}core.ldif
/etc/ldap/slapd.d/cn=config.ldif
I just wanted to ask if there is an option to restore an ldif backup from an existing instance?
You can close this if there is already documentation for it :)
Describe the bug
The default values.yaml doesn't set the correct LDAP port.
To Reproduce
Steps to reproduce the behavior:
I disabled ldap-ltb-passwd because of #65
Expected behavior
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ldap-phpldapadmin-5cbd44ffc5-xd7xd 1/1 Running 0 1m
ldap-0 1/1 Running 0 1m
ldap-1 1/1 Running 0 1m
ldap-2 1/1 Running 0 1m
Screenshots
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
ldap-phpldapadmin-5cbd44ffc5-c55t6 1/1 Running 0 2m20s
ldap-0 0/1 CrashLoopBackOff 4 (51s ago) 2m20s
$ kubectl logs ldap-0 | grep LDAP_PORT
...
*** DEBUG | 2022-06-24 09:44:17 | LDAP_PORT = tcp://10.43.219.179:389
...
Possible Solutions
Add to values.yaml:
env:
  ...
  LDAP_PORT: "389"
  LDAPS_PORT: "636" # I am not sure this one is required.
I want to set the log level of slapd and I can't do it. The underlying image gets the log level from the LDAP_LOG_LEVEL environment variable but there doesn't seem to be a way to pass its value through the Helm chart.
I would like to be able to set the container LDAP_LOG_LEVEL, either through a dedicated Helm chart variable or through a custom envvar file mounted on the container /container/environment/01-custom
dir as the image README suggests.
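For reference, the default values quoted in an earlier issue on this page do include an env map with LDAP_LOG_LEVEL; if the chart version in use exposes that map, the override might be as simple as the sketch below (whether it actually reaches the container is exactly what this issue questions):

env:
  LDAP_LOG_LEVEL: "256"   # OpenLDAP loglevel bitmask; 256 = stats (connections/operations/results)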
I am facing moderate pain trying to start an HA instance with customLdifFiles.
It seems this might be related to #31.
There, the issue is that bootstrapping is skipped if one of the following dirs is not empty:
In my current instance, I see bootstrapping is skipped:
*** INFO | 2021-10-30 16:17:44 | Start OpenLDAP...
*** INFO | 2021-10-30 16:17:44 | Waiting for OpenLDAP to start...
*** INFO | 2021-10-30 16:17:44 | Add TLS config...
*** INFO | 2021-10-30 16:17:46 | Add replication config...
*** INFO | 2021-10-30 16:17:50 | Stop OpenLDAP...
*** INFO | 2021-10-30 16:17:50 | Configure ldap client TLS configuration...
*** INFO | 2021-10-30 16:17:50 | Remove config files...
*** INFO | 2021-10-30 16:17:50 | First start is done...
*** INFO | 2021-10-30 16:17:50 | Remove file /container/environment/99-default/default.startup.yaml
*** INFO | 2021-10-30 16:17:50 | Environment files will be proccessed in this order :
Caution: previously defined variables will not be overriden.
/container/environment/99-default/default.yaml
In /etc/ldap/slapd.d I see the following files:
cn=config/ docker-openldap-was-admin-password-set docker-openldap-was-started-with-tls
cn=config.ldif docker-openldap-was-started-with-replication
/var/lib/ldap contains the database, which might be empty during the first start.
I've deployed fresh, i.e. with no PV and no PVC. Still, the bootstrapping is skipped.
This lets me assume that something writes into this dir before https://github.com/osixia/docker-openldap/blob/v1.5.0/image/service/slapd/startup.sh#L182-L183 is reached.
I also checked with logLevel: debug, however there is no debug line indicating why bootstrapping might be skipped, so this is not really helping.
Maybe @ivan-c can share how he made bootstrapping work?
@jp-gouin Are tests still working with respect to this as you mentioned in #31 (comment)?
Adding a custom ldif results in a container error:
chmod: cannot access '/container/service/slapd/assets/certs/dhparam.pem': No such file or directory
*** ERROR | 2021-05-07 00:22:57 | /container/run/startup/slapd failed with status 1
*** INFO | 2021-05-07 00:22:57 | Killing all processes...
The pod spawns correctly without any custom ldif.
The yaml is:
# Custom openldap configuration files used to override default settings
customLdifFiles:
  app.ldif: |-
    dn: cn=app-user,dc=app,dc=dev,dc=example,dc=net
    userPassword: app-user-secret
    description: user
    objectClass: simpleSecurityObject
    objectClass: organizationalRole
    cn: app
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Configuration
openldap-values.yaml
global:
  storageClass: "gp2"
  ldapDomain: onwalk.net
  adminPassword: {{ ExtraVars.password }}
  configPassword: {{ ExtraVars.password }}
replicaCount: 1
customTLS:
  enabled: true
  secret: "openldap-tls" # contains the CA pem and key
ltb-passwd:
  ingress:
    enabled: true
    hosts:
      - "ldap-ltb.onwalk.net"
phpldapadmin:
  enabled: true
  ingress:
    enabled: true
    hosts:
      - ldap-admin.onwalk.net
  env:
    PHPLDAPADMIN_LDAP_CLIENT_TLS_REQCERT: "never"
When bootstrapping a deployment with an existing ldif file via customLdifFiles, the entry must not be removed afterwards, as otherwise the deployments will fail: they look for a symlink to this file but can't find it if the value gets removed from values.yaml.
One needs to keep at least empty content:
customLdifFiles:
01-default-users.ldif: |-
Bootstrapping happens only during the first deployment into a fresh PV, so removing the content afterwards does no harm.
Not sure if this qualifies as a "bug", but I wanted to at least have it mentioned here :)
Add a metric exporter as a sidecar of the statefulset
Describe the bug
serviceAccount.name configuration is not working. It is always set to the default sa.
This is my values.yaml:
serviceAccount:
  create: true
  name: "ldap-sa"
To Reproduce
Steps to reproduce the behavior:
kubectl describe statefulset openldap
Additional context
The ServiceAccount configuration feature was added in 7b9c55f. But why are these lines commented out? Is there a special reason?
helm-openldap/templates/statefullset.yaml
Lines 75 to 77 in 43f5e39
Describe the bug
The podAnnotations setting in values.yaml does not get applied to the OpenLDAP pods.
To Reproduce
Steps to reproduce the behavior:
- Install the chart with --set podAnnotations=name=value
- On the openldap-stack-ha-0 pod, name=value does not appear in the list of annotations
Expected behavior
Annotations in the podAnnotations field are expected to be applied to the openldap-stack-ha-* pods, per the documentation.
Screenshots
NA
Desktop (please complete the following information):
Smartphone (please complete the following information):
Additional context
NA
Hello,
Is it possible to add an olcAccess rule via the customLdifFiles section?
Here's my customLdifFiles parameter of the Helm values configuration (simplified):
customLdifFiles:
  02-t.example.com.ldif: |-
    version: 1
    dn: dc=t,dc=example,dc=com
    associateddomain: t.example.com
    dc: t
    objectclass: dNSDomain
    objectclass: domainRelatedObject
    objectclass: top
  03-infra.t.example.com.ldif: |-
    version: 1
    dn: dc=infra,dc=t,dc=example,dc=com
    associateddomain: infra.t.example.com
    dc: infra
    objectclass: dNSDomain
    objectclass: domainRelatedObject
    objectclass: top
  99-access_rules.ldif: |-
    version: 1
    dn: olcdatabase={1}mdb,cn=config
    changetype: modify
    add: olcaccess
    olcaccess: to dn.subtree="dc=infra,dc=t,dc=example,dc=com" by dn.exact="uid=admin,dc=infra,dc=t,dc=example,dc=com" manage by dn.exact="uid=odmin,dc=infra,dc=t,dc=example,dc=com" read
As I can see, 02-t.example.com.ldif and 03-infra.t.example.com.ldif get applied without any difficulties, but 99-access_rules.ldif doesn't.
When the container starts, I exec bash (kubectl exec ...) and I see this file in the /container/service/slapd/assets/config/bootstrap/ldif/custom directory. Moreover, I can apply it manually (ldapadd -H ldapi:/// -Y EXTERNAL < /container/service/slapd/assets/config/bootstrap/ldif/custom/99-access_rules.ldif).
Why isn't it applied during the initialization process?
Thanks in advance.
See the changelogs
- Remove environment variable LDAP_TLS_PROTOCOL_MIN as it has no effect, see #69
Hello @jp-gouin, thanks for all your hard work on this chart. I'm new to Helm and k8s.
I'm trying to set up an LDAP cluster and ldapadmin. The problem is that the ldapadmin page does not load correctly with the path / and the default pathType ImplementationSpecific, and if I try to change the path type to Prefix, the setting is ignored and the pathType ImplementationSpecific is used. If I manually edit the ingress setting to pathType: Prefix, the page loads correctly.
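For reference, a minimal sketch of the values being attempted, assuming the phpldapadmin subchart passes ingress.pathType through (whether it actually does is what this issue questions); the host is illustrative:

phpldapadmin:
  ingress:
    enabled: true
    pathType: Prefix
    hosts:
      - "phpldapadmin.example.org"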
To Reproduce
Steps to reproduce the behavior:
the ldapadmin part of the config file is:
The ingress config of the running pod looks like this:
Expected behavior
PHP ldap admin to load at: https://phpldapadmin.{domain}
Screenshots
see attached screenshot:
Desktop (please complete the following information):
Thanks.
Templatisation of the replication configuration
Edit the config map with this info:
LDAP_REPLICATION_CONFIG_SYNCPROV: "binddn=\"cn=admin,cn=config\" bindmethod=simple credentials=$LDAP_CONFIG_PASSWORD searchbase=\"cn=config\" type=refreshAndPersist retry=\"60 +\" timeout=1 "
LDAP_REPLICATION_DB_SYNCPROV: "binddn=\"cn=admin,$LDAP_BASE_DN\" bindmethod=simple credentials=$LDAP_ADMIN_PASSWORD searchbase=\"$LDAP_BASE_DN\" type=refreshAndPersist interval=00:00:00:10 retry=\"60 +\" timeout=1 "
To add the interval and timeout to the values.yaml.
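A sketch of the values this templatisation would enable, matching the replication block that already appears in the chart's values.yaml:

replication:
  enabled: true
  retry: 60
  timeout: 1
  interval: 00:00:00:10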
Currently the appVersion of the chart is 2.4.57, but the image version is already 2.5.0. We should bump it to reflect that.
Hi @jp-gouin, I see the same issue as last time; the ldap pods just won't start. I posted logs in the last issue, not sure if it's notifying you, as it's "closed".
Interestingly enough, I wasn't able to expand the existing setup to 3 pods; the 3rd pod wouldn't start. But I was able to "clone" the 2-pod setup to my new cluster, and it works.
I'm still not able to install it with helm install (using helm3, openldap 1.3.0 image).
Maybe you have a minute to look at this and give me a hint.
Describe the bug
Not able to port-forward port 389
To Reproduce
Steps to reproduce the behavior:
kubectl port-forward openldap-phpldapadmin-cfcc57847-9d7bm 3890:389 -n tools
Error:
Forwarding from 127.0.0.1:3890 -> 389
Forwarding from [::1]:3890 -> 389
Handling connection for 3890
E0316 18:59:44.217607 65888 portforward.go:400] an error occurred forwarding 3890 -> 389: error forwarding port 389 to pod 180293b77f6fd52a2a7ba76457212268e23f2a612f5670073fda231d373c978a, uid : failed to execute portforward in network namespace "/var/run/netns/cni-e814d0a1-605e-a6c4-9158-9cf87936d975": failed to dial 389: dial tcp4 127.0.0.1:389: connect: connection refused
Expected behavior
Able to port-forward port 389 to be able to connect from other external machines
Context
kind v0.10.0 go1.15.7 darwin/amd64
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-21T01:11:42Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}