
gluu-docker's Introduction

This repo has been deprecated in favor of https://github.com/GluuFederation/cloud-native-edition.

| Version | Status | Release Date | Community EOL Date | Enterprise EOL Date |
|---------|--------|--------------|--------------------|---------------------|
| 3.1.6.x | Active | April 2019   | October 2020       | April 2021          |

Gluu Server Docker Edition

Gluu Server Docker Edition Documentation

Code Repositories

Repositories for supported images are shown below:

Image Repositories

Images are hosted at Docker Hub:

Examples

Single Host

  • The directory contains a README.md that guides you through deploying a basic single-host Gluu Server stack.

Swarm

  • The directory contains a README.md that guides you through deploying a basic multi-host Gluu Server stack.

Google Kubernetes Engine

  • The directory contains a README.md that guides you through deploying a basic Gluu Server stack on Google Kubernetes Engine.

Minikube

  • The directory contains a README.md that guides you through deploying a basic Gluu Server stack on Minikube.

AWS

  • The directory contains a README.md that guides you through deploying a basic Gluu Server stack on AWS.

Issues

If you find any issues, please post them on the customer support portal at support.gluu.org.

gluu-docker's People

Contributors

ayowel, ggallard, iromli, moabu, nynymike, shmorri, shouro, willow9886


gluu-docker's Issues

nginx unable to resolve new container IP address in swarm mode

Problem

In a multi-host Swarm-based scenario, when a service is scaled up or down, NGINX does not see the service's IP changes until nginx is reloaded.

For illustration:

upstream oxauth_backend {
        server oxauth:8080;
}


location /oxauth {
        proxy_pass http://oxauth_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
}

Given a service called oxauth, running nslookup oxauth 127.0.0.11 inside any container produces the correct IP list:

Server:         127.0.1.11
Address:        127.0.1.11

Name:   oxauth
Address: 172.9.0.2

After running docker service scale oxauth=n (where n is the desired number of containers), the nslookup oxauth command still produces a correct IP list:

Server:         127.0.1.11
Address:        127.0.1.11

Name:   oxauth
Address: 172.9.0.2

Name:   oxauth
Address: 172.9.0.3

But the problem is that nginx only sees the cached IP address (172.9.0.2), so when the oxauth container at 172.9.0.2 dies, nginx won't route traffic to the unrecognized oxauth container (172.9.0.3).

Solution

There are two solutions:

  1. Set the upstream in the nginx location block instead, using a variable so the name is re-resolved at runtime.

    resolver 127.0.0.11 valid=10s;
    location /oxauth {
         set $oxauth_backend oxauth:8080;
         proxy_pass http://$oxauth_backend;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-Host $host:$server_port;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;
         proxy_redirect off;
    }
    

    Pros:

    • oxauth container IPs are resolved at runtime (no need to reload nginx).

    Cons:

    • Can't use load-balancing algorithms.
  2. Set the upstream in the nginx location block instead, pointing to another reverse proxy (i.e. Træfik).

    resolver 127.0.0.11 valid=10s;
    location /oxauth {
         set $oxauth_backend traefik:80;
         proxy_pass http://$oxauth_backend;
         # set Host header to domain registered in traefik
         proxy_set_header Host oxauth.docker.localhost;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-Host $host:$server_port;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;
         proxy_redirect off;
    }
    

    Pros:

    • oxauth container IPs are resolved at runtime (no need to reload nginx).
    • Traefik supports load-balancing algorithms.

    Cons:

    • Traefik is only able to connect to Swarm managers, so scaling Traefik means adding more manager nodes.

Consul stuck at Check "service:<container-id>:oxauth:8080"

I have not been able to get run_all.sh to work on OS X Sierra. Upon checking

docker logs -f consul

I see

    2018/07/05 07:44:06 [INFO] agent: Synced service "c3d3658a6a8d:oxauth:8080"
    2018/07/05 07:44:06 [INFO] agent: Synced service "c3d3658a6a8d:oxshibboleth:8080"
    2018/07/05 07:44:10 [WARN] agent: Check "service:c3d3658a6a8d:oxauth:8080" HTTP request failed: Get http://172.18.0.4:8080/oxauth/.well-known/openid-configuration: dial tcp 172.18.0.4:8080: connect: connection refused
    2018/07/05 07:44:14 [WARN] agent: Check "service:bad3015e27bb:oxauth:8080" HTTP request failed: Get http://172.18.0.5:8080/oxauth/.well-known/openid-configuration: dial tcp 172.18.0.5:8080: connect: connection refused
    2018/07/05 07:44:25 [WARN] agent: Check "service:c3d3658a6a8d:oxauth:8080" HTTP request failed: Get http://172.18.0.4:8080/oxauth/.well-known/openid-configuration: dial tcp 172.18.0.4:8080: connect: connection refused

All other services are waiting on consul to be ready. For example:

docker logs -f ldap gives

2018-07-05 07:43:09,031 [INFO] [wait-for-it] - Hi world, waiting for Consul to be ready before running /opt/scripts/entrypoint.sh
2018-07-05 07:48:09,942 [ERROR] [wait-for-it] - Consul not ready, after 300 seconds.
2018-07-05 07:48:12,392 [INFO] [wait-for-it] - Hi world, waiting for Consul to be ready before running /opt/scripts/entrypoint.sh

docker logs -f oxauth gives

2018-07-05 07:50:13,956 [WARNING] [wait-for-it] - Consul not populated yet, waiting for key=gluu/config/oxauth_openid_jwks_fn
2018-07-05 07:50:18,968 [WARNING] [wait-for-it] - Consul not populated yet, waiting for key=gluu/config/oxauth_openid_jwks_fn
2018-07-05 07:50:23,983 [WARNING] [wait-for-it] - Consul not populated yet, waiting for key=gluu/config/oxauth_openid_jwks_fn
2018-07-05 07:50:28,999 [WARNING] [wait-for-it] - Consul not populated yet, waiting for key=gluu/config/oxauth_openid_jwks_fn

Am I missing something?

Use oxTrust and oxAuth v2.1

Currently we're using oxTrust and oxAuth v1.7.0. I had a problem when running oxTrust and logging in with the admin account: the error message said oxTrust was unable to access user data.

I have talked to @mzico about this error, and basically @mzico suggested to upgrade our oxTrust and oxAuth images to use v2.1.

Single host setup does not work

BUG

  1. run script ./gluu-docker/examples/single-host/run_all.sh
  2. containers are created, but all of them loop waiting for Consul

The Consul container's port 8500 is reachable from all containers, and tcpdump shows traffic between them:

CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS              PORTS                                                                      NAMES
04d4831a85e6        gluufederation/nginx:3.1.2_dev          "/opt/scripts/wait..."   18 hours ago        Up 29 seconds       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp                                   nginx
4e62ccab6557        gluufederation/oxpassport:3.1.2_dev     "/opt/scripts/wait..."   18 hours ago        Up 2 minutes        8090/tcp                                                                   oxpassport
939078fb5e1d        gluufederation/oxauth:3.1.2_dev         "/opt/scripts/wait..."   18 hours ago        Up About a minute   8080/tcp                                                                   oxauth
0f8a3c7baed9        gluufederation/oxtrust:3.1.2_dev        "/opt/scripts/wait..."   18 hours ago        Up About a minute   8080/tcp                                                                   oxtrust
57b2f28475d2        consul                                  "docker-entrypoint..."   18 hours ago        Up 18 hours         8300-8302/tcp, 8301-8302/udp, 8600/tcp, 8600/udp, 0.0.0.0:8500->8500/tcp   consul
77f51f0f3449        gluufederation/opendj:3.1.2_dev         "/opt/scripts/wait..."   18 hours ago        Up About a minute                                                                              ldap
4597bb7e0465        gluufederation/oxshibboleth:3.1.2_dev   "/opt/scripts/wait..."   18 hours ago        Up About a minute   8080/tcp                                                                   oxshibboleth
 docker logs 04
2018-05-29 15:03:47,507 [INFO] [wait-for-it] - Hi world, waiting for Consul to be ready before running /opt/scripts/entrypoint.sh
2018-05-29 15:08:55,100 [ERROR] [wait-for-it] - Consul not ready, after 300 seconds.
2018-05-29 15:08:56,735 [INFO] [wait-for-it] - Hi world, waiting for Consul to be ready before running /opt/scripts/entrypoint.sh
2018-05-29 15:13:57,364 [ERROR] [wait-for-it] - Consul not ready, after 300 seconds.
2018-05-29 15:13:58,277 [INFO] [wait-for-it] - Hi world, waiting for Consul to be ready before running /opt/scripts/entrypoint.sh
2018-05-29 15:18:58,892 [ERROR] [wait-for-it] - Consul not ready, after 300 seconds.
2018-05-29 15:18:59,884 [INFO] [wait-for-it] - Hi world, waiting for Consul to be ready before running /opt/scripts/entrypoint.sh
2018-05-29 15:24:00,528 [ERROR] [wait-for-it] - Consul not ready, after 300 seconds.
2018-05-30 00:21:05,468 [INFO] [wait-for-it] - Hi world, waiting for Consul & LDAP to be ready before running /opt/scripts/entrypoint.sh
2018-05-30 00:21:05,470 [WARNING] [wait-for-it] - Consul not populated yet, waiting for key=gluu/config/oxauth_openid_jwks_fn
2018-05-30 00:21:10,479 [WARNING] [wait-for-it] - Consul not populated yet, waiting for key=gluu/config/oxauth_openid_jwks_fn
2018-05-30 00:21:15,487 [WARNING] [wait-for-it] - Consul not populated yet, waiting for key=gluu/config/oxauth_openid_jwks_fn
2018-05-30 00:21:20,495 [WARNING] [wait-for-it] - Consul not populated yet, waiting for key=gluu/config/oxauth_openid_jwks_fn
2018-05-30 00:21:25,504 [WARNING] [wait-for-it] - Consul not populated yet, waiting for key=gluu/config/oxauth_openid_j
tcpdump -i any -n port 8500
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
11:49:21.624938 IP 172.18.0.3.45462 > 172.18.0.4.8500: Flags [P.], seq 4191934119:4191934300, ack 3044516497, win 346, options [nop,nop,TS val 16912889 ecr 16911637], length 181
11:49:21.624962 IP 172.18.0.3.45462 > 172.18.0.4.8500: Flags [P.], seq 0:181, ack 1, win 346, options [nop,nop,TS val 16912889 ecr 16911637], length 181
11:49:21.625219 IP 172.18.0.4.8500 > 172.18.0.3.45462: Flags [P.], seq 1:178, ack 181, win 352, options [nop,nop,TS val 16912889 ecr 16912889], length 177
11:49:21.625236 IP 172.18.0.4.8500 > 172.18.0.3.45462: Flags [P.], seq 1:178, ack 181, win 352, options [nop,nop,TS val 16912889 ecr 16912889], length 177
11:49:21.625297 IP 172.18.0.3.45462 > 172.18.0.4.8500: Flags [.], ack 178, win 354, options [nop,nop,TS val 16912889 ecr 16912889], length 0
11:49:21.625313 IP 172.18.0.3.45462 > 172.18.0.4.8500: Flags [.], ack 178, win 354, options [nop,nop,TS val 16912889 ecr 16912889], length 0
11:49:21.631752 IP 172.18.0.6.50408 > 172.18.0.4.8500: Flags [P.], seq 4131125301:4131125482, ack 1188845966, win 354, options [nop,nop,TS val 16912891 ecr 16912139], length 181
11:49:21.631768 IP 172.18.0.6.50408 > 172.18.0.4.8500: Flags [P.], seq 0:181, ack 1, win 354, options [nop,nop,TS val 16912891 ecr 16912139], length 181
11:49:21.631979 IP 172.18.0.4.8500 > 172.18.0.6.50408: Flags [P.], seq 1:178, ack 181, win 361, options [nop,nop,TS val 16912891 ecr 16912891], length 177
11:49:21.631995 IP 172.18.0.4.8500 > 172.18.0.6.50408: Flags [P.], seq 1:178, ack 181

Port 8500 was added to docker-compose.yml manually for debugging.

screenshot from 2018-05-29 18 35 38
screenshot from 2018-05-29 18 36 45

Add Settings for secured consul connection.

I'm looking at deploying Gluu into an environment that already has a Consul and Nomad implementation present. However, my Consul implementation is secured with TLS client/server authentication in addition to ACL tokens.

For the Python Consul client to communicate with Consul, it will need the necessary client/server certificates as well as the ACL token set.

Additionally, I would suggest a Vault option for users with a Vault implementation, with secrets stored in Vault rather than in Consul, since Consul's KV store isn't really considered a secure store.

Better Error Handling In `run_all.sh`

run_all.sh, while initially meant to be a simple automated example of deploying Gluu Server, is becoming the preferred mechanism to launch Gluu Server on a single host. As this is the case, we need better error handling and mechanisms that show the end user why a failure is happening.

3.1.4 Testing

3.1.4 will be released soon and we should do testing on the images. This is related to #38 as we need to figure out the upgrade flow regarding containers specifically.

Test Scalability

Gluu Server Docker Edition needs to be scalable across multiple VMs, primarily oxAuth.

Ideally this would improve performance as the workload can be distributed by a load-balancer across the whole cluster.

Consul cluster broken after Swarm manager node restarted

Given 2 Swarm nodes (manager and worker) with Consul deployed to each node, we have interesting use cases:

Usecase 1

If the manager node is stopped, the Consul container on the worker node is still able to serve requests.

Usecase 2

If the manager node comes back, neither Consul container is able to serve requests. Reading the logs, the leader is gone and the two Consul instances can't agree on which one should be the new leader; both claim to be the leader.

My initial thought is that this happens because quorum cannot be established. I'm proposing to deploy 3 instances (1 per Swarm node).
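For context on why two Consul servers cannot recover on their own: Raft-based systems like Consul need a strict majority (quorum) of servers to elect a leader. A quick sketch of the arithmetic (illustrative helper, not Consul code):

```python
def raft_quorum(servers):
    """Minimum number of Raft servers that must agree: floor(n/2) + 1."""
    return servers // 2 + 1

# With 2 servers, quorum is 2, so losing either one (or a partition
# between them) blocks leader election; with 3 servers, quorum is
# still 2, so the cluster tolerates one failure.
```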

OpenDJ pkcs12 Certificates Do Not Match Between Containers

OpenDJ container:

Keystore type: PKCS12
Keystore provider: SunJSSE

Your keystore contains 1 entry

Alias name: server-cert
Creation date: Mar 20, 2018
Entry type: trustedCertEntry

Owner: CN=172.18.0.3, O=OpenDJ RSA Self-Signed Certificate
Issuer: CN=172.18.0.3, O=OpenDJ RSA Self-Signed Certificate
Serial number: 333f9179
Valid from: Mon Mar 19 22:25:23 UTC 2018 until: Sun Mar 14 22:25:23 UTC 2038
Certificate fingerprints:
         MD5:  CB:DE:67:18:AF:D7:7D:13:74:CE:BB:7F:67:C3:5B:8C
         SHA1: BD:2F:2F:22:04:D1:1B:A4:CB:A9:05:37:36:79:7B:CB:41:91:9B:92
         SHA256: 19:5F:60:50:80:DC:43:E5:55:72:F2:18:A7:01:6B:46:05:9C:C6:1B:0F:56:47:19:E2:2C:3F:07:83:78:CC:62
Signature algorithm name: SHA1withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3


*******************************************
*******************************************

oxTrust container:

Keystore type: PKCS12
Keystore provider: SunJSSE

Your keystore contains 1 entry

Alias name: server-cert
Creation date: Mar 20, 2018
Entry type: trustedCertEntry

Owner: CN=172.18.0.3, O=OpenDJ RSA Self-Signed Certificate
Issuer: CN=172.18.0.3, O=OpenDJ RSA Self-Signed Certificate
Serial number: 5ec16226
Valid from: Mon Mar 19 22:09:48 UTC 2018 until: Sun Mar 14 22:09:48 UTC 2038
Certificate fingerprints:
         MD5:  F6:61:99:B1:56:E1:14:2B:57:2C:27:8E:82:53:AA:F2
         SHA1: 55:C9:9F:3D:85:8C:13:E3:7B:64:A1:47:DA:C7:76:C4:FC:A2:EF:E4
         SHA256: 35:EB:2A:CE:6D:23:82:4B:95:BB:DD:20:65:9C:3B:04:13:7B:8E:A3:BD:E7:8A:65:86:19:19:99:75:CC:C9:0C
Signature algorithm name: SHA1withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3


*******************************************
*******************************************

oxAuth container:

Keystore type: PKCS12
Keystore provider: SunJSSE

Your keystore contains 1 entry

Alias name: server-cert
Creation date: Mar 20, 2018
Entry type: trustedCertEntry

Owner: CN=172.18.0.3, O=OpenDJ RSA Self-Signed Certificate
Issuer: CN=172.18.0.3, O=OpenDJ RSA Self-Signed Certificate
Serial number: 5ec16226
Valid from: Mon Mar 19 22:09:48 UTC 2018 until: Sun Mar 14 22:09:48 UTC 2038
Certificate fingerprints:
         MD5:  F6:61:99:B1:56:E1:14:2B:57:2C:27:8E:82:53:AA:F2
         SHA1: 55:C9:9F:3D:85:8C:13:E3:7B:64:A1:47:DA:C7:76:C4:FC:A2:EF:E4
         SHA256: 35:EB:2A:CE:6D:23:82:4B:95:BB:DD:20:65:9C:3B:04:13:7B:8E:A3:BD:E7:8A:65:86:19:19:99:75:CC:C9:0C
Signature algorithm name: SHA1withRSA
Subject Public Key Algorithm: 2048-bit RSA key
Version: 3


*******************************************
*******************************************

Export/Import Old Configuration

Config-init needs to be able to export a configuration file that it can also import to launch the exact same instance again, similar to how setup.properties.last works.
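A minimal sketch of such an export/import round trip, assuming the configuration is held as a flat key-to-value mapping (file name and helper names are illustrative, not the actual config-init API):

```python
import json

def export_config(cfg, path="config.json"):
    """Dump the current configuration so the exact same instance can be
    recreated later, much like setup.properties.last."""
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2, sort_keys=True)

def import_config(path="config.json"):
    """Load a previously exported configuration for re-launch."""
    with open(path) as f:
        return json.load(f)
```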

Create oxd Image

run_all.sh fails on Ubuntu 18.04, complaining about No such container: consul

Hi,
I've installed a fresh Ubuntu 18.04 LTS server VM, then installed docker and docker-compose from the Docker project's own repository.
Now, if I run run_all.sh, either as a regular user who has been added to the docker group or as root, I get the same result:

docker: Error response from daemon: No such container: consul.

so I would guess something is still a bit unstable in Gluu's Docker support? Or have I made a mistake somewhere? Thanks! Karel

Production ready

Hi,
Just wondering about the roadmap for gluu-docker: when will it be production ready?

Compose file?

Just stumbled upon your project, and I find it really nice.

I'm positively surprised by this commitment to provide Docker images. I was struggling to find the docker run line in your documentation, until I realized that a Python app is doing it.

Do you plan to release some kind of docker-compose.yml file? Or just docker run documentation would be enough for me.

Let me know if there is anything I can do to ease your work.

Thanks!

Ubuntu gluucas Dockerfile typo

Hey,

I think there's a typo in the gluucas Dockerfile. It references ASIMBA_DOWNLOAD_URL but I think it should reference CAS_DOWNLOAD_URL.

Best,
SerialVelocity

Make Consul as fallback config storage

Config Management

Currently, the configuration is managed in Consul KV. To support non-Consul-based config storage (for example Kubernetes configmaps or Docker Swarm configs), Consul should be treated as optional (or kept as a fallback).

As each config storage mentioned above has a different design for setting and getting config, a generic filesystem-based implementation can be used. For example, Kubernetes configmaps support --from-file=/path/to/file for creating config. With this approach, other containers can use a mounted volume into which the config file created by the configmap is pulled. The container entrypoints should then be modified to read config from either Consul or a local file inside the container.
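The fallback described above might look roughly like this (the path and helper names are assumptions for illustration, not the actual container entrypoint code):

```python
import json
from pathlib import Path

def get_config(key, consul_get=None, config_file="/etc/gluu/conf/config.json"):
    """Read a config value from Consul when a client callable is
    available, otherwise fall back to a local JSON file (e.g. one
    mounted from a Kubernetes configmap or Docker Swarm config)."""
    if consul_get is not None:
        try:
            value = consul_get(key)
            if value is not None:
                return value
        except Exception:
            pass  # Consul unreachable: fall back to the local file
    data = json.loads(Path(config_file).read_text())
    return data.get(key)
```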

Container Designs

Given the idea that Consul is optional, there are a few things to consider and investigate.

config-init

  1. The generate command should create a config.json file, with an option to save it to Consul or not.
  2. The dump and load commands might be usable only when Consul is set as the config storage.

opendj

The container should not rely on Consul KV:

  1. registering and finding peers for replication should be avoided (investigating)

  2. writing the base64-encoded oxTrust config should be avoided (investigating)

openldap

  1. registering peers should be avoided (investigating)

  2. writing the base64-encoded oxTrust config should be avoided (investigating)

key-rotation

  1. Rotated key values should not be saved in Consul; a mounted volume is sufficient.

  2. The last rotation timestamp should not be saved in Consul; this also removes oxauth's JKS sync script.

nginx

The container should add an option to bypass the Consul Catalog when proxying ox backends; re-adding the GLUU_OX*_BACKEND env vars and treating them as a first-class option would work.

Create shared volume for oxShibboleth and oxTrust

oxShibboleth requires metadata, config, and other files that are generated by oxTrust. By using shared volumes, we can distribute those files from oxTrust to oxShibboleth.

Things to consider:

  • volume must be distributed to hosts where oxTrust is deployed
  • volume must be distributed to hosts where oxShibboleth is deployed

Some files/directories that might be required:

  • /opt/gluu/jetty/identity/conf/shibboleth3/idp/
  • /opt/gluu/jetty/identity/conf/shibboleth3/sp/
  • /opt/shibboleth-idp/conf
  • /opt/shibboleth-idp/metadata/
  • /opt/shibboleth-idp/sp/
  • /opt/shibboleth-idp/temp_metadata/
  • /etc/gluu/conf/

generate requires the --state option, although states are not ubiquitous

Hi,
I live in an EU country where we don't have a "State" per se. So I pressed Enter on the State question, and generation failed because the command line still passed --state, this time without a value.
When I edited run_all.sh and removed --state completely, generate started to complain:

Error: missing option "--state".

It would be good to allow generate to run without the --state option, and also to have run_all.sh check for an empty reply to the State question.
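One way to make --state optional is to accept an empty value and only emit the /ST= component in the certificate subject when it is non-empty. A sketch (build_subject is a hypothetical helper, not the real config-init code):

```python
def build_subject(country, state="", city="", org="", domain="", email=""):
    """Build an openssl -subj string, skipping any empty component so
    an empty --state no longer breaks certificate generation."""
    parts = [("C", country), ("ST", state), ("L", city),
             ("O", org), ("CN", domain), ("emailAddress", email)]
    return "".join("/{}={}".format(k, v) for k, v in parts if v)
```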

The command in `startServices` to gather the host IP doesn't work on Mac

startServices="DOMAIN=$domain HOST_IP=$(ip route get 1 | awk '{print $NF;exit}') docker-compose up -d nginx oxauth oxtrust > /dev/null 2>&1"

macOS doesn't have ip route get, so for ease of use we need an alternative, platform-agnostic method of gathering the host system's IP address.
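One platform-agnostic option is the UDP-socket trick: ask the kernel which local address would route to an external host, without actually sending any packets. A sketch (falls back to loopback when no route exists):

```python
import socket

def get_host_ip():
    """Return the host's outbound IPv4 address without relying on
    Linux-only tools like `ip route get`."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # connect() on a UDP socket sends nothing; it only selects the
        # local interface that would be used to reach this address.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no default route; fall back to loopback
    finally:
        s.close()
```

Something like `HOST_IP=$(python -c ...)` wrapping this would then work in run_all.sh on both Linux and macOS.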

Swarm Horizontal Scaling

oxAuth, and maybe oxTrust, need to be Swarm-capable, with multiple replicas across multiple VMs to handle load.

Ideally, multiple replicas on a single instance should be properly load-balanced, but the goal is to spread the replicas horizontally across VMs to increase auth/sec through more CPU and RAM.

Allow reading stale data from Consul KV and Catalog

By allowing stale reads from Consul KV and Catalog, any container can pull data from any Consul node even when the leader is gone. This makes containers less dependent on the Consul leader while the cluster re-establishes itself (automatically or manually).

To allow reading stale data, consulate should be replaced by python-consul, as the latter supports richer features, including consistency modes.
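At the HTTP API level, the same effect comes from Consul's stale consistency mode, selected with the `?stale` query parameter (python-consul exposes this via its consistency argument). A small URL-builder sketch, assuming a Consul agent reachable at a hostname like `consul`:

```python
def consul_kv_url(key, base="http://consul:8500", stale=True):
    """Build a Consul KV read URL. With `?stale`, any Consul server
    (not only the leader) may answer the read, at the cost of
    possibly slightly out-of-date data."""
    return "{}/v1/kv/{}{}".format(base, key, "?stale" if stale else "")
```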

Upgrade Data Layer

The process of upgrading from an old version of Gluu Server to a new one generally requires some changes to the data layer. Simply switching the containers from 3.1.2 to 3.1.3 isn't painless, as sometimes a new schema is necessary, or additional attributes are added to certain types of entries.

These examples can all be seen here. For 3.1.* it's mostly schema changes, since changing oxTrust or oxAuth from 3.1.2 to 3.1.3 will handle the war file upgrades.

Going forward, we have to have a process by which customers can relatively easily upgrade the data layer to work with newer versions in the future.

Persistence Volumes

For the sake of supportability, we need a few standard persistent volumes in our run calls.

A few for OpenDJ to persist data:

/opt/opendj/config/ 
/opt/opendj/ldif/
/opt/opendj/logs/
/opt/opendj/db/

And volumes for logs:
oxtrust:

/opt/gluu/jetty/identity/logs/

oxauth:

/opt/gluu/jetty/oxauth/logs/

Will post more as I identify them.

Error While Running Docker Standalone run_all.sh

[I] Preparing cluster-wide configuration
[W] Configuration not found in Consul
[I] Creating new configuration, please input the following parameters
Enter Domain:                 example.gluu.org
Enter Country Code:           US
Enter State:                  TX
Enter City:                   Austin
Enter Email:                  [email protected]
Enter Organization:           VDM
Enter Admin/LDAP Password:    gluu1234!
Continue with the above settings? [Y/n]
[I] Deploying containers
[I] Generating configuration for the first time; this may take a moment
Unable to find image 'gluufederation/config-init:3.1.4_dev' locally
3.1.4_dev: Pulling from gluufederation/config-init
4fe2ade4980c: Already exists
6fc58a8d4ae4: Already exists
d3e6d7e9702a: Already exists
eb67688aeb85: Already exists
aab61efeccb1: Pull complete
260654189cfc: Pull complete
5e7a1cf6615d: Pull complete
ae9e0953b636: Pull complete
70dc7a95ec86: Pull complete
04501bdc547c: Pull complete
7dd6c2d11788: Pull complete
d71fe23c725a: Pull complete
9a3072d39a6c: Pull complete
Digest: sha256:134df964c23db6e16324b767c053c4952ecdada49a195e9633512858c30eff9a
Status: Downloaded newer image for gluufederation/config-init:3.1.4_dev
Config backend is ready.
Generating config.
  adding new key 'encoded_salt'
  adding new key 'orgName'
  adding new key 'country_code'
  adding new key 'state'
  adding new key 'city'
  adding new key 'hostname'
  adding new key 'admin_email'
  adding new key 'default_openid_jks_dn_name'
  adding new key 'pairwiseCalculationKey'
  adding new key 'pairwiseCalculationSalt'
  adding new key 'jetty_base'
  adding new key 'ldap_init_host'
  adding new key 'ldap_init_port'
  adding new key 'ldap_port'
  adding new key 'ldaps_port'
  adding new key 'ldap_truststore_pass'
  adding new key 'ldap_type'
  adding new key 'ldap_binddn'
  adding new key 'ldap_site_binddn'
  adding new key 'ldapTrustStoreFn'
Traceback (most recent call last):
  File "./scripts/entrypoint.py", line 965, in <module>
    cli()
  File "/usr/lib/python2.7/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/lib/python2.7/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/lib/python2.7/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "./scripts/entrypoint.py", line 859, in generate
    inum_appliance)
  File "./scripts/entrypoint.py", line 233, in generate_config
    cfg["city"],
  File "./scripts/entrypoint.py", line 728, in generate_ssl_certkey
    "-subj /C='{}'/ST='{}'/L='{}'/O='{}'/CN='{}'/emailAddress='{}'".format(country_code, state, city, org_name, domain, email),
UnicodeEncodeError: 'ascii' codec can't encode character u'\ufffd' in position 8: ordinal not in range(128)

The issue seems to be caused by the email address containing too many dots: [email protected]
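A defensive fix is to normalize user input to ASCII before interpolating it into the openssl -subj string; under Python 2, mixing a unicode replacement character (u'\ufffd') into a byte string triggers exactly this UnicodeEncodeError. A sketch (the helper name is illustrative):

```python
def to_ascii(text):
    """Decode bytes as UTF-8 and drop any character that cannot be
    represented in ASCII, so openssl subject fields stay safe."""
    if isinstance(text, bytes):
        text = text.decode("utf-8", "replace")
    return text.encode("ascii", "ignore").decode("ascii")
```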

Dockerize Gluu aside Docker Edition

Hi,

I've been testing Docker Edition and it is great, but is it possible to run Gluu Server CE inside a Docker container?
The Docker images in this repo seem to be designed to be part of Docker Edition. Am I wrong? Any help?

Implement Redis Cluster for HA cache data

As Jedis in oxAuth/oxTrust has support for Redis Cluster, it's worth deploying Redis Cluster.

The architecture will be:

  1. Deploy 3 redis containers on each Swarm node.
  2. Run a one-time command to create the redis cluster using redis-trib.rb. This will set up a cluster with 1 master and 2 slaves per Swarm node.
  3. When only 1 Swarm node is available, a manual failover is needed to ensure there are 3 redis masters in the cluster.
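For background, Redis Cluster shards keys across 16384 hash slots using CRC16 (the XMODEM/CCITT variant); a sketch of the slot calculation, ignoring hash tags (`{...}`) for brevity:

```python
def crc16_xmodem(data):
    """CRC-16/XMODEM: polynomial 0x1021, initial value 0, as used by
    Redis Cluster for key distribution."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key):
    """Hash slot (0-16383) that Redis Cluster assigns to a key."""
    return crc16_xmodem(key.encode()) % 16384
```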
