bitnami / containers
Bitnami container images
Home Page: https://bitnami.com
License: Other
docker.io/bitnami/phpbb:3.3.8
Hi,
I'm running a phpBB (bitnami/phpbb:3.3.8) instance with (bitnami/mariadb:10.7.4) database.
I've recently updated my docker compose file to upgrade from 3.3.7 to 3.3.8.
Now when I try accessing the phpBB instance, the compose logs give me this error:
phpbb | [Tue Jun 28 09:24:59.475985 2022] [php:error] [pid 220] [client 127.0.0.1:59308] PHP Fatal error: Uncaught Twig\Error\LoaderError: Unable to find template "cron.html" (looked into: /bitnami/phpbb/styles/prosilver/template, /bitnami/phpbb/styles/prosilver/theme, /bitnami/phpbb/styles/all/template). in /opt/bitnami/phpbb/vendor/twig/twig/src/Loader/FilesystemLoader.php:250
Stack trace:
#0 /opt/bitnami/phpbb/phpbb/template/twig/loader.php(135): Twig\Loader\FilesystemLoader->findTemplate()
#1 /opt/bitnami/phpbb/vendor/twig/twig/src/Loader/FilesystemLoader.php(150): phpbb\template\twig\loader->findTemplate()
#2 /opt/bitnami/phpbb/vendor/twig/twig/src/Environment.php(299): Twig\Loader\FilesystemLoader->getCacheKey()
#3 /opt/bitnami/phpbb/vendor/twig/twig/src/Environment.php(381): Twig\Environment->getTemplateClass()
#4 /opt/bitnami/phpbb/phpbb/template/twig/environment.php(277): Twig\Environment->loadTemplate()
#5 /opt/bitnami/phpbb/vendor/twig/twig/src/Environment.php(359): phpbb\template\twig\environment->loadTemplate()
#6 /opt/bitnami/phpbb/vendor/twig/twig/src/Environment.php(318): Twig\Environment->load()
#7 /opt/bitnami/phpbb/phpbb/template/twig/environment.php(224): Twig\Environment->render()
#8 /opt/bitnami/phpbb/phpbb/template/twig/environment.php(186): phpbb\template\twig\environment->display_with_assets()
#9 /opt/bitnami/phpbb/phpbb/template/twig/twig.php(335): phpbb\template\twig\environment->render()
#10 /opt/bitnami/phpbb/phpbb/cron/task/wrapper.php(129): phpbb\template\twig\twig->assign_display()
#11 /opt/bitnami/phpbb/phpbb/controller/helper.php(366): phpbb\cron\task\wrapper->get_html_tag()
#12 /opt/bitnami/phpbb/phpbb/controller/helper.php(315): phpbb\controller\helper->set_cron_task()
#13 /opt/bitnami/phpbb/includes/functions.php(4276): phpbb\controller\helper->display_footer()
#14 /opt/bitnami/phpbb/index.php(253): page_footer()
#15 {main}
  thrown in /opt/bitnami/phpbb/vendor/twig/twig/src/Loader/FilesystemLoader.php on line 250
phpbb2 | 127.0.0.1 - - [28/Jun/2022:09:24:59 +0200] "GET / HTTP/1.1" 500 -
I can't find any cron.html file.
Regards.
Expected: the upgrade from 3.3.7 to 3.3.8 completes correctly.
Actual: a blank page with a 404 error.
Reverting to version 3.3.7 now shows version 3.3.8 in the ACP and gives me back access to the board.
I'm wondering if this is a problem with the prosilver theme?
bitnami-docker-spark/3.3/debian-11/Dockerfile
I am looking to upgrade to the Spark image with the latest version of Spark, i.e. 3.3.0. I found the Dockerfile, which has the latest versions of Spark, Java, Gosu, etc., but the Python version is downgraded to 3.8.13. Is it possible to provide a new Dockerfile with the latest version of Python?
Requesting a Dockerfile with the latest version of Python, i.e. 3.9.*.
Currently Python is at the older 3.8 version.
I am relatively new to Airflow, Docker, and Bitnami but I am having trouble getting pyodbc to be installed on the bitnami airflow containers. I want to be able to use Airflow on Azure for work projects so that's how I found out about the bitnami-docker-airflow project.
I have followed the directions from this page: https://github.com/bitnami/bitnami-docker-airflow/blob/master/README.md
I started with the curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-airflow/master/docker-compose.yml > docker-compose.yml
and got the docker-compose.yaml file in my documents folder. I then went in to make some changes to mount a local folder with DAG files I wanted to use and mounted another folder that had a requirements.txt file in it.
The lines below in the docker-compose.yml are the ones I added for mounting; ./dags and ./packages are the folders that the DAG .py and requirements.txt files are in. My docker-compose file will be attached.
The requirements.txt has the text pyodbc===4.0.30
as the only content in the file.
I run docker-compose up
to get the containers up and running.
The output shows me that the pyodbc install is failing, but I can't figure out the source of the error or what could fix it. I will attach the copied output that shows the error; I'm having a hard time interpreting it. I have tried the docker-compose file without mounting the requirements file, and Airflow starts up and I can see my DAGs at localhost:8080 as expected. However, I want to be able to use pyodbc in my DAGs.
Please let me know if I can provide more context
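A common cause of a failing pyodbc install on Debian-based images is missing build prerequisites: pyodbc compiles a C extension, which needs a compiler and the unixODBC headers. A minimal sketch of a custom image that preinstalls them, assuming the Bitnami Airflow base layout (`install_packages` is the apt wrapper that ships in Bitnami images; plain `apt-get install` works too):

```dockerfile
FROM bitnami/airflow:2
USER root
# pyodbc builds a C extension: it needs a compiler and the unixODBC dev headers
RUN install_packages build-essential unixodbc-dev
# Preinstall the pinned driver so the boot-time requirements step has nothing to compile
RUN pip install pyodbc==4.0.30
USER 1001
```

This is a sketch, not the documented path; paths and helper names may differ between image versions.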
bitnami/spark:3
Create a custom Spark container image with additional JAR files.
The attached Dockerfile has hadoop-azure-3.3.1.jar, azure-storage-8.6.6.jar, and dependencies:
Dockerfile.spark.txt
Run the produced custom Docker image using a command similar to the one below:
docker run --rm -it <custom_image_from_above_step>
pyspark --packages org.apache.hadoop:hadoop-azure:3.3.1,com.microsoft.azure:azure-storage:8.6.6
The packages org.apache.hadoop:hadoop-azure:3.3.1 and com.microsoft.azure:azure-storage:8.6.6 must be loaded.
Python 3.8.13 (default, Apr 11 2022, 12:27:15)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
:: loading settings :: url = jar:file:/opt/bitnami/spark/jars/ivy-2.5.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: /opt/bitnami/spark/.ivy2/cache
The jars for the packages stored in: /opt/bitnami/spark/.ivy2/jars
org.apache.hadoop#hadoop-azure added as a dependency
com.microsoft.azure#azure-storage added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-0aaec6b9-5d40-4793-b19d-35af6d4d7168;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-0aaec6b9-5d40-4793-b19d-35af6d4d7168-1.0.xml (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:71)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:63)
at org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor.toIvyFile(DefaultModuleDescriptor.java:553)
at org.apache.ivy.core.cache.DefaultResolutionCacheManager.saveResolvedModuleDescriptor(DefaultResolutionCacheManager.java:183)
at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:259)
at org.apache.ivy.Ivy.resolve(Ivy.java:522)
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1445)
at org.apache.spark.util.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:185)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:308)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:898)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Traceback (most recent call last):
File "/opt/bitnami/spark/python/pyspark/shell.py", line 35, in <module>
SparkContext._ensure_initialized() # type: ignore
File "/opt/bitnami/spark/python/pyspark/context.py", line 339, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "/opt/bitnami/spark/python/pyspark/java_gateway.py", line 108, in launch_gateway
raise RuntimeError("Java gateway process exited before sending its port number")
RuntimeError: Java gateway process exited before sending its port number
FROM bitnami/spark:3
USER root
# Download hadoop-azure, azure-storage, and dependencies (See above)
RUN curl https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-azure/3.3.1/hadoop-azure-3.3.1.jar --output /opt/bitnami/spark/jars/hadoop-azure-3.3.1.jar
RUN curl https://repo1.maven.org/maven2/com/microsoft/azure/azure-storage/8.6.6/azure-storage-8.6.6.jar --output /opt/bitnami/spark/jars/azure-storage-8.6.6.jar
RUN curl https://repo1.maven.org/maven2/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar --output /opt/bitnami/spark/jars/httpclient-4.5.13.jar
RUN curl https://repo1.maven.org/maven2/org/apache/hadoop/thirdparty/hadoop-shaded-guava/1.1.1/hadoop-shaded-guava-1.1.1.jar --output /opt/bitnami/spark/jars/hadoop-shaded-guava-1.1.1.jar
RUN curl https://repo1.maven.org/maven2/org/eclipse/jetty/jetty-util-ajax/9.4.40.v20210413/jetty-util-ajax-9.4.40.v20210413.jar --output /opt/bitnami/spark/jars/jetty-util-ajax-9.4.40.v20210413.jar
RUN curl https://repo1.maven.org/maven2/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar --output /opt/bitnami/spark/jars/jackson-mapper-asl-1.9.13.jar
RUN curl https://repo1.maven.org/maven2/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar --output /opt/bitnami/spark/jars/jackson-core-asl-1.9.13.jar
RUN curl https://repo1.maven.org/maven2/org/wildfly/openssl/wildfly-openssl/1.0.7.Final/wildfly-openssl-1.0.7.Final.jar --output /opt/bitnami/spark/jars/wildfly-openssl-1.0.7.Final.jar
RUN curl https://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.4.13/httpcore-4.4.13.jar --output /opt/bitnami/spark/jars/httpcore-4.4.13.jar
RUN curl https://repo1.maven.org/maven2/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar --output /opt/bitnami/spark/jars/commons-logging-1.1.3.jar
RUN curl https://repo1.maven.org/maven2/commons-codec/commons-codec/1.11/commons-codec-1.11.jar --output /opt/bitnami/spark/jars/commons-codec-1.11.jar
RUN curl https://repo1.maven.org/maven2/org/eclipse/jetty/jetty-util/9.4.40.v20210413/jetty-util-9.4.40.v20210413.jar --output /opt/bitnami/spark/jars/jetty-util-9.4.40.v20210413.jar
RUN curl https://repo1.maven.org/maven2/com/fasterxml/jackson/core/jackson-core/2.9.4/jackson-core-2.9.4.jar --output /opt/bitnami/spark/jars/jackson-core-2.9.4.jar
RUN curl https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.12/slf4j-api-1.7.12.jar --output /opt/bitnami/spark/jars/slf4j-api-1.7.12.jar
RUN curl https://repo1.maven.org/maven2/org/apache/commons/commons-lang3/3.4/commons-lang3-3.4.jar --output /opt/bitnami/spark/jars/commons-lang3-3.4.jar
RUN curl https://repo1.maven.org/maven2/com/microsoft/azure/azure-keyvault-core/1.2.4/azure-keyvault-core-1.2.4.jar --output /opt/bitnami/spark/jars/azure-keyvault-core-1.2.4.jar
RUN curl https://repo1.maven.org/maven2/com/google/guava/guava/24.1.1-jre/guava-24.1.1-jre.jar --output /opt/bitnami/spark/jars/guava-24.1.1-jre.jar
RUN curl https://repo1.maven.org/maven2/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar --output /opt/bitnami/spark/jars/jsr305-1.3.9.jar
RUN curl https://repo1.maven.org/maven2/org/checkerframework/checker-compat-qual/2.0.0/checker-compat-qual-2.0.0.jar --output /opt/bitnami/spark/jars/checker-compat-qual-2.0.0.jar
RUN curl https://repo1.maven.org/maven2/com/google/errorprone/error_prone_annotations/2.1.3/error_prone_annotations-2.1.3.jar --output /opt/bitnami/spark/jars/error_prone_annotations-2.1.3.jar
RUN curl https://repo1.maven.org/maven2/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.jar --output /opt/bitnami/spark/jars/j2objc-annotations-1.1.jar
RUN curl https://repo1.maven.org/maven2/org/codehaus/mojo/animal-sniffer-annotations/1.14/animal-sniffer-annotations-1.14.jar --output /opt/bitnami/spark/jars/animal-sniffer-annotations-1.14.jar
ENV BITNAMI_APP_NAME=spark BITNAMI_IMAGE_VERSION=3.0.0-debian-10-r48 JAVA_HOME=/opt/bitnami/java LD_LIBRARY_PATH=/opt/bitnami/python/lib/:/opt/bitnami/spark/venv/lib/python3.6/site-packages/numpy.libs/: LIBNSS_WRAPPER_PATH=/opt/bitnami/common/lib/libnss_wrapper.so NSS_WRAPPER_GROUP=/opt/bitnami/spark/tmp/nss_group NSS_WRAPPER_PASSWD=/opt/bitnami/spark/tmp/nss_passwd SPARK_HOME=/opt/bitnami/spark
WORKDIR /opt/bitnami/spark
USER 1001
ENTRYPOINT ["/opt/bitnami/scripts/spark/entrypoint.sh"]
CMD ["/opt/bitnami/scripts/spark/run.sh"]
bitnami/wordpress-nginx:6
I mounted the wp-content/plugins and wp-content/themes folders and ran docker compose up:
cp: cannot create regular file '/bitnami/wordpress/wp-content/index.php': Permission denied
cp: cannot create directory '/bitnami/wordpress/wp-content/languages': Permission denied
cp: cannot create directory '/bitnami/wordpress/wp-content/upgrade': Permission denied
cp: cannot create directory '/bitnami/wordpress/wp-content/uploads': Permission denied
Expected behavior is for the installation to work and complete successfully
The mapped volumes are owned by root:
drwxr-xr-x 4 root root 4096 Jun 27 11:02 wp-content
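The Bitnami containers run as a non-root user (UID 1001), so host directories mapped into /bitnami/wordpress must be writable by that UID. A hedged host-side fix, assuming ./wp-content is the directory backing the mapped volume:

```shell
# Bitnami images run as UID 1001; root-owned volume directories cause the
# "Permission denied" cp errors above. Hand them to that UID on the host:
sudo chown -R 1001:1001 ./wp-content
```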
There is something wrong with this image. I'm still investigating the performance issue.
First let's run a simple benchmark with the ruby:2.7 image:
root@localhost:~# docker run -it --rm ruby:2.7 ruby -ve 't = Time.now; i=0;while i<100_000_000;i+=1;end; puts "#{ Time.now - t } sec"'
ruby 2.7.5p203 (2021-11-24 revision f69aeb8314) [x86_64-linux]
2.271846678 sec
Then compare it with the bitnami/ruby:2.7-debian-10 image:
root@localhost:~# docker run -it --rm bitnami/ruby:2.7-debian-10 ruby -ve 't = Time.now; i=0;while i<100_000_000;i+=1;end; puts "#{ Time.now - t } sec"'
ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]
15.051291622 sec
I think this could be related to the seccomp profile or a misconfiguration of the base Debian image. Please see these for more information:
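One way to test the seccomp hypothesis is to rerun the same benchmark with the profile disabled (diagnostic only, since it weakens container isolation):

```shell
# Diagnostic only: run the Bitnami image without a seccomp profile and
# compare the timing against the runs above
docker run -it --rm --security-opt seccomp=unconfined \
  bitnami/ruby:2.7-debian-10 \
  ruby -ve 't = Time.now; i=0; while i<100_000_000; i+=1; end; puts "#{Time.now - t} sec"'
```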
For security reasons, the image should be able to run with the Docker/Kubernetes read-only root filesystem option, with only the directories that need to be written to mounted as volumes.
At the moment this is not possible, which leaves the Keycloak container open to attack vectors.
Steps to reproduce the issue:
docker: append --read-only --volume <volume-name>:/mount/path to the start command
docker-compose: add read_only: true to the compose file
kubernetes: add to the StatefulSet container spec:
securityContext:
  readOnlyRootFilesystem: true
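For reference, a sketch of the compose variant; the tmpfs and volume paths are assumptions, since the exact directories Keycloak needs writable would have to be determined from the image:

```yaml
services:
  keycloak:
    image: bitnami/keycloak:latest
    read_only: true              # read-only root filesystem
    tmpfs:
      - /tmp                     # assumed writable path
    volumes:
      - keycloak-data:/opt/bitnami/keycloak/data   # assumed writable path
volumes:
  keycloak-data:
```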
The following command from README.md :
docker run --detach --rm --name test-openldap --network my-network --env LDAP_ADMIN_USERNAME=admin --env LDAP_ADMIN_PASSWORD=adminpassword --env LDAP_USERS=customuser --env LDAP_PASSWORDS=custompassword bitnami/openldap:latest
triggers this error:
# docker logs -f test-openldap
13:48:24.30 INFO ==> ** Starting LDAP setup **
13:48:24.36 INFO ==> Validating settings in LDAP_* env vars
13:48:24.37 INFO ==> Initializing OpenLDAP...
13:48:24.40 INFO ==> Creating LDAP online configuration
13:48:24.42 INFO ==> Starting OpenLDAP server in background
13:48:43.59 INFO ==> Configure LDAP credentials for admin user
13:48:45.05 INFO ==> Adding LDAP extra schemas
13:48:45.74 INFO ==> Creating LDAP default tree
13:48:46.70 INFO ==> ** LDAP setup finished! **
13:48:46.74 INFO ==> ** Starting slapd **
5ee2363e @(#) $OpenLDAP: slapd 2.4.50 (May 4 2020 16:17:50) $
@5fb3c780904c:/bitnami/blacksmith-sandox/openldap-2.4.50/servers/slapd
5ee23655 hdb_db_open: database "dc=example,dc=org": database already in use.
5ee23655 backend_startup_one (type=hdb, suffix="dc=example,dc=org"): bi_db_open failed! (-1)
5ee23655 slapd stopped.
Description
External Elasticsearch plugins install successfully, but are misnamed when copied to plugin.mandatory, causing each node to fail to start with an exception.
Steps to reproduce the issue:
docker run --name test -e ELASTICSEARCH_PLUGINS=http://es-learn-to-rank.labs.o19s.com/ltr-plugin-v1.5.1-es7.9.2.zip -d bitnami/elasticsearch:7.9.2-debian-10-r0
Describe the results you received:
Fails to start with exception:
...
uncaught exception in thread [main]
java.lang.IllegalStateException: missing mandatory plugins [ltr-plugin-v1.5.1-es7.9.2], found plugins [ltr, repository-s3]
at org.elasticsearch.plugins.PluginsService.<init>(PluginsService.java:186)
Describe the results you expected:
Bitnami populates the plugin.mandatory value from ELASTICSEARCH_PLUGINS, which prevents a search node from starting if all plugins are not successfully installed. This works well with default (included) plugins, which Elasticsearch installs by plugin name. However, when using an external plugin resource (a zip file over HTTP or on the local filesystem), even though the plugin is installed correctly, the plugin name used to populate plugin.mandatory is not correctly derived from the plugin resource.
In the example to reproduce (above), the plugin (named ltr) is installed correctly; however, the value copied to plugin.mandatory is ltr-plugin-v1.5.1-es7.9.2, which fails the startup check, since it is not also ltr.
Additional information you deem important (e.g. issue happens only occasionally):
There is an attempt to derive the "plugin name" from the downloaded filename here: https://github.com/bitnami/bitnami-docker-elasticsearch/blob/master/7/debian-10/rootfs/opt/bitnami/scripts/libelasticsearch.sh#L500
However, there is no such plugin file-naming convention required by Elastic. Instead, there is a plugin-descriptor.properties file inside every external Elasticsearch plugin archive, containing a required name property, which should be used instead.
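The fix sketched below reads the mandatory name property from plugin-descriptor.properties instead of parsing the download filename. The descriptor file here is fabricated for illustration; a real implementation would extract it from the plugin zip first (e.g. with unzip -p):

```shell
# Fabricated descriptor, mimicking the one shipped inside an ES plugin archive
printf 'description=Learning to Rank\nversion=1.5.1\nname=ltr\n' > plugin-descriptor.properties

# Derive the plugin name from the descriptor's required `name` property,
# not from the zip filename:
plugin_name=$(sed -n 's/^name=//p' plugin-descriptor.properties)
echo "$plugin_name"
```

For the LTR plugin above this yields ltr, matching the name Elasticsearch actually registers.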
Version
docker version:
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:22:34 2019
OS/Arch: darwin/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:29:19 2019
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: v1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
docker info:
Client:
Debug Mode: false
Server:
Containers: 19
Running: 0
Paused: 0
Stopped: 19
Images: 32
Server Version: 19.03.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.19.76-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 2.924GiB
Name: docker-desktop
ID: CWGO:3SKD:BICU:JSQL:7RFU:MVGE:UPLF:EQVJ:VJSP:GCBO:NTHO:FQQY
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 35
Goroutines: 52
System Time: 2020-10-23T17:13:45.597921481Z
EventsListeners: 3
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
bitnami/mongodb:latest
I realized why my connection to MongoDB was returning error thanks to Studio3t.
|_/ Connection error (MongoSocketWriteException): Exception sending message
|____/ SSL error: No subject alternative names present
|_______/ Certificate error: No subject alternative names present
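The error means the server certificate carries no subjectAltName extension; modern drivers validate hostnames against SANs rather than the CN. A minimal sketch of generating a self-signed certificate that includes SANs (the hostnames and IP are placeholders; -addext needs OpenSSL 1.1.1+):

```shell
# Self-signed certificate with subjectAltName entries; "mongo" and
# 127.0.0.1 are placeholder identities -- substitute the real ones.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=mongo" \
  -addext "subjectAltName=DNS:mongo,DNS:localhost,IP:127.0.0.1" \
  -keyout mongo-key.pem -out mongo-cert.pem

# Confirm the SAN extension made it into the certificate:
openssl x509 -in mongo-cert.pem -noout -text | grep -A1 'Subject Alternative Name'
```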
Please update the documentation; it's misleading for MongoDB 4.1+ users.
bitnami/wordpress:6
With the following in docker-compose.yaml:
WORDPRESS_ENABLE_HTACCESS_PERSISTENCE=yes
WORDPRESS_HTACCESS_OVERRIDE_NONE=no (or yes)
./wordpress_data:/bitnami/wordpress
Run docker-compose up -d, edit ./wordpress_data/.htaccess or ./wordpress_data/wordpress-htaccess.conf, then run docker-compose down && docker-compose up -d.
Expected: .htaccess should remain as edited when restarting the container.
Actual: .htaccess gets overwritten with a blank file.
I've also tried mounting a local .htaccess file (./public/.htaccess:/opt/bitnami/wordpress/.htaccess), but the container starts and then hangs because it tries and fails to remove /opt/bitnami/wordpress/.htaccess with an error that it is locked or busy, or for some reason: cp: 'wp-config.php' and '/bitnami/wordpress/wp-config.php' are the same file
The docs don't seem to be clear about what to mount in order for the .htaccess to be persisted, but mounting /bitnami/wordpress doesn't appear to work.
bitnami/kafka:3.2
docker-compose.yaml
version: '3'
services:
  zookeeper:
    container_name: zookeeper
    image: docker.io/bitnami/zookeeper:3.8
    ports:
      - "2181:2181"
    environment:
      - ZOO_ENABLE_AUTH=yes
      - ZOO_SERVER_USERS=user
      - ZOO_SERVER_PASSWORDS=password
      - ZOO_CLIENT_USER=user
      - ZOO_CLIENT_PASSWORD=password
  kafka:
    container_name: kafka
    image: docker.io/bitnami/kafka:3.2
    ports:
      - "9093:9093"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=false
      - KAFKA_ZOOKEEPER_PROTOCOL=SASL
      - KAFKA_ZOOKEEPER_USER=user
      - KAFKA_ZOOKEEPER_PASSWORD=password
      - KAFKA_CLIENT_USERS=user
      - KAFKA_CLIENT_PASSWORDS=password
      - KAFKA_INTER_BROKER_USER=user
      - KAFKA_INTER_BROKER_PASSWORD=password
      - KAFKA_CFG_LISTENERS=INTERNAL://:9092,EXTERNAL://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
      - KAFKA_CFG_ADVERTISED_LISTENERS=INTERNAL://kafka:9092,EXTERNAL://localhost:9093
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INTERNAL
      - KAFKA_CFG_SASL_ENABLED_MECHANISMS=SCRAM-SHA-256
      - KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL=SCRAM-SHA-256
      - KAFKA_CFG_SECURITY_INTER_BROKER_PROTOCOL=SASL_PLAINTEXT
    volumes:
      - bitnami-config:/opt/bitnami/kafka/config
    depends_on:
      - zookeeper
  kafka_init:
    container_name: kafka_init
    image: docker.io/bitnami/kafka:3.2
    command:
      - /opt/bitnami/kafka/bin/kafka-topics.sh
      - --create
      - --bootstrap-server
      - kafka:9092
      - --topic
      - my-topic
      - --partitions
      - "1"
      - --replication-factor
      - "1"
    depends_on:
      - kafka
    environment:
      - KAFKA_OPTS=-Djava.security.auth.login.config=/opt/bitnami/kafka/config/kafka_jaas.conf
    volumes:
      - bitnami-config:/opt/bitnami/kafka/config:ro
volumes:
  bitnami-config:
Expected: the topic gets created.
Actual: the kafka_init container fails to create the topic, and the kafka container log shows lots of these:
[2022-07-21 10:26:27,096] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1002] Failed authentication with /172.25.0.4 (channelId=172.25.0.3:9092-172.25.0.4:46096-58) (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
$ cat /opt/bitnami/kafka/config/kafka_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="user"
  password="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="user"
  password="password";
};
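That "Unexpected Kafka request of type METADATA during SASL handshake" message typically means a client connected to the SASL listener without authenticating. A sketch of the client settings the kafka_init command would need, matching the SCRAM credentials above (the file name client.properties is arbitrary):

```properties
# client.properties -- SASL client settings for the broker's SCRAM listener
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="user" \
  password="password";
```

It would then be passed to the topic tool via kafka-topics.sh --command-config client.properties, in addition to (or instead of) the KAFKA_OPTS JAAS file.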
Hi,
First thanks for this high quality container!
I have a very similar issue to the one described here. In short, I'd like to set a ppolicy that forces password hashing even when passwords are sent in clear text by the user, and the client does not respect the LDAP password modify extended operation.
In the posted link they solve it by adding the olcPPolicyHashCleartext attribute and setting it to true.
I've naively tried to make a similar modification as described in the last comment but failed (I'm not an LDAP expert); probably the DN is different. I get this error:
$ ldapadd -Q -Y EXTERNAL -H ldapi:/// -f _file.ldif
modifying entry "cn=module,cn=config"
ldap_modify: No such object (32)
matched DN: cn=config
Having a flag to turn on this feature would be very nice. Do you have any suggestions on how to achieve this?
Thanks!
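For reference, a hedged LDIF sketch of what enabling this usually looks like under cn=config; the module entry (cn=module{0}) and the database DN (olcDatabase={2}hdb here) vary by deployment and should be confirmed with ldapsearch -b cn=config first:

```ldif
# 1) Load the ppolicy overlay module (entry name is deployment-specific)
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: ppolicy.la

# 2) Attach the overlay to the database and force hashing of cleartext passwords
dn: olcOverlay=ppolicy,olcDatabase={2}hdb,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcPPolicyConfig
olcPPolicyHashCleartext: TRUE
```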
bitnami/spark:*
Makes it easy to view logs, optimize code, etc.
Reference documentation: https://spark.apache.org/docs/latest/monitoring.html#viewing-after-the-fact
You can use this command: spark-class org.apache.spark.deploy.history.HistoryServer
Alternatively, I can customize the HistoryServer myself using apache/spark.
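As a sketch of how this can already be run with the current image (the event-log directory and shared volume are assumptions, and this presumes the entrypoint passes the command through; Spark jobs would need to write their event logs to the same location):

```shell
# Run the History Server from the existing image on port 18080.
# /tmp/spark-events is an assumed event-log directory shared with the jobs.
docker run --rm -p 18080:18080 \
  -v spark-events:/tmp/spark-events \
  -e SPARK_HISTORY_OPTS="-Dspark.history.fs.logDirectory=/tmp/spark-events" \
  bitnami/spark:3 \
  spark-class org.apache.spark.deploy.history.HistoryServer
```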
bitnami/keycloak:18.0.2-debian-11-r3
Keycloak should start up and pass health checks.
No suitable driver found for jdbc:postgresql://keycloak-postgresql:5432/bitnami_keycloak?currentSchema=public
We rolled back to r0 for the time being, but it looks like the driver was removed in the newer versions?
When starting the bitnami/solr container for the second time with docker-compose, it fails with:
Port 8983 is already being used by another process (pid: 88)
Steps to reproduce the issue:
docker-compose up
docker-compose.yml file used:
version: '3.5'
services:
  bitnami-solr:
    image: bitnami/solr:8
docker-compose up
Describe the results you received:
On the very first startup, the Solr instance starts correctly and the container keeps running.
On the second startup, the Solr process is unable to use the same port again, probably because another instance of Solr is running in the background and using that port.
The Solr instance then exits with exit code 1, and the container does the same. To get a working Solr container again, I have to delete the existing container (i.e. with docker-compose down) and then start again.
Describe the results you expected:
I expect the container image to be set up to allow restarting an existing stopped container.
This problem seems to be caused by the ENTRYPOINT or CMD not properly handling that Solr is already running in the background.
It may also be a mistake that Solr is installed as a background service that starts automatically, since the ENTRYPOINT/CMD is normally supposed to be responsible for starting the "main" process of the container.
Additional information you deem important (e.g. issue happens only occasionally):
I can reproduce the error consistently with this extremely minimal docker-compose file.
Interestingly, sometimes the second startup will succeed and it will then fail on the third startup instead. I think this has to do with how quickly the container is stopped after startup.
Version
docker version:
Client:
Cloud integration: v1.0.20
Version: 20.10.10
API version: 1.41
Go version: go1.16.9
Git commit: b485636
Built: Mon Oct 25 07:47:53 2021
OS/Arch: windows/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.10
API version: 1.41 (minimum version 1.12)
Go version: go1.16.9
Git commit: e2f740d
Built: Mon Oct 25 07:41:30 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.11
GitCommit: 5b46e404f6b9f661a205e28d59c982d3634148f8
runc:
Version: 1.0.2
GitCommit: v1.0.2-0-g52b36a2
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker info:
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Build with BuildKit (Docker Inc., v0.6.3)
compose: Docker Compose (Docker Inc., v2.1.1)
scan: Docker Scan (Docker Inc., 0.9.0)
Server:
Containers: 8
Running: 5
Paused: 0
Stopped: 3
Images: 17
Server Version: 20.10.10
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
Default Runtime: runc
Init Binary: docker-init
containerd version: 5b46e404f6b9f661a205e28d59c982d3634148f8
runc version: v1.0.2-0-g52b36a2
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 5.10.60.1-microsoft-standard-WSL2
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 24.99GiB
Name: docker-desktop
ID: GIQ4:NRXM:3EIE:WBFD:CUTU:CMIW:T7GJ:VCKM:O3I7:IBBK:SJXR:4BM4
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
docker-compose version (if applicable):
Docker Compose version v2.1.1
bitnami/opencart:4.0.0.0
Looking to see when you might support 4.x, to move to the latest version.
In the meantime I'm building my own Docker container.
bitnami/mongodb:4.4.14
We start with a simple docker-compose.yml to start a replica set with a single primary node, and a volume to persist db data:
version: '3.9'
services:
  mongo:
    image: bitnami/mongodb:4.4.14
    container_name: mongo
    ports:
      - 27030:27017
    volumes:
      - mongodata:/bitnami/mongodb
    environment:
      - MONGODB_ROOT_USER=root
      - MONGODB_ROOT_PASSWORD=root
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_REPLICA_SET_KEY=ThisIsAUniqueKeyThatShouldBeChanged123
volumes:
  mongodata:
    driver: local
Running docker-compose up for the first time results in the replica set correctly starting with a single primary node:
...
"msg":"Transition to primary complete; database writes are now permitted"
Now stop the containers (by pressing Ctrl+C in the CLI, or with docker-compose stop if running in detached mode) and run docker-compose down.
Try to restart the container:
docker-compose up
Results in the following output:
..."msg":"This node is not a member of the config"
..."msg":"Replica set state transition","attr":{"newState":"REMOVED","oldState":"STARTUP"}
So it looks like the node did not start back up in primary mode. Please note that when using docker-compose stop and then docker-compose start, it does seem to restart correctly.
I should be able to restart a primary node from volume data.
Unable to restart primary node.
Why should we delete all the users and databases that users created when the cluster is recovered from a non-primary node? I think this is a dangerous approach.
bitnami/redis-cluster:7.0
$ curl -sSL https://raw.githubusercontent.com/bitnami/bitnami-docker-redis-cluster/master/docker-compose.yml > docker-compose.yml
$ docker-compose up -d
Here is the command line to diagnose the cluster nodes
for i in {0..5}; do docker exec -it docker_redis-node-${i}_1 redis-cli cluster nodes ; done
Nominal cluster state on each node (redis-cli cluster nodes):
Sometimes the cluster does not get created properly when the nodes start in parallel (:0@0 master,noaddr, :0@0 slave,noaddr).
This led to exceptions on the ThingsBoard on the Jedis driver and the cause is not obvious to a user.
Caused by: org.springframework.dao.DataAccessResourceFailureException: Cannot obtain connection to node 7ce8a34df194ab76b44cc109a404d94cb6b3ff2as as it is not associated with a hostname!
at org.springframework.data.redis.connection.jedis.JedisClusterConnection$JedisClusterNodeResourceProvider.getConnectionForSpecificNode(JedisClusterConnection.java:968)
at org.springframework.data.redis.connection.jedis.JedisClusterConnection$JedisClusterNodeResourceProvider.getResourceForSpecificNode(JedisClusterConnection.java:943)
at org.springframework.data.redis.connection.jedis.JedisClusterConnection$JedisClusterNodeResourceProvider.getResourceForSpecificNode(JedisClusterConnection.java:900)
at org.springframework.data.redis.connection.ClusterCommandExecutor.executeMultiKeyCommandOnSingleNode(ClusterCommandExecutor.java:312)
at org.springframework.data.redis.connection.ClusterCommandExecutor.lambda$executeMultiKeyCommand$2(ClusterCommandExecutor.java:297)
The problem fires randomly, probably when an unhealthy Redis node has been chosen as the seed node to discover cluster topology.
Before the PR (:0@0 master,noaddr, :0@0 slave,noaddr):
The restart of the cluster does not help.
The issue reproduces quite reliably.
The workaround I came up with is to start the Redis nodes one by one using depends_on.
Here is my PR to ThingsBoard, where the Bitnami Redis cluster in docker-compose is used for black-box tests.
You can find the relevant depends_on lines in the changed files section.
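The workaround above can be sketched as a compose fragment (service names follow the Bitnami example file; treat the exact names as assumptions):

```yaml
# Chain the nodes with depends_on so they start one by one instead of
# in parallel; each node waits for the previous container to start.
services:
  redis-node-0:
    image: docker.io/bitnami/redis-cluster:7.0
  redis-node-1:
    image: docker.io/bitnami/redis-cluster:7.0
    depends_on:
      - redis-node-0
  redis-node-2:
    image: docker.io/bitnami/redis-cluster:7.0
    depends_on:
      - redis-node-1
```

This only serializes container startup; it does not wait for each Redis process to become healthy, but it was enough to avoid the noaddr state in my case.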
bitnami/airflow:latest
When I put my DAG file into /opt/bitnami/airflow/dags, I don't see it in the web UI. I searched the web but nothing worked for me.
No response
I don't see any DAG files.
No response
bitnami/apache:2.4
Container restarts normally
Container restart failed with the logs below:
apache 02:00:53.42 INFO ==> ** Starting Apache **
httpd: Syntax error on line 162 of /opt/bitnami/apache/conf/httpd.conf: Cannot load /usr/lib/apache2/modules/mod_dav_svn.so into server: /usr/lib/x86_64-linux-gnu/libsvn_subr-1.so.1: undefined symbol: apr_crypto_block_cleanup
I had tried:
Nothing helped.
The official documentation states the following:
Due to the value of the setting SecAuditLogType=Concurrent, the ModSecurity log is stored in multiple files inside the directory /var/log/audit. The default Serial value of SecAuditLogType can impact performance.
https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/modsecurity/
It does not seem to be the case in the Bitnami image:
$ docker run --rm -ti --entrypoint= \
quay.io/bitnami/nginx-ingress-controller:1.0.4-debian-10-r8 \
grep -E '^(SecAuditLogType|SecAuditLogStorageDir)' /etc/nginx/modsecurity/modsecurity.conf
SecAuditLogType Serial
In the official image:
$ docker run --rm -ti --entrypoint= \
k8s.gcr.io/ingress-nginx/controller:v1.0.4 \
grep -E '^(SecAuditLogType|SecAuditLogStorageDir)' /etc/nginx/modsecurity/modsecurity.conf
SecAuditLogType Concurrent
SecAuditLogStorageDir /var/log/audit/
The change appears to be made by this script upstream:
https://github.com/kubernetes/ingress-nginx/blob/af7d9581f47113f4e2cfd7fac92ba02ae9cd49f0/images/nginx/rootfs/build.sh#L550-L554
It would be nice to have this discrepancy corrected in both the 0.x and 1.x Bitnami images.
bitnami/keycloak:18.0.2
Simplify the docker-compose (and probably orchestrator) setup when I need to start a Keycloak instance with a realm imported on startup.
I need to import a realm in JSON format when my KC service starts up. Essentially I wanted to run ./kc.sh -cf ... start-dev --import-realms. The --import-realms flag tells KC to check the $KC_HOME/data/import directory for JSON files and import them into its database as realms.
I wanted to solve this problem via the volumes section of docker-compose: mounting my realm.json into /opt/bitnami/keycloak/data/import/realm.json.
The result: the data directory doesn't exist in the Docker image by default. Hence docker-compose created it for me in the container, but its owner was then the root user. As a result, when I tried to log in to KC, KC couldn't create the $KC_HOME/data/tmp directory (permission error).
Create an empty /opt/bitnami/keycloak/data
directory when building the Dockerfile so that it will be owned by the default user.
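Until such a change lands, the proposed fix can be approximated by extending the image yourself; a sketch (the 1001:0 owner follows Bitnami's non-root convention and is an assumption):

```Dockerfile
FROM bitnami/keycloak:18.0.2
USER root
# Pre-create the import directory so a compose bind mount reuses it
# instead of letting the engine create it owned by root.
RUN mkdir -p /opt/bitnami/keycloak/data/import && \
    chown -R 1001:0 /opt/bitnami/keycloak/data
USER 1001
```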
For now, I worked around the problem by:
adding these volumes to the compose file:
volumes:
- './import-realm.sh:/docker-entrypoint-initdb.d/0-import-realm.sh'
- './realm.json:/var/realm.json'
where import-realm.sh
is
#!/bin/bash
cd /opt/bitnami/keycloak
mkdir -p data/import
cp /var/realm.json data/import
But this took quite some time to figure out overall, and it feels like a clunky workaround.
Currently, container images are published in different container registries, with Google Container Registry (GCR) and Docker Hub being the most popular ones.
In order to unify the source of truth, we decided to stop publishing new versions to GCR, keeping Docker Hub as the only official registry at this time.
Helm charts use Docker Hub by default (image.registry: docker.io), which means there is no impact on default deployments.
Please note that Bitnami has been a verified publisher since last year, which means that container images under the Bitnami organization are exempt from rate limiting.
For now, and until further notice, container images already present in the gcr.io/bitnami-containers/ repositories won't be deleted; we simply won't publish new ones there. If at some point we decide to remove this registry and its images, we will update this issue.
bitnami/openldap:2.6
Set your OpenLDAP to listen on the official port 636.
It logs a message about using a privileged port.
It should work without logging that message. There is no reason for this behaviour: the container scripts should be smarter and check whether the privileged port parameter has been overridden.
Error in the container log about privileged port use
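As a workaround, one can keep the container on an unprivileged port and publish 636 on the host; a sketch (the LDAP_LDAPS_PORT_NUMBER variable name is taken from the image README and should be double-checked):

```yaml
services:
  openldap:
    image: bitnami/openldap:2.6
    environment:
      - LDAP_LDAPS_PORT_NUMBER=1636   # container listens on an unprivileged port
    ports:
      - '636:1636'                    # host still exposes the official port
```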
bitnami/mongodb:latest
Hi, I am deploying the bitnami/mongodb:latest container as an Azure Container Instance (ACI) in a group, using an ARM template inside a vNET. The bitnami/mongodb container runs as a non-root user, but Azure file share volume mounts require the Linux container to run as root. Looking at the documentation
(link: https://docs.bitnami.com/tutorials/work-with-non-root-containers)
it states:
If you wish to run a Bitnami non-root container image as a root container image, you can do so by adding the line user: root right after the image: directive in the container's docker-compose.yml
How can I achieve the same using an ARM template, since it doesn't appear to have any property for it?
I've tried various options but nothing has worked for me so far; e.g. one possible solution was to explore the init container option:
(link: https://docs.microsoft.com/en-us/azure/container-instances/container-instances-init-container)
So I tried using an init container to see if I could update the permissions/ownership on the mounted volume (e.g. "chown 1001:1001 -R /bitnami"). There is no "depends_on" option to call, but according to the Microsoft documentation ("Init containers run to completion before the application container or containers start."), the MongoDB container won't be started until the init container has finished. In my testing, however, this approach didn't make any difference.
I also tried a new mount path, i.e. /data/mongoaz instead of the default /data/db, based on some online suggestions related to the mongo container (not the Bitnami image), but it didn't work either, as this comes back to the same permission issue.
Here is the error I'm getting, which I believe is expected in this environment unless we can fix the permissions problem.
{"t":{"$date":"2022-07-25T17:12:03.203+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"storage":{"dbPath":"/data/mongoaz"}}}}
{"t":{"$date":"2022-07-25T17:12:03.286+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=483M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2022-07-25T17:12:04.253+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":1,"message":"[1658769124:253222][4195:0x7fc83c453c80], connection: __posix_open_file, 808: /data/mongoaz/WiredTiger.wt: handle-open: open: Operation not permitted"}}
{"t":{"$date":"2022-07-25T17:12:04.330+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":17,"message":"[1658769124:330802][4195:0x7fc83c453c80], connection: __posix_open_file, 808: /data/mongoaz/WiredTiger.wt: handle-open: open: File exists"}}
{"t":{"$date":"2022-07-25T17:12:04.354+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.1"}}
{"t":{"$date":"2022-07-25T17:12:04.366+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":1,"message":"[1658769124:366555][4195:0x7fc83c453c80], connection: __posix_open_file, 808: /data/mongoaz/WiredTiger.wt: handle-open: open: Operation not permitted"}}
{"t":{"$date":"2022-07-25T17:12:04.438+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":17,"message":"[1658769124:438502][4195:0x7fc83c453c80], connection: __posix_open_file, 808: /data/mongoaz/WiredTiger.wt: handle-open: open: File exists"}}
{"t":{"$date":"2022-07-25T17:12:04.460+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"unexpected file WiredTiger.wt found, renamed to WiredTiger.wt.2"}}
{"t":{"$date":"2022-07-25T17:12:04.473+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"initandlisten","msg":"WiredTiger error","attr":{"error":1,"message":"[1658769124:473025][4195:0x7fc83c453c80], connection: __posix_open_file, 808: /data/mongoaz/WiredTiger.wt: handle-open: open: Operation not permitted"}}
{"t":{"$date":"2022-07-25T17:12:04.480+00:00"},"s":"W", "c":"STORAGE", "id":22347, "ctx":"initandlisten","msg":"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade."}
{"t":{"$date":"2022-07-25T17:12:04.480+00:00"},"s":"F", "c":"STORAGE", "id":28595, "ctx":"initandlisten","msg":"Terminating.","attr":{"reason":"1: Operation not permitted"}}
{"t":{"$date":"2022-07-25T17:12:04.480+00:00"},"s":"F", "c":"-", "id":23091, "ctx":"initandlisten","msg":"Fatal assertion","attr":{"msgid":28595,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp","line":687}}
{"t":{"$date":"2022-07-25T17:12:04.481+00:00"},"s":"F", "c":"-", "id":23092, "ctx":"initandlisten","msg":"\n\n***aborting after fassert() failure\n\n"}
I'm trying to achieve both an Azure file share mounted volume and MongoDB authentication enabled in the same ARM template file; can this be done?
Azure file share mounted with the bitnami/mongodb container running as non-root.
Container is crashing as per the error details above.
I can share my ARM template if needed.
Description
I'm trying to specify a custom pg_hba.conf file for my postgresql-repmgr cluster consisting of two nodes in a streaming replication setup.
Unfortunately I cannot use the bind mount approach from the README as I'm using docker swarm.
Therefore I've tried two other approaches -- both without success:
1. Providing /bitnami/repmgr/conf/pg_hba.conf using the docker swarm config mechanism.
2. Mounting /bitnami/repmgr/conf as a volume with the NFS driver.
Approach 1
The excerpt from my compose file:
[...]
services:
pg-0:
image: bitnami/postgresql-repmgr:12.3.0
[...]
configs:
- source: pg_hba.conf
target: /bitnami/repmgr/conf/pg_hba.conf
uid: "1001"
gid: "0"
mode: 0774
[...]
configs:
pg_hba.conf:
file: pg_hba.conf
[...]
The resulting error message (from my log server):
"2020-05-25T10:58:02.049Z","t460s-dockerswarm-2","[38;5;6mrepmgr �[38;5;5m10:58:02.04 �[0m�[38;5;2mINFO �[0m ==> Preparing PostgreSQL configuration..."
"2020-05-25T10:58:02.155Z","t460s-dockerswarm-2","[38;5;6mpostgresql �[38;5;5m10:58:02.15 �[0m�[38;5;2mINFO �[0m ==> Stopping PostgreSQL..."
"2020-05-25T10:58:02.152Z","t460s-dockerswarm-2","cp: cannot create regular file '/bitnami/postgresql/conf/pg_hba.conf': Permission denied"
"2020-05-25T10:58:02.034Z","t460s-dockerswarm-2","[38;5;6mrepmgr �[38;5;5m10:58:02.03 �[0m�[38;5;2mINFO �[0m ==> There are no nodes with primary role. Assuming the primary role..."
"2020-05-25T10:58:02.056Z","t460s-dockerswarm-2","[38;5;6mpostgresql �[38;5;5m10:58:02.05 �[0m�[38;5;2mINFO �[0m ==> postgresql.conf file not detected. Generating it..."
Notes:
When adding user: root to my services to circumvent issues due to the non-root container, I encounter the same password authentication issue as in approach 2.
Approach 2
The excerpt from my compose file:
[...]
services:
pg-0:
image: bitnami/postgresql-repmgr:12.3.0
[...]
volumes:
- pg-primary-vol:/bitnami/postgresql
- pg-config-vol:/bitnami/repmgr/conf/
[...]
volumes:
pg-primary-vol:
pg-config-vol:
driver: local
driver_opts:
type: "nfs"
o: "nfsvers=4,addr=192.168.137.110,rw"
device: ":/mnt/storage1/postgresql/conf"
[...]
The resulting error message (from docker service logs):
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2 | [2020-05-25 11:33:45] [NOTICE] repmgrd (repmgrd 5.1.0) starting up
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2 | [2020-05-25 11:33:45] [INFO] connecting to database "user=repmgr password=repmgr host=pg-0 dbname=repmgr port=5432 connect_timeout=5"
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2 | [2020-05-25 11:33:45] [DEBUG] connecting to: "user=repmgr password=repmgr connect_timeout=5 dbname=repmgr host=pg-0 port=5432 fallback_application_name=repmgr"
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2 | [2020-05-25 11:33:45] [ERROR] connection to database failed
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2 | [2020-05-25 11:33:45] [DETAIL]
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2 | FATAL: password authentication failed for user "repmgr"
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2 |
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2 | [2020-05-25 11:33:45] [DETAIL] attempted to connect using:
postgres_pg-0.1.akaolj1sa768@t460s-dockerswarm-2 | user=repmgr password=repmgr connect_timeout=5 dbname=repmgr host=pg-0 port=5432 fallback_application_name=repmgr
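For what it's worth, the password failures here are consistent with the custom pg_hba.conf fully replacing the generated one, so it has to keep entries that let the repmgr user authenticate. A minimal sketch (an assumption, not a verified fix; tighten the CIDRs for production):

```
host     all            postgres    0.0.0.0/0    md5
host     repmgr         repmgr      0.0.0.0/0    md5
host     replication    repmgr      0.0.0.0/0    md5
host     all            all         0.0.0.0/0    md5
```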
Notes:
Approach 3
I've taken the example docker-compose.yml file and added this bind mount to both postgres services:
volumes:
- ./conf:/bitnami/repmgr/conf/
Of course I've also set the correct permissions on the host folder and its content.
As long as the folder is empty, everything works fine. As soon as I add the pg_hba.conf, the start of the primary container fails with exit code 2 on the first run:
[...]
pg-0_1 | postgresql 20:17:35.41 INFO ==> Initializing PostgreSQL database...
pg-0_1 | postgresql 20:17:35.41 INFO ==> Custom configuration /opt/bitnami/postgresql/conf/postgresql.conf detected
pg-0_1 | postgresql 20:17:35.42 INFO ==> Custom configuration /opt/bitnami/postgresql/conf/pg_hba.conf detected
pg-0_1 | postgresql 20:17:36.89 INFO ==> Starting PostgreSQL in background...
pg-0_1 | postgresql 20:17:37.03 INFO ==> Changing password of postgres
pg-0_1 | postgresql 20:17:37.05 INFO ==> Stopping PostgreSQL...
postgres_pg-0_1 exited with code 2
On the second and each subsequent run (without deleting my volumes) I get the already known error:
[...]
pg-0_1 | postgresql 20:17:55.19 INFO ==> Deploying PostgreSQL with persisted data...
pg-1_1 | repmgr 20:17:55.20 INFO ==> Preparing repmgr configuration...
pg-1_1 | repmgr 20:17:55.21 INFO ==> Initializing Repmgr...
pg-1_1 | repmgr 20:17:55.22 INFO ==> Waiting for primary node...
pg-0_1 | postgresql 20:17:55.22 INFO ==> Stopping PostgreSQL...
pg-0_1 | postgresql-repmgr 20:17:55.23 INFO ==> ** PostgreSQL with Replication Manager setup finished! **
pg-0_1 |
pg-0_1 | postgresql 20:17:55.30 INFO ==> Starting PostgreSQL in background...
pg-0_1 | postgresql-repmgr 20:17:55.43 INFO ==> ** Starting repmgrd **
pg-0_1 | [2020-05-25 20:17:55] [NOTICE] repmgrd (repmgrd 5.1.0) starting up
pg-0_1 | [2020-05-25 20:17:55] [INFO] connecting to database "user=repmgr password=repmgr host=pg-0 dbname=repmgr port=5432 connect_timeout=5"
pg-0_1 | [2020-05-25 20:17:55] [DEBUG] connecting to: "user=repmgr password=repmgr connect_timeout=5 dbname=repmgr host=pg-0 port=5432 fallback_application_name=repmgr"
pg-0_1 | [2020-05-25 20:17:55] [ERROR] connection to database failed
pg-0_1 | [2020-05-25 20:17:55] [DETAIL]
pg-0_1 | FATAL: password authentication failed for user "repmgr"
pg-0_1 |
pg-0_1 | [2020-05-25 20:17:55] [DETAIL] attempted to connect using:
pg-0_1 | user=repmgr password=repmgr connect_timeout=5 dbname=repmgr host=pg-0 port=5432 fallback_application_name=repmgr
postgres_pg-0_1 exited with code 6
pg-1_1 | postgresql 20:19:56.06 INFO ==> Stopping PostgreSQL...
postgres_pg-1_1 exited with code 1
General steps to reproduce the issues in approach 1 and 2
When modifying and testing my docker-compose.yml file I use these steps to have a clean setup each time:
docker stack rm postgres
docker volume prune on all involved docker nodes
docker stack deploy --compose-file docker-compose.yml postgres
The full docker-compose.yml file used in approaches 1 and 2
(currently approach 2 is active, with the approach 1 elements commented out):
---
version: "3.8"
services:
pg-0:
image: bitnami/postgresql-repmgr:12.3.0
environment:
- POSTGRESQL_PASSWORD_FILE=/run/secrets/postgres_password
- REPMGR_PARTNER_NODES=pg-0,pg-1
- REPMGR_NODE_NAME=pg-0
- REPMGR_NODE_NETWORK_NAME=pg-0
- REPMGR_PRIMARY_HOST=pg-0
- REPMGR_PASSWORD_FILE=/run/secrets/repmgr_password
- REPMGR_LOG_LEVEL=DEBUG
volumes:
- pg-primary-vol:/bitnami/postgresql
- pg-config-vol:/bitnami/repmgr/conf/
- type: tmpfs
target: /dev/shm
tmpfs:
size: 256000000
ports:
- "5432:5432"
networks:
- application-net
deploy:
placement:
constraints:
- node.labels.type == primary
- node.role == worker
#endpoint_mode: dnsrr
configs:
- source: additional-postgresql.conf
target: /bitnami/postgresql/conf/conf.d/additional-postgresql.conf
#- source: pg_hba.conf
# target: /bitnami/repmgr/conf/pg_hba.conf
# uid: "1001"
# gid: "0"
# mode: 0774
secrets:
- postgres_password
- repmgr_password
#logging:
# driver: gelf
# options:
# gelf-address: 'tcp://192.168.137.101:12201'
pg-1:
image: bitnami/postgresql-repmgr:12.3.0
environment:
- POSTGRESQL_PASSWORD_FILE=/run/secrets/postgres_password
- REPMGR_PARTNER_NODES=pg-0,pg-1
- REPMGR_NODE_NAME=pg-1
- REPMGR_NODE_NETWORK_NAME=pg-1
- REPMGR_PRIMARY_HOST=pg-0
- REPMGR_PASSWORD_FILE=/run/secrets/repmgr_password
volumes:
- pg-replica-vol:/bitnami/postgresql
- pg-config-vol:/bitnami/repmgr/conf/
- type: tmpfs
target: /dev/shm
tmpfs:
size: 256000000
ports:
- "5433:5432"
networks:
- application-net
deploy:
placement:
constraints:
- node.labels.type != primary
- node.role == worker
#endpoint_mode: dnsrr
configs:
- source: additional-postgresql.conf
target: /bitnami/postgresql/conf/conf.d/additional-postgresql.conf
#- source: pg_hba.conf
# target: /bitnami/repmgr/conf/pg_hba.conf
# uid: "1001"
# gid: "0"
# mode: 0774
secrets:
- postgres_password
- repmgr_password
#logging:
# driver: gelf
# options:
# gelf-address: 'tcp://192.168.137.101:12201'
networks:
application-net:
driver: overlay
driver_opts:
encrypted: "true"
volumes:
pg-primary-vol:
pg-replica-vol:
pg-config-vol:
driver: local
driver_opts:
type: "nfs"
o: "nfsvers=4,addr=192.168.137.110,rw"
device: ":/mnt/storage1/postgresql/conf"
configs:
additional-postgresql.conf:
file: additional-postgresql.conf
name: additional-postgresql.conf-${ADDITIONAL_POSTGRES_CONF}
pg_hba.conf:
file: pg_hba.conf
name: pg_hba.conf-${PG_HBA_CONF}
secrets:
postgres_password:
external: true
repmgr_password:
external: true
Output of docker version
Client: Docker Engine - Community
Version: 19.03.8
API version: 1.40
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:25:56 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.8
API version: 1.40 (minimum version 1.12)
Go version: go1.12.17
Git commit: afacb8b7f0
Built: Wed Mar 11 01:24:28 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
Output of docker info
Client:
Debug Mode: false
Server:
Containers: 3
Running: 1
Paused: 0
Stopped: 2
Images: 8
Server Version: 19.03.8
Storage Driver: overlay2
Backing Filesystem: <unknown>
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: t1hnu0591w01txen8chuue8y0
Is Manager: true
ClusterID: lv4nsc1znjt3nvuam6sr4jgt7
Managers: 1
Nodes: 3
Default Address Pool: 10.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 192.168.137.101
Manager Addresses:
192.168.137.101:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.19.0-9-amd64
Operating System: Debian GNU/Linux 10 (buster)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.886GiB
Name: t460s-dockerswarm-1
ID: HZD6:A6CE:KFV6:7YME:DOSW:QUPB:NVVH:CZBC:3KWJ:BMHO:EY6S:JMHF
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
Description
A cluster member bootstraps when no host (from clusterAddress=gcomm://...) is set in /etc/hosts, even though we want clusterBootstrap=no.
This can be seen in the output:
2020-07-28 9:25:48 0 [Note] WSREP: Service thread queue flushed.
2020-07-28 9:25:48 0 [Note] WSREP: ####### Assign initial position for certification: 4c4debf7-d0b4-11ea-81f5-9778c5f86c84:12, protocol version: -1
2020-07-28 9:25:48 0 [Note] WSREP: Start replication
2020-07-28 9:25:48 0 [Note] WSREP: Connecting with bootstrap option: 1
2020-07-28 9:25:48 0 [Note] WSREP: Setting GCS initial position to 4c4debf7-d0b4-11ea-81f5-9778c5f86c84:12
2020-07-28 9:25:48 0 [Note] WSREP: protonet asio version 0
2020-07-28 9:25:48 0 [Note] WSREP: Using CRC-32C for message checksums.
2020-07-28 9:25:48 0 [Note] WSREP: backend: asio
2020-07-28 9:25:48 0 [Note] WSREP: gcomm thread scheduling priority set to other:0
2020-07-28 9:25:48 0 [Warning] WSREP: access file(/bitnami/mariadb/data//gvwstate.dat) failed(No such file or directory)
--> bootstrap option: 1 instead of 0
Steps to reproduce the issue:
Have an /etc/hosts without the cluster host entries, like:
root@host2:/opt/secon/docker-containers/mariadb# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 host2
docker-compose.yml:
root@host2:/opt/secon/docker-containers/mariadb# cat docker-compose.yml
version: "3.7"
services:
mariadb:
image: bitnami/mariadb-galera
container_name: mariadb
env_file:
- ./secret-mariadb.env
- /data/docker/mariadb/mariadb.env
network_mode: "host"
ports:
- 3306:3306
- 4567:4567
restart: unless-stopped
volumes:
- /data/docker/mariadb/volumes/bitnami/mariadb:/bitnami/mariadb
(note: network_mode host does not make any difference)
env file1: secret-mariadb.env
root@host2:/opt/secon/docker-containers/mariadb# cat secret-mariadb.env
MARIADB_ROOT_PASSWORD=abcd
MARIADB_USER=user
MARIADB_PASSWORD=userpassword
MARIADB_GALERA_MARIABACKUP_USER=backupser
MARIADB_GALERA_MARIABACKUP_PASSWORD=backuppass
env file2: /data/docker/mariadb/mariadb.env
root@host2:/opt/secon/docker-containers/mariadb# cat /data/docker/mariadb/mariadb.env
MARIADB_GALERA_CLUSTER_ADDRESS=gcomm://198.18.1.1:4567,198.19.121.176:4567,0.0.0.0:4567
MARIADB_GALERA_CLUSTER_NAME=secondb
MARIADB_GALERA_NODE_ADDRESS=198.18.184.74
Describe the results you received:
2020-07-28 9:25:48 0 [Note] WSREP: ####### Assign initial position for certification: 4c4debf7-d0b4-11ea-81f5-9778c5f86c84:12, protocol version: -1
2020-07-28 9:25:48 0 [Note] WSREP: Start replication
2020-07-28 9:25:48 0 [Note] WSREP: Connecting with bootstrap option: 1
2020-07-28 9:25:48 0 [Note] WSREP: Setting GCS initial position to 4c4debf7-d0b4-11ea-81f5-9778c5f86c84:12
2020-07-28 9:25:48 0 [Note] WSREP: protonet asio version 0
2020-07-28 9:25:48 0 [Note] WSREP: Using CRC-32C for message checksums.
2020-07-28 9:25:48 0 [Note] WSREP: backend: asio
2020-07-28 9:25:48 0 [Note] WSREP: gcomm thread scheduling priority set to other:0
Describe the results you expected:
2020-07-28 9:31:01 0 [Note] WSREP: Start replication
2020-07-28 9:31:01 0 [Note] WSREP: Connecting with bootstrap option: 0
2020-07-28 9:31:01 0 [Note] WSREP: Setting GCS initial position to 00000000-0000-0000-0000-000000000000:-1
2020-07-28 9:31:01 0 [Note] WSREP: protonet asio version 0
2020-07-28 9:31:01 0 [Note] WSREP: Using CRC-32C for message checksums.
2020-07-28 9:31:01 0 [Note] WSREP: backend: asio
2020-07-28 9:31:01 0 [Note] WSREP: gcomm thread scheduling priority set to other:0
2020-07-28 9:31:01 0 [Warning] WSREP: access file(/bitnami/mariadb/data//gvwstate.dat) failed(No such file or directory)
Additional information you deem important (e.g. issue happens only occasionally):
added/extended host to /etc/hosts:
root@host2:/opt/secon/docker-containers/mariadb# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 host2
198.19.121.176 host1
Version
docker version:
root@host2:/opt/secon/docker-containers/mariadb# docker version
Client: Docker Engine - Community
Version: 19.03.12
API version: 1.40
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:45:44 2020
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.12
API version: 1.40 (minimum version 1.12)
Go version: go1.13.10
Git commit: 48a66213fe
Built: Mon Jun 22 15:44:15 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
docker info:
root@host2:/opt/secon/docker-containers/mariadb# docker info
Client:
Debug Mode: false
Server:
Containers: 2
Running: 1
Paused: 0
Stopped: 1
Images: 39
Server Version: 19.03.12
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.4.0-42-generic
Operating System: Ubuntu 20.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 58.88GiB
Name: host2
ID: HCLY:7DTL:UWCH:KIWS:FJCR:ZUVR:Z74C:HK4S:I5PL:67PL:XRWE:YNYC
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
WARNING: IPv4 forwarding is disabled
docker-compose version (if applicable):
root@host2:/opt/secon/docker-containers/mariadb# docker-compose version
docker-compose version 1.25.0, build unknown
docker-py version: 4.1.0
CPython version: 3.8.2
OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020
Additional environment details (AWS, VirtualBox, Docker for MAC, physical, etc.):
** suspected bug **
I suspect a bug in the file: /opt/bitnami/scripts/libmariadbgalera.sh
get_galera_cluster_bootstrap_value() {
local clusterBootstrap
local local_ip
local host_ip
# This block evaluates whether the cluster needs to be bootstrapped or not.
# When the node is marked to bootstrap:
# - We want to have bootstrap enabled when executing up to "run.sh" (included), for the first time.
# To do this, we check if the node has already been initialized before with "get_previous_boot".
# - For the second "setup.sh" and "run.sh" calls, it will automatically detect the cluster was already bootstrapped, so it disables it.
# That way, the node will join the existing Galera cluster instead of bootstrapping a new one.
# We disable the bootstrap right after processing environment variables in "run.sh" with "set_previous_boot".
# - Users can force a bootstrap to happen again on a node, by setting the environment variable "MARIADB_GALERA_FORCE_SAFETOBOOTSTRAP".
# When the node is not marked to bootstrap, the node will join an existing cluster.
clusterBootstrap="$DB_GALERA_CLUSTER_BOOTSTRAP"
if is_boolean_yes "$clusterBootstrap"; then
if is_boolean_yes "$(get_previous_boot)"; then
clusterBootstrap="no"
fi
else
local clusterAddress
clusterAddress="$DB_GALERA_CLUSTER_ADDRESS" # clusterAddress=gcomm://198.18.1.1:4567,198.19.121.176:4567,0.0.0.0:4567
if [[ -z "$clusterAddress" ]]; then
clusterBootstrap="yes"
elif [[ -n "$clusterAddress" ]]; then
clusterBootstrap="yes" # culprit will be set here
local_ip=$(hostname -i)
read -r -a hosts <<< "$(tr ',' ' ' <<< "${clusterAddress#*://}")"
if [[ "${#hosts[@]}" -eq "1" ]]; then
read -r -a cluster_ips <<< "$(getent hosts "${hosts[0]}" | awk '{print $1}' | tr '\n' ' ')"
if [[ "${#cluster_ips[@]}" -gt "1" ]] || ( [[ "${#cluster_ips[@]}" -eq "1" ]] && [[ "${cluster_ips[0]}" != "$local_ip" ]] ) ; then
clusterBootstrap="no"
else
clusterBootstrap="yes"
fi
else
for host in "${hosts[@]}"; do
host_ip=$(getent hosts "${host%:*}" | awk '{print $1}')
if [[ -n "$host_ip" ]] && [[ "$host_ip" != "$local_ip" ]]; then
clusterBootstrap="no"
fi
done
fi
fi
fi
echo "$clusterBootstrap"
}
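The culprit branch can be reproduced in isolation. In this sketch (not the Bitnami script itself), resolve() stands in for getent hosts and is stubbed to fail, as it does on a host whose /etc/hosts lacks the peer entries:

```shell
#!/bin/bash
# Minimal reproduction of the multi-host loop above.
resolve() { return 1; }   # stub: no peer in the cluster address resolves

clusterBootstrap="yes"    # culprit: the default before the loop
local_ip="198.18.184.74"
hosts=(198.18.1.1:4567 198.19.121.176:4567 0.0.0.0:4567)
for host in "${hosts[@]}"; do
    host_ip=$(resolve "${host%:*}")
    # host_ip is empty on every iteration, so the flag is never set to "no"
    if [[ -n "$host_ip" ]] && [[ "$host_ip" != "$local_ip" ]]; then
        clusterBootstrap="no"
    fi
done
echo "$clusterBootstrap"   # prints "yes": the node bootstraps a new cluster
```

With the /etc/hosts entries in place, resolution succeeds for at least one peer that is not the local IP, the flag flips to "no", and the node joins the existing cluster instead of bootstrapping a new one.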
I have marked the suspected lines with 'culprit' above.
Description
Our vulnerability scanning tool detects 2 versions of Go within the mariadb docker image 10.4.21-debian-10-r32.
This leads to the same CVE being raised twice, once for each version.
Steps to reproduce the issue:
Launching the scanning tool on bitnami mariadb docker image
Describe the results you received:
Extract from the tool result:
"applications": [
{
"name": "go",
"version": "1.16.7",
"path": "/opt/bitnami/common/bin/gosu"
},
{
"name": "go",
"version": "1.16.6",
"path": "/opt/bitnami/common/bin/ini-file"
}
Describe the results you expected:
Expecting only 1 version of Go installed in the Bitnami mariadb docker image
bitnami/bitnami-docker-redis
I am getting
mkdir: cannot create directory '/opt/bitnami/redis/tmp': Permission denied
I have a limitation that I can't grant root permissions, though.
Can we somehow override this location?
No response
Permission denied
No response
Description
During persistence initialization, if the copy takes too long and is killed by the container startupProbe, then at the next start the wordpress_initialize function does not detect the partially initialized directory.
Steps to reproduce the issue:
Describe the results you received:
Errors on Wordpress due to missing plugins
Describe the results you expected:
Atomicity of wordpress_initialize, for example by creating a .wordpress_initialize
file in the volume at the end of the function and checking for the file's existence to detect a fully initialized persistent volume.
Workaround:
Increase startupProbe failureThreshold
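The proposed sentinel-file pattern might look like this (the function name and marker name are illustrative, not the actual Bitnami init code):

```shell
#!/bin/bash
# Mark the volume only after the copy finishes; a missing marker means
# "not (fully) initialized" and the copy is redone from scratch.
init_wordpress() {
    local volume="$1"
    if [[ -f "$volume/.wordpress_initialize" ]]; then
        echo "already initialized"
        return 0
    fi
    # ... copy the application files here; if the startupProbe kills us
    # before the next line, the marker is absent and we retry cleanly ...
    touch "$volume/.wordpress_initialize"
    echo "initialized"
}
```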
bitnami/redis:6.2
Redis 6 and higher ships with multithreaded I/O for faster performance, which is beneficial in many production environments. Of course I know a custom redis.conf can do this well, but exposing it via env vars would just make it easier.
Add "io-threads" and "io-threads-do-reads" environment variables to redis-env.sh.
Something like docker run -it -e IO_THREADS=4 -e IO_THREADS_DO_READS=yes xxx
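Until such variables exist, the settings can already be supplied through a mounted redis.conf; a sketch (the mounted-etc path is taken from the image README and should be double-checked):

```yaml
services:
  redis:
    image: bitnami/redis:6.2
    volumes:
      # redis.conf containing:  io-threads 4  /  io-threads-do-reads yes
      - ./redis.conf:/opt/bitnami/redis/mounted-etc/redis.conf
```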
No response
bitnami/fluentd:latest
This is an optional dependency of https://github.com/uken/fluent-plugin-elasticsearch#http_backend, but it helps to fix this problem: uken/fluent-plugin-elasticsearch#453 (comment)
I'd also like to mention that the optional oj gem is already included in the image, so why not install the typhoeus gem as well? (It's pretty small.)
Install the typhoeus gem.
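For context, the plugin selects its HTTP client via the http_backend setting described in the linked README; with the gem present in the image, users could opt in with a minimal fluentd config fragment like this (host/port values are illustrative):

```
<match my.logs>
  @type elasticsearch
  host elasticsearch
  port 9200
  http_backend typhoeus
</match>
```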
Description
The master and standby databases are deployed on different servers. Both synchronize data normally, and when the primary node shuts down, the standby node fails over normally. But when the former master node starts again, PostgreSQL cannot start normally.
Steps to reproduce the issue:
Describe the results you received:
log
pam-pgsql-0_1 | postgresql-repmgr 08:43:50.93 INFO ==> ** Starting PostgreSQL with Replication Manager setup **
pam-pgsql-0_1 | postgresql-repmgr 08:43:50.96 INFO ==> Validating settings in REPMGR_* env vars...
pam-pgsql-0_1 | postgresql-repmgr 08:43:50.97 INFO ==> Validating settings in POSTGRESQL_* env vars..
pam-pgsql-0_1 | postgresql-repmgr 08:43:50.98 INFO ==> Querying all partner nodes for common upstream node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.05 INFO ==> Auto-detected primary node: '10.47.154.107:5432'
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.06 INFO ==> Preparing PostgreSQL configuration...
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.07 INFO ==> postgresql.conf file not detected. Generating it...
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.13 INFO ==> Preparing repmgr configuration...
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.13 INFO ==> Initializing Repmgr...
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.14 INFO ==> Waiting for primary node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.17 INFO ==> Cloning data from primary node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.88 INFO ==> Initializing PostgreSQL database...
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.89 INFO ==> Cleaning stale /bitnami/postgresql/data/standby.signal file
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.90 INFO ==> Custom configuration /opt/bitnami/postgresql/conf/postgresql.conf detected
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.90 INFO ==> Custom configuration /opt/bitnami/postgresql/conf/pg_hba.conf detected
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.92 INFO ==> Deploying PostgreSQL with persisted data...
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.94 INFO ==> Configuring replication parameters
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.96 INFO ==> Configuring fsync
pam-pgsql-0_1 | postgresql-repmgr 08:43:51.99 INFO ==> Setting up streaming replication slave...
pam-pgsql-0_1 | postgresql-repmgr 08:43:52.02 INFO ==> Starting PostgreSQL in background...
pam-pgsql-0_1 | postgresql-repmgr 08:43:52.16 INFO ==> Unregistering standby node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:52.28 INFO ==> Registering Standby node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:52.33 INFO ==> Stopping PostgreSQL...
pam-pgsql-0_1 | postgresql-repmgr 08:43:53.35 INFO ==> ** PostgreSQL with Replication Manager setup finished! **
pam-pgsql-0_1 |
pam-pgsql-0_1 | postgresql-repmgr 08:43:53.38 INFO ==> Starting PostgreSQL in background...
pam-pgsql-0_1 | postgresql-repmgr 08:43:53.52 INFO ==> ** Starting repmgrd **
pam-pgsql-0_1 | [2020-10-17 08:43:53] [NOTICE] repmgrd (repmgrd 5.1.0) starting up
pam-pgsql-0_1 | [2020-10-17 08:43:53] [ERROR] PID file "/opt/bitnami/repmgr/tmp/repmgr.pid" exists and seems to contain a valid PID
pam-pgsql-0_1 | [2020-10-17 08:43:53] [HINT] if repmgrd is no longer alive, remove the file and restart repmgrd
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.70
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.70 Welcome to the Bitnami postgresql-repmgr container
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.70 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql-repmgr
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.70 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql-repmgr/issues
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.71
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.72 INFO ==> ** Starting PostgreSQL with Replication Manager setup **
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.75 INFO ==> Validating settings in REPMGR_* env vars...
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.76 INFO ==> Validating settings in POSTGRESQL_* env vars..
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.76 INFO ==> Querying all partner nodes for common upstream node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.83 INFO ==> Auto-detected primary node: '10.47.154.107:5432'
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.84 INFO ==> Preparing PostgreSQL configuration...
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.84 INFO ==> postgresql.conf file not detected. Generating it...
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.91 INFO ==> Preparing repmgr configuration...
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.92 INFO ==> Initializing Repmgr...
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.92 INFO ==> Waiting for primary node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:55.96 INFO ==> Cloning data from primary node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.68 INFO ==> Initializing PostgreSQL database...
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.68 INFO ==> Cleaning stale /bitnami/postgresql/data/standby.signal file
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.69 INFO ==> Custom configuration /opt/bitnami/postgresql/conf/postgresql.conf detected
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.69 INFO ==> Custom configuration /opt/bitnami/postgresql/conf/pg_hba.conf detected
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.71 INFO ==> Deploying PostgreSQL with persisted data...
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.73 INFO ==> Configuring replication parameters
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.75 INFO ==> Configuring fsync
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.77 INFO ==> Setting up streaming replication slave...
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.80 INFO ==> Starting PostgreSQL in background...
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.92 INFO ==> Unregistering standby node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:56.98 INFO ==> Registering Standby node...
pam-pgsql-0_1 | postgresql-repmgr 08:43:57.03 INFO ==> Stopping PostgreSQL...
pam-pgsql-0_1 | postgresql-repmgr 08:43:58.05 INFO ==> ** PostgreSQL with Replication Manager setup finished! **
pam-pgsql-0_1 |
pam-pgsql-0_1 | postgresql-repmgr 08:43:58.07 INFO ==> Starting PostgreSQL in background...
pam-pgsql-0_1 | postgresql-repmgr 08:43:58.20 INFO ==> ** Starting repmgrd **
pam-pgsql-0_1 | [2020-10-17 08:43:58] [NOTICE] repmgrd (repmgrd 5.1.0) starting up
pam-pgsql-0_1 | [2020-10-17 08:43:58] [ERROR] PID file "/opt/bitnami/repmgr/tmp/repmgr.pid" exists and seems to contain a valid PID
pam-pgsql-0_1 | [2020-10-17 08:43:58] [HINT] if repmgrd is no longer alive, remove the file and restart repmgrd
pam-pgsql-0_1 | postgresql-repmgr 08:44:01.82
pam-pgsql-0_1 | postgresql-repmgr 08:44:01.82 Welcome to the Bitnami postgresql-repmgr container
Describe the results you expected:
When node A starts again, PostgreSQL starts and reconnects as a standby node.
Additional information you deem important (e.g. issue happens only occasionally):
When I remove /opt/bitnami/repmgr/tmp/repmgr.pid, everything is back to normal.
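The manual workaround could be automated with a small stale-PID check before startup. This is a sketch, not the image's actual startup logic; the path matches the one in the error message:

```shell
#!/bin/sh
# Sketch: remove the repmgrd PID file only when the recorded process is gone.
PID_FILE="${PID_FILE:-/opt/bitnami/repmgr/tmp/repmgr.pid}"

remove_stale_pid() {
  if [ -f "$PID_FILE" ]; then
    pid="$(cat "$PID_FILE")"
    # kill -0 only tests whether the PID exists; it sends no signal.
    if ! kill -0 "$pid" 2>/dev/null; then
      rm -f "$PID_FILE"
    fi
  fi
}
remove_stale_pid
```

Note that `kill -0` only checks PID existence in the current namespace, so after a container restart a leftover PID may coincidentally match an unrelated process; treat this as a best-effort cleanup.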
bitnami/openldap:2.6.3
Hey,
I have set up OpenLDAP with this docker-compose:
version: "3.9"
volumes:
openldap_data:
services:
openldap:
image: bitnami/openldap:2
ports:
- 1389:1389
- 1636:1636
environment:
- LDAP_ROOT=dc=example,dc=com
- LDAP_ADMIN_USERNAME=admin
- LDAP_ADMIN_PASSWORD=adminpassword
volumes:
- openldap_data:/bitnami/openldap
Then I load this LDIF:
dn: ou=groups,dc=example,dc=com
objectclass: organizationalUnit
ou: groups

dn: ou=users,dc=example,dc=com
objectclass: organizationalUnit
ou: users

dn: cn=user01,ou=users,dc=example,dc=com
cn: user01
objectclass: inetOrgPerson
objectclass: top
sn: bar01
uid: user01

dn: cn=group01,ou=groups,dc=example,dc=com
cn: group01
member: cn=user01,ou=users,dc=example,dc=com
objectclass: groupOfNames
objectclass: top

dn: cn=group02,ou=groups,dc=example,dc=com
cn: group02
objectclass: groupOfUniqueNames
objectclass: top
uniquemember: cn=user01,ou=users,dc=example,dc=com
When I fetch user01, the memberOf attribute is not set:
dn: cn=user01,ou=users,dc=example,dc=com
cn: user01
createtimestamp: 20220719080619Z
creatorsname: cn=admin,dc=example,dc=com
entrycsn: 20220719080619.489607Z#000000#000#000000
entrydn: cn=user01,ou=users,dc=example,dc=com
entryuuid: 6a597960-9b85-103c-89c9-2986caa8c126
hassubordinates: FALSE
modifiersname: cn=admin,dc=example,dc=com
modifytimestamp: 20220719080619Z
objectclass: inetOrgPerson
objectclass: top
structuralobjectclass: inetOrgPerson
subschemasubentry: cn=Subschema
uid: user01
I want to see the memberOf attributes as well:
dn: cn=user01,ou=users,dc=example,dc=com
memberof: cn=group01,ou=groups,dc=example,dc=com
memberof: cn=group02,ou=groups,dc=example,dc=com
...
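For reference, in stock OpenLDAP the memberOf attribute is only maintained when the memberof overlay is loaded. A sketch of enabling it via cn=config follows; the module index {0} and database DN {2}mdb are assumptions and may differ in this image:

```ldif
# Load the memberof module (module numbering varies per deployment)
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: memberof.la

# Attach the overlay to the data database; {2}mdb is an assumption
dn: olcOverlay=memberof,olcDatabase={2}mdb,cn=config
changetype: add
objectClass: olcOverlayConfig
objectClass: olcMemberOf
olcOverlay: memberof
```

Note that the overlay only maintains memberOf for group memberships modified after it is enabled, so existing groups may need to be re-added.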
Thanks for your help
bitnami/redis-cluster:6.2
My work laptop is an M1 MacBook, and running the cluster with the amd64 architecture causes timeouts when running tests locally with the image. Providing both amd64 and arm64 would alleviate this problem.
Build and push at least arm64 and amd64 via the multiarch build support:
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
I'm considering forking and building them myself.
Description
Steps to reproduce the issue:
Describe the results you received:
When I check the UI from the external IP that I received above, I do not see the collections dropdown. The core is created using the environment parameter I had provided, but I'm unable to create a new collection.
Describe the results you expected:
I expect to see the Solr collections dropdown (shown in the screenshot originally attached), but it is not showing.
Additional information you deem important (e.g. issue happens only occasionally):
Version
docker version
Client: Docker Engine - Community
Cloud integration: 1.0.4
Version: 20.10.0
API version: 1.41
Go version: go1.13.15
Git commit: 7287ab3
Built: Tue Dec 8 18:55:43 2020
OS/Arch: darwin/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.0
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: eeddea2
Built: Tue Dec 8 18:58:04 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.19.0
GitCommit: de40ad0
docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.4.2-docker)
scan: Docker Scan (Docker Inc., v0.5.0)
Server:
Containers: 10
Running: 0
Paused: 0
Stopped: 10
Images: 55
Server Version: 20.10.0
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
init version: de40ad0
Security Options:
seccomp
Profile: default
Kernel Version: 4.19.121-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 1.941GiB
Name: docker-desktop
ID: AUWP:DEDG:TRTU:M3TW:JNIU:ODLK:3GTV:SDKS:X6DT:LZDF:XDU7:U7Q3
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
bitnami / bitnami-docker-rabbitmq
Authenticate using LDAP with TLS enabled as described here: erlang/otp#5961
Authentication works
TLS Handshake failure due to an error in Erlang 24.3.4
No response
bitnami/sonarqube:9
Currently one has to mount the conf file and edit it manually. Since other options in this file can already be configured via environment variables, this would make life easier.
Add environment variables and do replacements accordingly.
The sonar.properties currently includes the following proxy settings:
# HTTP proxy (default none)
#http.proxyHost=
#http.proxyPort=
# HTTPS proxy (defaults are values of http.proxyHost and http.proxyPort)
#https.proxyHost=
#https.proxyPort=
# NT domain name if NTLM proxy is used
#http.auth.ntlm.domain=
# SOCKS proxy (default none)
#socksProxyHost=
#socksProxyPort=
# Proxy authentication (used for HTTP, HTTPS and SOCKS proxies)
#http.proxyUser=
#http.proxyPassword=
# Proxy exceptions: list of hosts that can be accessed without going through the proxy
# separated by the '|' character, wildcard character '*' can be used for pattern matching
# used for HTTP and HTTPS (default none)
# (note: localhost and its literal notations (127.0.0.1, ...) are always excluded)
http.nonProxyHosts=
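The requested env-to-properties mapping could look like the sketch below. The SONARQUBE_HTTP_PROXY_* variable names, the helper, and the properties path are hypothetical, not existing Bitnami settings:

```shell
#!/bin/sh
# Sketch: map hypothetical SONARQUBE_HTTP_PROXY_* env vars onto
# sonar.properties entries. Names and paths are assumptions.
SONAR_PROPS="${SONAR_PROPS:-/opt/bitnami/sonarqube/conf/sonar.properties}"

set_prop() {
  key="$1"; value="$2"
  # Skip keys whose variable is unset, leaving defaults untouched.
  [ -n "$value" ] || return 0
  if grep -q "^#\{0,1\}${key}=" "$SONAR_PROPS"; then
    # Uncomment and replace the existing (possibly commented) entry.
    sed -i "s|^#\{0,1\}${key}=.*|${key}=${value}|" "$SONAR_PROPS"
  else
    echo "${key}=${value}" >> "$SONAR_PROPS"
  fi
}

set_prop "http.proxyHost" "$SONARQUBE_HTTP_PROXY_HOST"
set_prop "http.proxyPort" "$SONARQUBE_HTTP_PROXY_PORT"
```

The sketch treats the key as a literal in the regex for simplicity; a production version would escape regex metacharacters and the sed delimiter in values.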
bitnami/pgbouncer:1
Trying to log in from a remote host.
So, for context, when I try to log in to the database directly, such as through:
'postgres://contra_api:SomePassword@[DATABASE_IP]:5432/contra'
it works, but when I try to log in through pgbouncer, I get an error about invalid credentials, i.e.
'postgres://contra_api:SomePassword@contra-pgbouncer:6432/contra'
The error is: "psql: FATAL: password authentication failed"
As far as I understand, the issue here is that we forward client_addr, though I am not 100% confident.
Do not forward the client's IP address.
No response
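For reference while debugging, PgBouncer's md5 auth also requires the user to be listed in its auth_file. A minimal upstream-PgBouncer sketch follows; the paths, names, and the DATABASE_IP placeholder are illustrative, not this image's actual layout:

```ini
; pgbouncer.ini -- illustrative values only
[databases]
contra = host=DATABASE_IP port=5432 dbname=contra

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
; userlist.txt must contain a line like: "contra_api" "md5<hash>"
auth_file = /etc/pgbouncer/userlist.txt
```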
bitnami/apache:2.4.54-debian-11-r13
/opt/bitnami/apache/conf/vhosts/mods-enabled.conf:
LoadModule mpm_worker_module modules/mod_mpm_worker.so
The desired module should load at runtime. Instead, the container logs:
apache 12:12:06.73
apache 12:12:06.73 Welcome to the Bitnami apache container
apache 12:12:06.73 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-apache
apache 12:12:06.73 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-apache/issues
apache 12:12:06.73
apache 12:12:06.73 INFO ==> ** Starting Apache setup **
realpath: /bitnami/apache/conf: No such file or directory
apache 12:12:06.76 INFO ==> Configuring Apache ServerTokens directive
apache 12:12:06.76 INFO ==> ** Apache setup finished! **
apache 12:12:06.78 INFO ==> ** Starting Apache **
AH00534: httpd: Configuration error: More than one MPM loaded.
mpm_prefork is loaded automatically, and there is no way to disable it since a2dismod is not available. mpm_worker is preferred because a number of modules, e.g. mod_qos, do not work with mpm_prefork.
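Apache refuses to start unless exactly one MPM is loaded (that is what AH00534 means), so the fix is to ensure only one LoadModule line for an MPM survives. An illustrative configuration fragment, e.g. for a mounted override of the file that loads the MPM (not a supported image option):

```apacheconf
# Exactly one MPM module may be loaded, otherwise AH00534 is raised.
# Keep the worker line and comment out the prefork one.
#LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
LoadModule mpm_worker_module modules/mod_mpm_worker.so
```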
docker.io/bitnami/cassandra:4.0.4
Run using docker-compose on 2 separate instances.
IP: 10.117.50.41
version: '2'
services:
cassandra:
image: "bitnami/cassandra:4.0.4"
ports:
- 7000:7000
- 7001:7001
- 9042:9042
environment:
- CASSANDRA_SEEDS=10.117.50.41
- CASSANDRA_CLUSTER_NAME=cassandra-cluster
- CASSANDRA_PASSWORD=cassandra
- CASSANDRA_BROADCAST_ADDRESS=10.117.50.41
- BITNAMI_DEBUG=true
# By default, Cassandra autodetects the available host memory and takes as much as it can.
# Therefore, memory options are mandatory if multiple Cassandras are launched in the same node.
- MAX_HEAP_SIZE=512M
- HEAP_NEWSIZE=256M
IP: 10.117.50.47
version: '2'
services:
cassandra:
image: "bitnami/cassandra:4.0.4"
ports:
- 7000:7000
- 7001:7001
- 9042:9042
environment:
- CASSANDRA_SEEDS=10.117.50.41
- CASSANDRA_CLUSTER_NAME=cassandra-cluster
- CASSANDRA_PASSWORD=cassandra
- CASSANDRA_BROADCAST_ADDRESS=10.117.50.47
- BITNAMI_DEBUG=true
# By default, Cassandra autodetects the available host memory and takes as much as it can.
# Therefore, memory options are mandatory if multiple Cassandras are launched in the same node.
- MAX_HEAP_SIZE=512M
- HEAP_NEWSIZE=256M
The two nodes should connect via handshake and gossip.
If I switch the image to "cassandra:4.0.4" (the default Cassandra image), this example gossips correctly.
Bitnami crashes on this error:
cassandra 20:52:18.01 ERROR ==> Nodetool output
cassandra 20:52:18.00 ERROR ==> Cassandra failed to start up
But there is no further output below this error.
I am not creating any Docker networks; I'm letting Docker assign the IP at docker-compose up.
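One configuration difference worth checking when two Compose-managed nodes must gossip across hosts: on the default bridge network the container listens on its internal address (the attached cassandra.log shows listen_address=194c9047e439, the container hostname). A hedged sketch using host networking so the node listens on the host interfaces directly, assuming host networking is acceptable in your environment:

```yaml
# Sketch only: with network_mode: host the ports: mappings are
# unnecessary and Cassandra binds directly to the host interfaces.
version: '2'
services:
  cassandra:
    image: "bitnami/cassandra:4.0.4"
    network_mode: host
    environment:
      - CASSANDRA_SEEDS=10.117.50.41
      - CASSANDRA_CLUSTER_NAME=cassandra-cluster
      - CASSANDRA_PASSWORD=cassandra
      - CASSANDRA_BROADCAST_ADDRESS=10.117.50.41
      - MAX_HEAP_SIZE=512M
      - HEAP_NEWSIZE=256M
```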
I have no name!@651bbdc85957:/$ cat /opt/bitnami/cassandra/logs/cassandra.log
INFO [main] 2022-07-06 20:50:41,140 YamlConfigurationLoader.java:97 - Configuration location: file:/opt/bitnami/cassandra/conf/cassandra.yaml
INFO [main] 2022-07-06 20:50:41,635 Config.java:706 - Node configuration:[allocate_tokens_for_keyspace=null; allocate_tokens_for_local_replication_factor=3; allow_extra_insecure_udfs=false; allow_insecure_udfs=false; audit_logging_options=AuditLogOptions{enabled=false, logger='BinAuditLogger', included_keyspaces='', excluded_keyspaces='system,system_schema,system_virtual_schema', included_categories='', excluded_categories='', included_users='', excluded_users='', audit_logs_dir='/opt/bitnami/cassandra/logs/audit/', archive_command='', roll_cycle='HOURLY', block=true, max_queue_weight=268435456, max_log_size=17179869184}; authenticator=PasswordAuthenticator; authorizer=CassandraAuthorizer; auto_bootstrap=true; auto_optimise_full_repair_streams=false; auto_optimise_inc_repair_streams=false; auto_optimise_preview_repair_streams=false; auto_snapshot=true; autocompaction_on_startup_enabled=true; automatic_sstable_upgrade=false; back_pressure_enabled=false; back_pressure_strategy=null; batch_size_fail_threshold_in_kb=50; batch_size_warn_threshold_in_kb=5; batchlog_replay_throttle_in_kb=1024; block_for_peers_in_remote_dcs=false; block_for_peers_timeout_in_secs=10; broadcast_address=10.117.50.41; broadcast_rpc_address=194c9047e439; buffer_pool_use_heap_if_exhausted=false; cache_load_timeout_seconds=30; cas_contention_timeout_in_ms=1000; cdc_enabled=false; cdc_free_space_check_interval_ms=250; cdc_raw_directory=/bitnami/cassandra/data/cdc_raw; cdc_total_space_in_mb=0; check_for_duplicate_rows_during_compaction=true; check_for_duplicate_rows_during_reads=true; client_encryption_options=<REDACTED>; cluster_name=cassandra-cluster; column_index_cache_size_in_kb=2; column_index_size_in_kb=64; commit_failure_policy=stop; commitlog_compression=null; commitlog_directory=/bitnami/cassandra/data/commitlog; commitlog_max_compression_buffers_in_pool=3; commitlog_periodic_queue_size=-1; commitlog_segment_size_in_mb=32; commitlog_sync=periodic; commitlog_sync_batch_window_in_ms=NaN; 
commitlog_sync_group_window_in_ms=NaN; commitlog_sync_period_in_ms=10000; commitlog_total_space_in_mb=null; compaction_large_partition_warning_threshold_mb=100; compaction_throughput_mb_per_sec=64; concurrent_compactors=null; concurrent_counter_writes=32; concurrent_materialized_view_builders=1; concurrent_materialized_view_writes=32; concurrent_reads=32; concurrent_replicates=null; concurrent_validations=0; concurrent_writes=32; consecutive_message_errors_threshold=1; corrupted_tombstone_strategy=disabled; counter_cache_keys_to_save=2147483647; counter_cache_save_period=7200; counter_cache_size_in_mb=null; counter_write_request_timeout_in_ms=5000; credentials_cache_max_entries=1000; credentials_update_interval_in_ms=-1; credentials_validity_in_ms=2000; cross_node_timeout=true; data_file_directories=[Ljava.lang.String;@3e27ba32; diagnostic_events_enabled=false; disk_access_mode=auto; disk_failure_policy=stop; disk_optimization_estimate_percentile=0.95; disk_optimization_page_cross_chance=0.1; disk_optimization_strategy=ssd; dynamic_snitch=true; dynamic_snitch_badness_threshold=1.0; dynamic_snitch_reset_interval_in_ms=600000; dynamic_snitch_update_interval_in_ms=100; enable_drop_compact_storage=false; enable_materialized_views=false; enable_sasi_indexes=false; enable_scripted_user_defined_functions=false; enable_transient_replication=false; enable_user_defined_functions=false; enable_user_defined_functions_threads=true; endpoint_snitch=SimpleSnitch; file_cache_enabled=false; file_cache_round_up=null; file_cache_size_in_mb=null; flush_compression=fast; force_new_prepared_statement_behaviour=false; full_query_logging_options=FullQueryLoggerOptions{log_dir='', archive_command='', roll_cycle='HOURLY', block=true, max_queue_weight=268435456, max_log_size=17179869184}; gc_log_threshold_in_ms=200; gc_warn_threshold_in_ms=1000; hinted_handoff_disabled_datacenters=[]; hinted_handoff_enabled=true; hinted_handoff_throttle_in_kb=1024; hints_compression=null; 
hints_directory=/bitnami/cassandra/data/hints; hints_flush_period_in_ms=10000; ideal_consistency_level=null; incremental_backups=false; index_summary_capacity_in_mb=null; index_summary_resize_interval_in_minutes=60; initial_range_tombstone_list_allocation_size=1; initial_token=null; inter_dc_stream_throughput_outbound_megabits_per_sec=200; inter_dc_tcp_nodelay=false; internode_application_receive_queue_capacity_in_bytes=4194304; internode_application_receive_queue_reserve_endpoint_capacity_in_bytes=134217728; internode_application_receive_queue_reserve_global_capacity_in_bytes=536870912; internode_application_send_queue_capacity_in_bytes=4194304; internode_application_send_queue_reserve_endpoint_capacity_in_bytes=134217728; internode_application_send_queue_reserve_global_capacity_in_bytes=536870912; internode_authenticator=null; internode_compression=dc; internode_max_message_size_in_bytes=null; internode_socket_receive_buffer_size_in_bytes=0; internode_socket_send_buffer_size_in_bytes=0; internode_streaming_tcp_user_timeout_in_ms=300000; internode_tcp_connect_timeout_in_ms=2000; internode_tcp_user_timeout_in_ms=30000; key_cache_keys_to_save=2147483647; key_cache_migrate_during_compaction=true; key_cache_save_period=14400; key_cache_size_in_mb=null; keyspace_count_warn_threshold=40; listen_address=194c9047e439; listen_interface=null; listen_interface_prefer_ipv6=false; listen_on_broadcast_address=false; local_system_data_file_directory=null; max_concurrent_automatic_sstable_upgrades=1; max_hint_window_in_ms=10800000; max_hints_delivery_threads=2; max_hints_file_size_in_mb=128; max_mutation_size_in_kb=null; max_streaming_retries=3; max_value_size_in_mb=256; memtable_allocation_type=heap_buffers; memtable_cleanup_threshold=null; memtable_flush_writers=0; memtable_heap_space_in_mb=null; memtable_offheap_space_in_mb=null; min_free_space_per_drive_in_mb=50; native_transport_allow_older_protocols=true; native_transport_flush_in_batches_legacy=false; 
native_transport_idle_timeout_in_ms=0; native_transport_max_concurrent_connections=-1; native_transport_max_concurrent_connections_per_ip=-1; native_transport_max_concurrent_requests_in_bytes=-1; native_transport_max_concurrent_requests_in_bytes_per_ip=-1; native_transport_max_frame_size_in_mb=256; native_transport_max_negotiable_protocol_version=null; native_transport_max_threads=128; native_transport_port=9042; native_transport_port_ssl=null; native_transport_receive_queue_capacity_in_bytes=1048576; network_authorizer=AllowAllNetworkAuthorizer; networking_cache_size_in_mb=null; num_tokens=256; otc_backlog_expiration_interval_ms=200; otc_coalescing_enough_coalesced_messages=8; otc_coalescing_strategy=DISABLED; otc_coalescing_window_us=200; partitioner=org.apache.cassandra.dht.Murmur3Partitioner; periodic_commitlog_sync_lag_block_in_ms=null; permissions_cache_max_entries=1000; permissions_update_interval_in_ms=-1; permissions_validity_in_ms=2000; phi_convict_threshold=8.0; prepared_statements_cache_size_mb=null; range_request_timeout_in_ms=10000; range_tombstone_list_growth_factor=1.5; read_request_timeout_in_ms=5000; reject_repair_compaction_threshold=2147483647; repair_command_pool_full_strategy=queue; repair_command_pool_size=0; repair_session_max_tree_depth=null; repair_session_space_in_mb=null; repaired_data_tracking_for_partition_reads_enabled=false; repaired_data_tracking_for_range_reads_enabled=false; report_unconfirmed_repaired_data_mismatches=false; request_timeout_in_ms=10000; role_manager=CassandraRoleManager; roles_cache_max_entries=1000; roles_update_interval_in_ms=-1; roles_validity_in_ms=2000; row_cache_class_name=org.apache.cassandra.cache.OHCProvider; row_cache_keys_to_save=2147483647; row_cache_save_period=0; row_cache_size_in_mb=0; rpc_address=0.0.0.0; rpc_interface=null; rpc_interface_prefer_ipv6=false; rpc_keepalive=true; saved_caches_directory=/bitnami/cassandra/data/saved_caches; 
seed_provider=org.apache.cassandra.locator.SimpleSeedProvider{seeds=10.117.50.41}; server_encryption_options=<REDACTED>; slow_query_log_timeout_in_ms=500; snapshot_before_compaction=false; snapshot_links_per_second=0; snapshot_on_duplicate_row_detection=false; snapshot_on_repaired_data_mismatch=false; ssl_storage_port=7001; sstable_preemptive_open_interval_in_mb=50; start_native_transport=true; storage_port=7000; stream_entire_sstables=true; stream_throughput_outbound_megabits_per_sec=200; streaming_connections_per_host=1; streaming_keep_alive_period_in_secs=300; table_count_warn_threshold=150; tombstone_failure_threshold=100000; tombstone_warn_threshold=1000; tracetype_query_ttl=86400; tracetype_repair_ttl=604800; transparent_data_encryption_options=org.apache.cassandra.config.TransparentDataEncryptionOptions@7ef82753; trickle_fsync=false; trickle_fsync_interval_in_kb=10240; truncate_request_timeout_in_ms=60000; unlogged_batch_across_partitions_warn_threshold=10; use_offheap_merkle_trees=true; user_defined_function_fail_timeout=1500; user_defined_function_warn_timeout=500; user_function_timeout_policy=die; validation_preview_purge_head_start_in_sec=3600; windows_timer_interval=1; write_request_timeout_in_ms=2000]
INFO [main] 2022-07-06 20:50:41,640 DatabaseDescriptor.java:416 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2022-07-06 20:50:41,641 DatabaseDescriptor.java:477 - Global memtable on-heap threshold is enabled at 121MB
INFO [main] 2022-07-06 20:50:41,641 DatabaseDescriptor.java:481 - Global memtable off-heap threshold is enabled at 121MB
WARN [main] 2022-07-06 20:50:41,658 DatabaseDescriptor.java:628 - Only 41.837GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
INFO [main] 2022-07-06 20:50:41,780 JMXServerUtils.java:271 - Configured JMX server at: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7199/jmxrmi
INFO [main] 2022-07-06 20:50:41,786 CassandraDaemon.java:625 - Hostname: 194c9047e439:7000:7001
INFO [main] 2022-07-06 20:50:41,786 CassandraDaemon.java:632 - JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_333
INFO [main] 2022-07-06 20:50:41,787 CassandraDaemon.java:633 - Heap size: 486.438MiB/486.438MiB
INFO [main] 2022-07-06 20:50:41,787 CassandraDaemon.java:638 - Code Cache Non-heap memory: init = 2555904(2496K) used = 4682304(4572K) committed = 4718592(4608K) max = 251658240(245760K)
INFO [main] 2022-07-06 20:50:41,787 CassandraDaemon.java:638 - Metaspace Non-heap memory: init = 0(0K) used = 22396928(21872K) committed = 22827008(22292K) max = -1(-1K)
INFO [main] 2022-07-06 20:50:41,788 CassandraDaemon.java:638 - Compressed Class Space Non-heap memory: init = 0(0K) used = 2661776(2599K) committed = 2830336(2764K) max = 1073741824(1048576K)
INFO [main] 2022-07-06 20:50:41,788 CassandraDaemon.java:638 - Par Eden Space Heap memory: init = 214827008(209792K) used = 111713072(109094K) committed = 214827008(209792K) max = 214827008(209792K)
INFO [main] 2022-07-06 20:50:41,788 CassandraDaemon.java:638 - Par Survivor Space Heap memory: init = 26804224(26176K) used = 0(0K) committed = 26804224(26176K) max = 26804224(26176K)
INFO [main] 2022-07-06 20:50:41,788 CassandraDaemon.java:638 - CMS Old Gen Heap memory: init = 268435456(262144K) used = 0(0K) committed = 268435456(262144K) max = 268435456(262144K)
INFO [main] 2022-07-06 20:50:41,788 CassandraDaemon.java:640 - Classpath: /opt/bitnami/cassandra/bin/../conf:/opt/bitnami/cassandra/bin/../lib/HdrHistogram-2.1.9.jar:/opt/bitnami/cassandra/bin/../lib/ST4-4.0.8.jar:/opt/bitnami/cassandra/bin/../lib/airline-0.8.jar:/opt/bitnami/cassandra/bin/../lib/antlr-runtime-3.5.2.jar:/opt/bitnami/cassandra/bin/../lib/apache-cassandra-4.0.4.jar:/opt/bitnami/cassandra/bin/../lib/asm-7.1.jar:/opt/bitnami/cassandra/bin/../lib/caffeine-2.5.6.jar:/opt/bitnami/cassandra/bin/../lib/cassandra-driver-core-3.11.0-shaded.jar:/opt/bitnami/cassandra/bin/../lib/chronicle-bytes-2.20.111.jar:/opt/bitnami/cassandra/bin/../lib/chronicle-core-2.20.126.jar:/opt/bitnami/cassandra/bin/../lib/chronicle-queue-5.20.123.jar:/opt/bitnami/cassandra/bin/../lib/chronicle-threads-2.20.111.jar:/opt/bitnami/cassandra/bin/../lib/chronicle-wire-2.20.117.jar:/opt/bitnami/cassandra/bin/../lib/commons-cli-1.1.jar:/opt/bitnami/cassandra/bin/../lib/commons-codec-1.9.jar:/opt/bitnami/cassandra/bin/../lib/commons-lang3-3.11.jar:/opt/bitnami/cassandra/bin/../lib/commons-math3-3.2.jar:/opt/bitnami/cassandra/bin/../lib/concurrent-trees-2.4.0.jar:/opt/bitnami/cassandra/bin/../lib/ecj-4.6.1.jar:/opt/bitnami/cassandra/bin/../lib/guava-27.0-jre.jar:/opt/bitnami/cassandra/bin/../lib/high-scale-lib-1.0.6.jar:/opt/bitnami/cassandra/bin/../lib/hppc-0.8.1.jar:/opt/bitnami/cassandra/bin/../lib/j2objc-annotations-1.3.jar:/opt/bitnami/cassandra/bin/../lib/jackson-annotations-2.13.2.jar:/opt/bitnami/cassandra/bin/../lib/jackson-core-2.13.2.jar:/opt/bitnami/cassandra/bin/../lib/jackson-databind-2.13.2.2.jar:/opt/bitnami/cassandra/bin/../lib/jamm-0.3.2.jar:/opt/bitnami/cassandra/bin/../lib/java-cup-runtime-11b-20160615.jar:/opt/bitnami/cassandra/bin/../lib/javax.inject-1.jar:/opt/bitnami/cassandra/bin/../lib/jbcrypt-0.4.jar:/opt/bitnami/cassandra/bin/../lib/jcl-over-slf4j-1.7.25.jar:/opt/bitnami/cassandra/bin/../lib/jcommander-1.30.jar:/opt/bitnami/cassandra/bin/../lib/jctools-core-3.1.0.j
ar:/opt/bitnami/cassandra/bin/../lib/jflex-1.8.2.jar:/opt/bitnami/cassandra/bin/../lib/jna-5.6.0.jar:/opt/bitnami/cassandra/bin/../lib/json-simple-1.1.jar:/opt/bitnami/cassandra/bin/../lib/jvm-attach-api-1.5.jar:/opt/bitnami/cassandra/bin/../lib/log4j-over-slf4j-1.7.25.jar:/opt/bitnami/cassandra/bin/../lib/logback-classic-1.2.9.jar:/opt/bitnami/cassandra/bin/../lib/logback-core-1.2.9.jar:/opt/bitnami/cassandra/bin/../lib/lz4-java-1.8.0.jar:/opt/bitnami/cassandra/bin/../lib/metrics-core-3.1.5.jar:/opt/bitnami/cassandra/bin/../lib/metrics-jvm-3.1.5.jar:/opt/bitnami/cassandra/bin/../lib/metrics-logback-3.1.5.jar:/opt/bitnami/cassandra/bin/../lib/mxdump-0.14.jar:/opt/bitnami/cassandra/bin/../lib/netty-all-4.1.58.Final.jar:/opt/bitnami/cassandra/bin/../lib/netty-tcnative-boringssl-static-2.0.36.Final.jar:/opt/bitnami/cassandra/bin/../lib/ohc-core-0.5.1.jar:/opt/bitnami/cassandra/bin/../lib/ohc-core-j8-0.5.1.jar:/opt/bitnami/cassandra/bin/../lib/psjava-0.1.19.jar:/opt/bitnami/cassandra/bin/../lib/reporter-config-base-3.0.3.jar:/opt/bitnami/cassandra/bin/../lib/reporter-config3-3.0.3.jar:/opt/bitnami/cassandra/bin/../lib/sigar-1.6.4.jar:/opt/bitnami/cassandra/bin/../lib/sjk-cli-0.14.jar:/opt/bitnami/cassandra/bin/../lib/sjk-core-0.14.jar:/opt/bitnami/cassandra/bin/../lib/sjk-json-0.14.jar:/opt/bitnami/cassandra/bin/../lib/sjk-stacktrace-0.14.jar:/opt/bitnami/cassandra/bin/../lib/slf4j-api-1.7.25.jar:/opt/bitnami/cassandra/bin/../lib/snakeyaml-1.26.jar:/opt/bitnami/cassandra/bin/../lib/snappy-java-1.1.2.6.jar:/opt/bitnami/cassandra/bin/../lib/snowball-stemmer-1.3.0.581.1.jar:/opt/bitnami/cassandra/bin/../lib/stream-2.5.2.jar:/opt/bitnami/cassandra/bin/../lib/zstd-jni-1.5.0-4.jar:/opt/bitnami/cassandra/bin/../lib/jsr223/*/*.jar::/opt/bitnami/cassandra/bin/../lib/jamm-0.3.2.jar
INFO [main] 2022-07-06 20:50:41,790 CassandraDaemon.java:642 - JVM Arguments: [-ea, -da:net.openhft..., -XX:+UseThreadPriorities, -XX:+HeapDumpOnOutOfMemoryError, -Xss256k, -XX:+AlwaysPreTouch, -XX:-UseBiasedLocking, -XX:+UseTLAB, -XX:+ResizeTLAB, -XX:+UseNUMA, -XX:+PerfDisableSharedMem, -Djava.net.preferIPv4Stack=true, -XX:ThreadPriorityPolicy=42, -XX:+UseParNewGC, -XX:+UseConcMarkSweepGC, -XX:+CMSParallelRemarkEnabled, -XX:SurvivorRatio=8, -XX:MaxTenuringThreshold=1, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:CMSWaitDuration=10000, -XX:+CMSParallelInitialMarkEnabled, -XX:+CMSEdenChunksRecordAlways, -XX:+CMSClassUnloadingEnabled, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintHeapAtGC, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -XX:+PrintPromotionFailure, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=10, -XX:GCLogFileSize=10M, -Xloggc:/opt/bitnami/cassandra/logs/gc.log, -Xms512M, -Xmx512M, -Xmn256M, -XX:+UseCondCardMark, -XX:CompileCommandFile=/opt/bitnami/cassandra/bin/../conf/hotspot_compiler, -javaagent:/opt/bitnami/cassandra/bin/../lib/jamm-0.3.2.jar, -Dcassandra.jmx.local.port=7199, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password, -Djava.library.path=/opt/bitnami/cassandra/bin/../lib/sigar-bin, -Dcassandra.libjemalloc=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2, -XX:OnOutOfMemoryError=kill -9 %p, -Dlogback.configurationFile=logback.xml, -Dcassandra.logdir=/opt/bitnami/cassandra/logs, -Dcassandra.storagedir=/opt/bitnami/cassandra/bin/../data, -Dcassandra-pidfile=/opt/bitnami/cassandra/tmp/cassandra.pid, -Dcassandra-foreground=yes]
WARN [main] 2022-07-06 20:50:41,898 NativeLibrary.java:201 - Unable to lock JVM memory (ENOMEM). This can result in part of the JVM being swapped out, especially with mmapped I/O enabled. Increase RLIMIT_MEMLOCK.
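The ENOMEM warning above means the container could not `mlock` the JVM heap. When running under Docker Compose, one way to address this is to raise the memlock ulimit for the service. A minimal sketch, assuming a Compose service named `cassandra` (the service name and image tag are illustrative, not taken from this log):

```yaml
# docker-compose.yml fragment (sketch) — raises RLIMIT_MEMLOCK for the
# Cassandra container so the JVM can lock its memory and avoid swapping.
services:
  cassandra:
    image: bitnami/cassandra:4.0
    ulimits:
      memlock:
        soft: -1   # -1 = unlimited
        hard: -1
```

This only removes the warning; the node runs without it, just with a risk of parts of the JVM being swapped out under memory pressure.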
INFO [main] 2022-07-06 20:50:41,959 MonotonicClock.java:202 - Scheduling approximate time conversion task with an interval of 10000 milliseconds
INFO [main] 2022-07-06 20:50:41,963 MonotonicClock.java:338 - Scheduling approximate time-check task with a precision of 2 milliseconds
INFO [main] 2022-07-06 20:50:41,965 StartupChecks.java:147 - jemalloc seems to be preloaded from /usr/lib/x86_64-linux-gnu/libjemalloc.so.2
WARN [main] 2022-07-06 20:50:41,966 StartupChecks.java:187 - JMX is not enabled to receive remote connections. Please see cassandra-env.sh for more info.
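If remote JMX access is actually wanted (e.g. for `nodetool` from another host), the toggle the warning refers to lives in `cassandra-env.sh`. A hedged sketch — the exact file location depends on the image, and the password-file path below is just the one already visible in the JVM arguments logged earlier:

```sh
# cassandra-env.sh fragment (sketch): switch from local-only to remote JMX.
LOCAL_JMX=no
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=true"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
```

Leaving JMX local-only (the default this warning reports) is the safer choice unless remote monitoring is required.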
INFO [main] 2022-07-06 20:50:41,968 SigarLibrary.java:44 - Initializing SIGAR library
INFO [main] 2022-07-06 20:50:41,979 SigarLibrary.java:180 - Checked OS settings and found them configured for optimal performance.
WARN [main] 2022-07-06 20:50:41,980 StartupChecks.java:329 - Maximum number of memory map areas per process (vm.max_map_count) 65530 is too low, recommended value: 1048575, you can change it with sysctl.
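The `vm.max_map_count` warning must be fixed on the Docker *host*, not inside the container, since this sysctl is not namespaced. A sketch of the host-side change, using the recommended value from the warning itself:

```sh
# On the Docker host (requires root). Apply immediately:
sysctl -w vm.max_map_count=1048575

# Persist across reboots, e.g. via a sysctl drop-in (path is illustrative):
#   /etc/sysctl.d/99-cassandra.conf
#   vm.max_map_count=1048575
# then reload with: sysctl --system
```

Without this, Cassandra still starts (as the log shows), but memory-mapped I/O can fail on nodes with many SSTables.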
INFO [main] 2022-07-06 20:50:42,587 Keyspace.java:386 - Creating replication strategy system params KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.LocalStrategy}}
INFO [main] 2022-07-06 20:50:42,636 ColumnFamilyStore.java:385 - Initializing system.IndexInfo
INFO [main] 2022-07-06 20:50:44,038 ColumnFamilyStore.java:385 - Initializing system.batches
INFO [main] 2022-07-06 20:50:44,049 ColumnFamilyStore.java:385 - Initializing system.paxos
INFO [main] 2022-07-06 20:50:44,063 ColumnFamilyStore.java:385 - Initializing system.local
INFO [main] 2022-07-06 20:50:44,085 ColumnFamilyStore.java:385 - Initializing system.peers_v2
INFO [main] 2022-07-06 20:50:44,099 ColumnFamilyStore.java:385 - Initializing system.peers
INFO [main] 2022-07-06 20:50:44,118 ColumnFamilyStore.java:385 - Initializing system.peer_events_v2
INFO [main] 2022-07-06 20:50:44,134 ColumnFamilyStore.java:385 - Initializing system.peer_events
INFO [main] 2022-07-06 20:50:44,141 ColumnFamilyStore.java:385 - Initializing system.compaction_history
INFO [main] 2022-07-06 20:50:44,155 ColumnFamilyStore.java:385 - Initializing system.sstable_activity
INFO [main] 2022-07-06 20:50:44,164 ColumnFamilyStore.java:385 - Initializing system.size_estimates
INFO [main] 2022-07-06 20:50:44,173 ColumnFamilyStore.java:385 - Initializing system.table_estimates
INFO [main] 2022-07-06 20:50:44,181 ColumnFamilyStore.java:385 - Initializing system.available_ranges_v2
INFO [main] 2022-07-06 20:50:44,189 ColumnFamilyStore.java:385 - Initializing system.available_ranges
INFO [main] 2022-07-06 20:50:44,197 ColumnFamilyStore.java:385 - Initializing system.transferred_ranges_v2
INFO [main] 2022-07-06 20:50:44,203 ColumnFamilyStore.java:385 - Initializing system.transferred_ranges
INFO [main] 2022-07-06 20:50:44,210 ColumnFamilyStore.java:385 - Initializing system.view_builds_in_progress
INFO [main] 2022-07-06 20:50:44,218 ColumnFamilyStore.java:385 - Initializing system.built_views
INFO [main] 2022-07-06 20:50:44,224 ColumnFamilyStore.java:385 - Initializing system.prepared_statements
INFO [main] 2022-07-06 20:50:44,233 ColumnFamilyStore.java:385 - Initializing system.repairs
INFO [main] 2022-07-06 20:50:44,461 QueryProcessor.java:115 - Initialized prepared statement caches with 10 MB
INFO [main] 2022-07-06 20:50:44,619 Keyspace.java:386 - Creating replication strategy system_schema params KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.LocalStrategy}}
INFO [main] 2022-07-06 20:50:44,622 ColumnFamilyStore.java:385 - Initializing system_schema.keyspaces
INFO [main] 2022-07-06 20:50:44,628 ColumnFamilyStore.java:385 - Initializing system_schema.tables
INFO [main] 2022-07-06 20:50:44,634 ColumnFamilyStore.java:385 - Initializing system_schema.columns
INFO [main] 2022-07-06 20:50:44,639 ColumnFamilyStore.java:385 - Initializing system_schema.triggers
INFO [main] 2022-07-06 20:50:44,646 ColumnFamilyStore.java:385 - Initializing system_schema.dropped_columns
INFO [main] 2022-07-06 20:50:44,652 ColumnFamilyStore.java:385 - Initializing system_schema.views
INFO [main] 2022-07-06 20:50:44,658 ColumnFamilyStore.java:385 - Initializing system_schema.types
INFO [main] 2022-07-06 20:50:44,663 ColumnFamilyStore.java:385 - Initializing system_schema.functions
INFO [main] 2022-07-06 20:50:44,669 ColumnFamilyStore.java:385 - Initializing system_schema.aggregates
INFO [main] 2022-07-06 20:50:44,675 ColumnFamilyStore.java:385 - Initializing system_schema.indexes
INFO [main] 2022-07-06 20:50:44,765 SystemKeyspaceMigrator40.java:76 - system.peers_v2 table was empty, migrating legacy system.peers, if this fails you should fix the issue and then truncate system.peers_v2 to have it try again.
INFO [main] 2022-07-06 20:50:44,775 SystemKeyspaceMigrator40.java:100 - Migrating rows from legacy system.peers to system.peers_v2
INFO [main] 2022-07-06 20:50:44,781 SystemKeyspaceMigrator40.java:119 - Migrated 0 rows from legacy system.peers to system.peers_v2
INFO [main] 2022-07-06 20:50:44,781 SystemKeyspaceMigrator40.java:129 - system.peer_events_v2 table was empty, migrating legacy system.peer_events to system.peer_events_v2
INFO [main] 2022-07-06 20:50:44,782 SystemKeyspaceMigrator40.java:152 - Migrated 0 rows from legacy system.peer_events to system.peer_events_v2
INFO [main] 2022-07-06 20:50:44,783 SystemKeyspaceMigrator40.java:162 - system.transferred_ranges_v2 table was empty, migrating legacy system.transferred_ranges to system.transferred_ranges_v2
INFO [main] 2022-07-06 20:50:44,785 SystemKeyspaceMigrator40.java:190 - Migrated 0 rows from legacy system.transferred_ranges to system.transferred_ranges_v2
INFO [main] 2022-07-06 20:50:44,786 SystemKeyspaceMigrator40.java:200 - system.available_ranges_v2 table was empty, migrating legacy system.available_ranges to system.available_ranges_v2
INFO [main] 2022-07-06 20:50:44,786 SystemKeyspaceMigrator40.java:226 - Migrated 0 rows from legacy system.available_ranges to system.available_ranges_v2
INFO [main] 2022-07-06 20:50:44,787 StorageService.java:838 - Populating token metadata from system tables
INFO [main] 2022-07-06 20:50:44,821 StorageService.java:832 - Token metadata:
INFO [main] 2022-07-06 20:50:44,871 CacheService.java:103 - Initializing key cache with capacity of 24 MBs.
INFO [main] 2022-07-06 20:50:44,886 CacheService.java:125 - Initializing row cache with capacity of 0 MBs
INFO [main] 2022-07-06 20:50:44,888 CacheService.java:154 - Initializing counter cache with capacity of 12 MBs
INFO [main] 2022-07-06 20:50:44,889 CacheService.java:165 - Scheduling counter cache save to every 7200 seconds (going to save all keys).
INFO [main] 2022-07-06 20:50:44,923 CommitLog.java:168 - No commitlog files found; skipping replay
INFO [main] 2022-07-06 20:50:44,923 StorageService.java:838 - Populating token metadata from system tables
INFO [main] 2022-07-06 20:50:44,954 StorageService.java:832 - Token metadata:
INFO [main] 2022-07-06 20:50:45,115 ColumnFamilyStore.java:2252 - Truncating system.size_estimates
INFO [main] 2022-07-06 20:50:45,156 ColumnFamilyStore.java:2289 - Truncating system.size_estimates with truncatedAt=1657140645129
INFO [main] 2022-07-06 20:50:45,203 ColumnFamilyStore.java:878 - Enqueuing flush of local: 2.213KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:50:45,295 Memtable.java:469 - Writing Memtable-local@1822595959(0.434KiB serialized bytes, 3 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:50:45,298 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-1-big-Data.db (0.229KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=31212)
INFO [MemtableFlushWriter:2] 2022-07-06 20:50:45,372 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb_txn_flush_4e82cba0-fd6d-11ec-9792-3f8ce372cd42.log
INFO [main] 2022-07-06 20:50:45,387 ColumnFamilyStore.java:2316 - Truncate of system.size_estimates is complete
INFO [main] 2022-07-06 20:50:45,387 ColumnFamilyStore.java:2252 - Truncating system.table_estimates
INFO [main] 2022-07-06 20:50:45,388 ColumnFamilyStore.java:2289 - Truncating system.table_estimates with truncatedAt=1657140645388
INFO [main] 2022-07-06 20:50:45,390 ColumnFamilyStore.java:878 - Enqueuing flush of local: 0.575KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:50:45,401 Memtable.java:469 - Writing Memtable-local@991622006(0.091KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:50:45,402 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-2-big-Data.db (0.063KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=31323)
INFO [MemtableFlushWriter:2] 2022-07-06 20:50:45,425 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb_txn_flush_4e9edf20-fd6d-11ec-9792-3f8ce372cd42.log
INFO [main] 2022-07-06 20:50:45,427 ColumnFamilyStore.java:2316 - Truncate of system.table_estimates is complete
INFO [main] 2022-07-06 20:50:45,433 QueryProcessor.java:167 - Preloaded 0 prepared statements
INFO [main] 2022-07-06 20:50:45,434 StorageService.java:736 - Cassandra version: 4.0.4
INFO [main] 2022-07-06 20:50:45,434 StorageService.java:737 - CQL version: 3.4.5
INFO [main] 2022-07-06 20:50:45,434 StorageService.java:738 - Native protocol supported versions: 3/v3, 4/v4, 5/v5, 6/v6-beta (default: 5/v5)
INFO [main] 2022-07-06 20:50:45,482 IndexSummaryManager.java:84 - Initializing index summary manager with a memory pool size of 24 MB and a resize interval of 60 minutes
INFO [main] 2022-07-06 20:50:45,484 StorageService.java:755 - Loading persisted ring state
INFO [main] 2022-07-06 20:50:45,484 StorageService.java:838 - Populating token metadata from system tables
INFO [main] 2022-07-06 20:50:45,515 BufferPools.java:49 - Global buffer pool limit is 121.000MiB for chunk-cache and 30.000MiB for networking
INFO [main] 2022-07-06 20:50:45,720 InboundConnectionInitiator.java:127 - Listening on address: (194c9047e439/192.168.0.2:7000), nic: eth0, encryption: unencrypted
WARN [main] 2022-07-06 20:50:45,798 SystemKeyspace.java:1130 - No host ID found, created 45797ec1-493f-4836-9d6d-05f196a8b074 (Note: This should happen exactly once per node).
INFO [main] 2022-07-06 20:50:45,828 StorageService.java:653 - Unable to gossip with any peers but continuing anyway since node is in its own seed list
INFO [main] 2022-07-06 20:50:45,868 StorageService.java:960 - Starting up server gossip
INFO [main] 2022-07-06 20:50:45,872 ColumnFamilyStore.java:878 - Enqueuing flush of local: 0.583KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:50:45,883 Memtable.java:469 - Writing Memtable-local@1212460448(0.079KiB serialized bytes, 2 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:50:45,884 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-3-big-Data.db (0.048KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=31487)
INFO [MemtableFlushWriter:1] 2022-07-06 20:50:45,901 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb_txn_flush_4ee7f610-fd6d-11ec-9792-3f8ce372cd42.log
INFO [main] 2022-07-06 20:50:46,010 StorageService.java:1050 - This node will not auto bootstrap because it is configured to be a seed node.
INFO [main] 2022-07-06 20:51:16,022 Gossiper.java:2214 - Waiting for gossip to settle...
INFO [main] 2022-07-06 20:51:24,023 Gossiper.java:2245 - No gossip backlog; proceeding
INFO [main] 2022-07-06 20:51:24,024 Gossiper.java:2214 - Waiting for gossip to settle...
INFO [main] 2022-07-06 20:51:32,025 Gossiper.java:2245 - No gossip backlog; proceeding
INFO [main] 2022-07-06 20:51:32,031 NetworkTopologyStrategy.java:88 - Configured datacenter replicas are datacenter1:rf(3)
INFO [main] 2022-07-06 20:51:32,033 TokenAllocatorFactory.java:44 - Using ReplicationAwareTokenAllocator.
INFO [main] 2022-07-06 20:51:32,130 TokenAllocation.java:106 - Selected tokens [-7647052918464736098, 3561354590263346818, -1248995368978552615, 7908646773860740679, -3846164117191709987, 1625680832079972802, 6073123436505962869, -5436168669705608625, -8726425725912337756, 529638931845034513, -2227140821494666690, 5001622820165930056, -6403532912784144124, 2716977942772185061, 7090974841248389884, 8984402655086320218, -137130302923822609, -2938502086619755859, -4482686852690421064, 4401889539786632810, -6887249975736480972, -551755222656404633, 1175568064220984641, 2290012869853841930, -8070412281946332441, 8569250909408979841, 5661349157825897225, 6675985486803066619, -1620459279827994168, -5859069465072800765, -4903156997842731225, -3277804778862317279, 3229483484977027839, 4071066524652027827, 7551755815630638866, -7200662428165701792, -9055289651956595395, -2541456665691111158, -845350552519933962, 248391753694119328, 2035731907305357913, 8321955355651926315, 5409147222232299886, -8333650804902988292, 895370791009399207, -4120439099805810361, -1852895394012533975, 6406814765530453069, 4764553280209784955, -3499104082560258413, -6073458502071939030, -5133082745033874949, 3030384481813794794, 3879254990345229399, -6619341282120896206, 7359311411625705557, 1453206942463795899, -7373978183703199819, 2557846694287825076, -7808344692655605969, -5619894423172608747, -4640387723777981149, 8821293076292049883, 6926468067410785846, -297542025163810485, 8150801549272546776, 5903243076969554301, 1878579203089753461, 5242235015175190033, -9207045825386550590, -996522944076056355, -2708505839781843195, -8507912725946795284, 78051985104655656, -2019446035720628531, -1401644128619677156, 758471015639977625, 4629268894318298324, -4261590906538637289, 7762461098808490784, -3647662851758459856, -3082375498185706916, 6278813255054359155, 3414233721519616554, 4254193460305402162, -6212312430284921574, -8871358773339837998, 3757359187230992164, -2367620462150266215, 
-7020174969005223218, 2902720536466211793, -5247740917254318023, -676950978068285668, 422302653550406321, 1063076569498879792, 1347963579292325967, -3967908418517000574, -7482855620845593241, 6565638057793318559, 7249505141817841268, -6728691412644586442, 2445427051615623868, -8178706195634841421, -4744028094986657358, -7919200858117347271, 9125749745320221959, 2189664530524218445, -397018733195485354, 1776998278954002311, -1104980670977855562, 5555919831711164786, 8725806027194991825, 6817593455742664423, 8458729560566219631, 8047317575713468991, 5811262352123075140, 5139999861827857856, -5719163010103531454, 4907826915110647166, -1717286670880990110, -2805273232046070576, -4994398823649768194, 668841537146473662, 4542834963437130380, -3367147017129843858, -4351998572110788918, -1494381461254446571, -8596781174241735641, -6484294991420027028, -8277163862558248, -5954510300969190825, 7679119303816381830, -2102511882922324836, 6199442535360749967, 3151510398212022441, -3727034841129812845, 3674332292775898336, -3158776078398706964, 7468437566064816111, 3987048086305068914, -6287599169847890138, -5318822528395814621, 2823987884636593975, 3338948251882297297, -8951146195095385252, -5516764094940806933, 4181690210993849806, -7095757673959583239, -8403840268442980488, 356796000008573354, -2438398338372175843, -7275387341708566719, 1552865137937687697, 1274105309638436499, 8254854836508415220, 173377934646558571, 6000521235472181524, -749878683165400164, 991821488140147327, -2606660682406397341, 5335085022288193998, -1926303997182559248, 7028413438575580657, -7549702916695981983, 8912505515086235689, -7712757083114006997, -207985653015943076, 2647921995922370139, -4804648690098932957, 6498784389359291632, -6787876671905933328, 7185455467395563048, -4546066049957912515, 1968670063449878651, 8661818102128967140, 2386935669184058467, -8245469087066444892, -455660298352129017, 2121047326304445412, -1314122640553858979, -4028471348491151908, -9119880544820196970, 
1710053296815646327, -7984258656504243483, -911368831603783819, 5751006274668103057, -3557433084999792106, 4339203809420206075, 3502460434095531630, 5490675130189327231, 7848504331900473511, -8790234431717454991, -1161710452674607462, -2998159621200026784, 4852557314901324663, 6760684800593489820, 9069663894231908173, -4178799515781024829, 4483444573672849648, -2287126890559986774, -5773912261462374074, 608740631065784552, -6133998643716253897, -5046847096768831701, 7992454335321821303, 5087629563479052318, 834510887294549929, 8399564217622692181, -1772074371686282992, 4713843316247558003, -6541329387101109765, -2863474141009361183, -6945725805421827589, -3418047684042273953, -4403877830197135913, -8650589757810773864, -61263846508553744, 6357534940642175229, 2978457394223371562, 7627743002405883204, 6148610524036376398, -1547225070165933826, -607729061612776834, -2151802685928826954, 3831266740620341629, -3899977546001836009, 3101914004016147392, -3774133923650676716, -3204858365765318082, -6001267928758508391, -5368295626900534717, -6338891955869318687, -5181355810171745244, 9193190580349011720, 3631910244300663192, 1131107242827278850, 2508596531918700643, -7853719437028290794, 4139695928663905226, 8521210062588831631, 6629527976020273988, 7315929493734870754, 3290260410321240267, -6666111533960247484, 7423440014931961394, -7419350969666251771, 6879292340475019865, -1042639219655234391, 312997166711966244, -8117474548154321035, 3941846865366221478, 483525661819137277, 2780452867104287389, 5617006344861806782, 1408588101267723107, -7136083531527137696]
INFO [MigrationStage:1] 2022-07-06 20:51:32,172 ColumnFamilyStore.java:878 - Enqueuing flush of columns: 200.121KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,181 Memtable.java:469 - Writing Memtable-columns@903678565(41.994KiB serialized bytes, 277 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,201 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/nb-1-big-Data.db (20.847KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42113)
INFO [MemtableFlushWriter:2] 2022-07-06 20:51:32,218 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/columns-24101c25a2ae3af787c1b40ee1aca33f/nb_txn_flush_6a80eee0-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,221 ColumnFamilyStore.java:878 - Enqueuing flush of dropped_columns: 1.079KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,231 Memtable.java:469 - Writing Memtable-dropped_columns@945723512(0.120KiB serialized bytes, 3 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,232 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/dropped_columns-5e7583b5f3f43af19a39b7e1d6f5f11f/nb-1-big-Data.db (0.097KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42121)
INFO [MemtableFlushWriter:1] 2022-07-06 20:51:32,258 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/dropped_columns-5e7583b5f3f43af19a39b7e1d6f5f11f/nb_txn_flush_6a8868f0-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,260 ColumnFamilyStore.java:878 - Enqueuing flush of triggers: 0.511KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,265 Memtable.java:469 - Writing Memtable-triggers@836933638(0.016KiB serialized bytes, 2 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,266 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/triggers-4df70b666b05325195a132b54005fd48/nb-1-big-Data.db (0.048KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42121)
INFO [MemtableFlushWriter:2] 2022-07-06 20:51:32,289 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/triggers-4df70b666b05325195a132b54005fd48/nb_txn_flush_6a8e3550-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,294 ColumnFamilyStore.java:878 - Enqueuing flush of types: 0.511KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,302 Memtable.java:469 - Writing Memtable-types@1194930721(0.016KiB serialized bytes, 2 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,303 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/types-5a8b1ca866023f77a0459273d308917a/nb-1-big-Data.db (0.048KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42121)
INFO [MemtableFlushWriter:1] 2022-07-06 20:51:32,322 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/types-5a8b1ca866023f77a0459273d308917a/nb_txn_flush_6a936570-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,325 ColumnFamilyStore.java:878 - Enqueuing flush of functions: 0.511KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,331 Memtable.java:469 - Writing Memtable-functions@1960837214(0.016KiB serialized bytes, 2 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,332 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/functions-96489b7980be3e14a70166a0b9159450/nb-1-big-Data.db (0.048KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42121)
INFO [MemtableFlushWriter:2] 2022-07-06 20:51:32,352 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/functions-96489b7980be3e14a70166a0b9159450/nb_txn_flush_6a982060-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,355 ColumnFamilyStore.java:878 - Enqueuing flush of aggregates: 0.511KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,363 Memtable.java:469 - Writing Memtable-aggregates@1393683104(0.016KiB serialized bytes, 2 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,363 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/aggregates-924c55872e3a345bb10c12f37c1ba895/nb-1-big-Data.db (0.048KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42121)
INFO [MemtableFlushWriter:1] 2022-07-06 20:51:32,383 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/aggregates-924c55872e3a345bb10c12f37c1ba895/nb_txn_flush_6a9cdb50-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,386 ColumnFamilyStore.java:878 - Enqueuing flush of indexes: 0.511KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,392 Memtable.java:469 - Writing Memtable-indexes@411950327(0.016KiB serialized bytes, 2 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,392 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/indexes-0feb57ac311f382fba6d9024d305702f/nb-1-big-Data.db (0.048KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42121)
INFO [MemtableFlushWriter:2] 2022-07-06 20:51:32,411 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/indexes-0feb57ac311f382fba6d9024d305702f/nb_txn_flush_6aa16f30-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,413 ColumnFamilyStore.java:878 - Enqueuing flush of tables: 92.656KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,419 Memtable.java:469 - Writing Memtable-tables@1054172367(28.858KiB serialized bytes, 42 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,422 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/nb-1-big-Data.db (17.879KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42121)
INFO [MemtableFlushWriter:1] 2022-07-06 20:51:32,445 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/tables-afddfb9dbc1e30688056eed6c302ba09/nb_txn_flush_6aa5b4f0-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,447 ColumnFamilyStore.java:878 - Enqueuing flush of views: 0.511KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,456 Memtable.java:469 - Writing Memtable-views@1163817889(0.016KiB serialized bytes, 2 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,456 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/views-9786ac1cdd583201a7cdad556410c985/nb-1-big-Data.db (0.048KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42121)
INFO [MemtableFlushWriter:2] 2022-07-06 20:51:32,476 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/views-9786ac1cdd583201a7cdad556410c985/nb_txn_flush_6aaa96f0-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,478 ColumnFamilyStore.java:878 - Enqueuing flush of keyspaces: 3.243KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,485 Memtable.java:469 - Writing Memtable-keyspaces@1921300021(0.673KiB serialized bytes, 7 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,486 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system_schema/keyspaces-abac5682dea631c5b535b3d6cffd0fb6/nb-1-big-Data.db (0.556KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42121)
INFO [MemtableFlushWriter:1] 2022-07-06 20:51:32,500 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system_schema/keyspaces-abac5682dea631c5b535b3d6cffd0fb6/nb_txn_flush_6aaf78f0-fd6d-11ec-9792-3f8ce372cd42.log
INFO [MigrationStage:1] 2022-07-06 20:51:32,578 Keyspace.java:386 - Creating replication strategy system_auth params KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=1}}
INFO [MigrationStage:1] 2022-07-06 20:51:32,581 ColumnFamilyStore.java:385 - Initializing system_auth.network_permissions
INFO [MigrationStage:1] 2022-07-06 20:51:32,587 ColumnFamilyStore.java:385 - Initializing system_auth.resource_role_permissons_index
INFO [MigrationStage:1] 2022-07-06 20:51:32,592 ColumnFamilyStore.java:385 - Initializing system_auth.role_members
INFO [MigrationStage:1] 2022-07-06 20:51:32,597 ColumnFamilyStore.java:385 - Initializing system_auth.role_permissions
INFO [MigrationStage:1] 2022-07-06 20:51:32,602 ColumnFamilyStore.java:385 - Initializing system_auth.roles
INFO [MigrationStage:1] 2022-07-06 20:51:32,629 Keyspace.java:386 - Creating replication strategy system_distributed params KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=3}}
INFO [MigrationStage:1] 2022-07-06 20:51:32,632 ColumnFamilyStore.java:385 - Initializing system_distributed.parent_repair_history
INFO [MigrationStage:1] 2022-07-06 20:51:32,639 ColumnFamilyStore.java:385 - Initializing system_distributed.repair_history
INFO [MigrationStage:1] 2022-07-06 20:51:32,645 ColumnFamilyStore.java:385 - Initializing system_distributed.view_build_status
INFO [MigrationStage:1] 2022-07-06 20:51:32,650 Keyspace.java:386 - Creating replication strategy system_traces params KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=2}}
INFO [MigrationStage:1] 2022-07-06 20:51:32,654 ColumnFamilyStore.java:385 - Initializing system_traces.events
INFO [MigrationStage:1] 2022-07-06 20:51:32,661 ColumnFamilyStore.java:385 - Initializing system_traces.sessions
INFO [main] 2022-07-06 20:51:32,716 StorageService.java:1634 - JOINING: Finish joining ring
INFO [main] 2022-07-06 20:51:32,720 ColumnFamilyStore.java:878 - Enqueuing flush of local: 0.604KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,729 Memtable.java:469 - Writing Memtable-local@1624895908(0.084KiB serialized bytes, 3 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:2] 2022-07-06 20:51:32,730 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-4-big-Data.db (0.058KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=42277)
INFO [MemtableFlushWriter:2] 2022-07-06 20:51:32,745 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb_txn_flush_6ad46610-fd6d-11ec-9792-3f8ce372cd42.log
INFO [main] 2022-07-06 20:51:32,764 ColumnFamilyStore.java:878 - Enqueuing flush of local: 47.635KiB (0%) on-heap, 0.000KiB (0%) off-heap
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,770 Memtable.java:469 - Writing Memtable-local@2091002494(8.883KiB serialized bytes, 1 ops, 0%/0% of on/off-heap limit), flushed range = (null, null]
INFO [PerDiskMemtableFlushWriter_0:1] 2022-07-06 20:51:32,771 Memtable.java:498 - Completed flushing /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-5-big-Data.db (5.371KiB) for commitlog position CommitLogPosition(segmentId=1657140641952, position=47812)
INFO [MemtableFlushWriter:1] 2022-07-06 20:51:32,785 LogTransaction.java:240 - Unfinished transaction log, deleting /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb_txn_flush_6adb1cd0-fd6d-11ec-9792-3f8ce372cd42.log
INFO [main] 2022-07-06 20:51:32,807 StorageService.java:2785 - Node /10.117.50.41:7000 state jump to NORMAL
INFO [main] 2022-07-06 20:51:32,812 Gossiper.java:2214 - Waiting for gossip to settle...
INFO [main] 2022-07-06 20:51:40,812 Gossiper.java:2245 - No gossip backlog; proceeding
INFO [main] 2022-07-06 20:51:40,821 AuthCache.java:215 - (Re)initializing CredentialsCache (validity period/update interval/max entries) (2000/2000/1000)
INFO [main] 2022-07-06 20:51:40,923 NativeTransportService.java:130 - Using Netty Version: [netty-buffer=netty-buffer-4.1.58.Final.10b03e6, netty-codec=netty-codec-4.1.58.Final.10b03e6, netty-codec-dns=netty-codec-dns-4.1.58.Final.10b03e6, netty-codec-haproxy=netty-codec-haproxy-4.1.58.Final.10b03e6, netty-codec-http=netty-codec-http-4.1.58.Final.10b03e6, netty-codec-http2=netty-codec-http2-4.1.58.Final.10b03e6, netty-codec-memcache=netty-codec-memcache-4.1.58.Final.10b03e6, netty-codec-mqtt=netty-codec-mqtt-4.1.58.Final.10b03e6, netty-codec-redis=netty-codec-redis-4.1.58.Final.10b03e6, netty-codec-smtp=netty-codec-smtp-4.1.58.Final.10b03e6, netty-codec-socks=netty-codec-socks-4.1.58.Final.10b03e6, netty-codec-stomp=netty-codec-stomp-4.1.58.Final.10b03e6, netty-codec-xml=netty-codec-xml-4.1.58.Final.10b03e6, netty-common=netty-common-4.1.58.Final.10b03e6, netty-handler=netty-handler-4.1.58.Final.10b03e6, netty-handler-proxy=netty-handler-proxy-4.1.58.Final.10b03e6, netty-resolver=netty-resolver-4.1.58.Final.10b03e6, netty-resolver-dns=netty-resolver-dns-4.1.58.Final.10b03e6, netty-resolver-dns-native-macos=netty-resolver-dns-native-macos-4.1.58.Final.10b03e65f1, netty-transport=netty-transport-4.1.58.Final.10b03e6, netty-transport-native-epoll=netty-transport-native-epoll-4.1.58.Final.10b03e6, netty-transport-native-kqueue=netty-transport-native-kqueue-4.1.58.Final.10b03e65f1, netty-transport-native-unix-common=netty-transport-native-unix-common-4.1.58.Final.10b03e6, netty-transport-rxtx=netty-transport-rxtx-4.1.58.Final.10b03e6, netty-transport-sctp=netty-transport-sctp-4.1.58.Final.10b03e6, netty-transport-udt=netty-transport-udt-4.1.58.Final.10b03e6]
INFO [main] 2022-07-06 20:51:40,972 NativeTransportService.java:68 - Netty using native Epoll event loop
INFO [CompactionExecutor:2] 2022-07-06 20:51:40,978 CompactionTask.java:150 - Compacting (6fb5a2c0-fd6d-11ec-9792-3f8ce372cd42) [/bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-2-big-Data.db:level=0, /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-1-big-Data.db:level=0, /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-3-big-Data.db:level=0, /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-5-big-Data.db:level=0, /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-4-big-Data.db:level=0, ]
INFO [main] 2022-07-06 20:51:41,074 PipelineConfigurator.java:124 - Using Netty Version: [netty-buffer=netty-buffer-4.1.58.Final.10b03e6, netty-codec=netty-codec-4.1.58.Final.10b03e6, netty-codec-dns=netty-codec-dns-4.1.58.Final.10b03e6, netty-codec-haproxy=netty-codec-haproxy-4.1.58.Final.10b03e6, netty-codec-http=netty-codec-http-4.1.58.Final.10b03e6, netty-codec-http2=netty-codec-http2-4.1.58.Final.10b03e6, netty-codec-memcache=netty-codec-memcache-4.1.58.Final.10b03e6, netty-codec-mqtt=netty-codec-mqtt-4.1.58.Final.10b03e6, netty-codec-redis=netty-codec-redis-4.1.58.Final.10b03e6, netty-codec-smtp=netty-codec-smtp-4.1.58.Final.10b03e6, netty-codec-socks=netty-codec-socks-4.1.58.Final.10b03e6, netty-codec-stomp=netty-codec-stomp-4.1.58.Final.10b03e6, netty-codec-xml=netty-codec-xml-4.1.58.Final.10b03e6, netty-common=netty-common-4.1.58.Final.10b03e6, netty-handler=netty-handler-4.1.58.Final.10b03e6, netty-handler-proxy=netty-handler-proxy-4.1.58.Final.10b03e6, netty-resolver=netty-resolver-4.1.58.Final.10b03e6, netty-resolver-dns=netty-resolver-dns-4.1.58.Final.10b03e6, netty-resolver-dns-native-macos=netty-resolver-dns-native-macos-4.1.58.Final.10b03e65f1, netty-transport=netty-transport-4.1.58.Final.10b03e6, netty-transport-native-epoll=netty-transport-native-epoll-4.1.58.Final.10b03e6, netty-transport-native-kqueue=netty-transport-native-kqueue-4.1.58.Final.10b03e65f1, netty-transport-native-unix-common=netty-transport-native-unix-common-4.1.58.Final.10b03e6, netty-transport-rxtx=netty-transport-rxtx-4.1.58.Final.10b03e6, netty-transport-sctp=netty-transport-sctp-4.1.58.Final.10b03e6, netty-transport-udt=netty-transport-udt-4.1.58.Final.10b03e6]
INFO [main] 2022-07-06 20:51:41,074 PipelineConfigurator.java:125 - Starting listening for CQL clients on /0.0.0.0:9042 (unencrypted)...
INFO [main] 2022-07-06 20:51:41,084 CassandraDaemon.java:782 - Startup complete
INFO [CompactionExecutor:2] 2022-07-06 20:51:41,193 CompactionTask.java:241 - Compacted (6fb5a2c0-fd6d-11ec-9792-3f8ce372cd42) 5 sstables to [/bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-6-big,] to level=0. 5.224KiB to 5.085KiB (~97% of original) in 211ms. Read Throughput = 24.674KiB/s, Write Throughput = 24.019KiB/s, Row Throughput = ~2/s. 5 total partitions merged to 1. Partition merge counts were {5:1, }
INFO [NonPeriodicTasks:1] 2022-07-06 20:51:41,199 SSTable.java:111 - Deleting sstable: /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-4-big
INFO [NonPeriodicTasks:1] 2022-07-06 20:51:41,202 SSTable.java:111 - Deleting sstable: /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-2-big
INFO [NonPeriodicTasks:1] 2022-07-06 20:51:41,205 SSTable.java:111 - Deleting sstable: /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-1-big
INFO [NonPeriodicTasks:1] 2022-07-06 20:51:41,211 SSTable.java:111 - Deleting sstable: /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-3-big
INFO [NonPeriodicTasks:1] 2022-07-06 20:51:41,213 SSTable.java:111 - Deleting sstable: /bitnami/cassandra/data/data/system/local-7ad54392bcdd35a684174e047860b377/nb-5-big
INFO [OptionalTasks:1] 2022-07-06 20:51:51,027 CassandraRoleManager.java:339 - Created default superuser role 'cassandra'
bitnami/bitnami-docker-openldap:2.6.2
SHA-256 and SHA-512 password hashes can't be used.
We want to migrate our OpenLDAP database to the Bitnami image, but we can't perform the migration because of the missing module.
Include the pw-sha2 module from OpenLDAP.
Workaround: not using the Bitnami openldap container.
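For context, this is roughly how the requested module would be enabled on a cn=config-based slapd — a hypothetical sketch that assumes the pw-sha2 module file were actually shipped in the image's module path (which is exactly what this issue asks for):

```ldif
# Hypothetical LDIF: load the pw-sha2 overlay module, assuming pw-sha2.la/.so
# exists in olcModulePath. Once loaded, slapd can verify userPassword values
# using the {SHA256}/{SHA512}/{SSHA256}/{SSHA512} schemes.
dn: cn=module{0},cn=config
changetype: modify
add: olcModuleLoad
olcModuleLoad: pw-sha2
```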
bitnami/keycloak:latest
Enable --spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true via the YAML file, or provide some other way to handle this case.
Solution:
https://www.keycloak.org/2022/04/keycloak-1800-released#_openid_connect_logout
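A minimal sketch of what the reporter is asking for, assuming the Bitnami image forwards KEYCLOAK_EXTRA_ARGS to the kc.sh start command (service name and compose layout are illustrative):

```yaml
# docker-compose sketch: pass the legacy-logout SPI flag through to Keycloak
services:
  keycloak:
    image: bitnami/keycloak:latest
    environment:
      - KEYCLOAK_EXTRA_ARGS=--spi-login-protocol-openid-connect-legacy-logout-redirect-uri=true
```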
bitnami/wordpress-nginx:6
The startup script should configure WordPress, because /bitnami/wordpress/wp-config.php does not exist.
Instead, the startup script considers the app already initialized.
The error is at https://github.com/bitnami/bitnami-docker-wordpress-nginx/blob/master/6/debian-11/rootfs/opt/bitnami/scripts/libwordpress.sh#L231: the script checks for /opt/bitnami/wordpress/wp-config.php instead of /bitnami/wordpress/wp-config.php.
/opt/bitnami/wordpress/wp-config.php is always present, since it's part of the image.
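The fix the reporter describes can be sketched as a check against the persisted volume path rather than the image path. This is a hypothetical helper, not the actual libwordpress.sh function:

```shell
# Sketch of the corrected initialization check: detect persisted state from
# the volume copy of wp-config.php, not the copy baked into the image
# (which always exists and therefore always reports "initialized").
wordpress_is_initialized() {
    local volume_dir="${1:-/bitnami/wordpress}"
    [[ -f "${volume_dir}/wp-config.php" ]]
}
```

With this check, a fresh volume (no wp-config.php yet) correctly triggers the WordPress setup path on first boot.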
bitnami/keycloak: 18.0.2
I would like to run integration/e2e tests with a throwaway Keycloak instance which does not necessarily need a standalone database.
Allow the use of the environment variable DB_VENDOR=H2, which enables the built-in database that, as far as I know, does not persist its data.
This would save some resources when setting up a full test environment, as the Postgres database would not need to be deployed.
bitnami/charts#10992 (comment)
No response
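For comparison, the upstream Quarkus-based Keycloak image already offers throwaway behavior via dev mode, which falls back to an embedded H2 ("dev-file") database. This sketch only illustrates the requested behavior; the Bitnami image currently requires an external database, and the credentials below are placeholders:

```yaml
# docker-compose sketch: disposable Keycloak for e2e tests, no external DB
services:
  keycloak-test:
    image: quay.io/keycloak/keycloak:18.0.2
    command: start-dev
    environment:
      - KEYCLOAK_ADMIN=admin
      - KEYCLOAK_ADMIN_PASSWORD=admin
    ports:
      - "8080:8080"
```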
bitnami/postgresql-repmgr:11.15.0-debian-10-r65
postgresql.syncReplication=true
synchronous_commit = 'on'
synchronous_standby_names = '2 ("postgresql-ha-postgresql-0","postgresql-ha-postgresql-1","postgresql-ha-postgresql-2")'
Also, pg_stat_replication shows sync for both deployed replicas.
6. ✔️ Delete primary pod. A new primary is elected, and the remaining standby now follows the newly promoted primary.
7. 🔴 Synchronous replication is gone. Freshly promoted primary (former replica) is not aware of synchronous replication:
#synchronous_commit = on # synchronization level;
#synchronous_standby_names = '' # standby servers that provide sync rep
postgresql_configure_synchronous_replication must also run on replicas (even when POSTGRESQL_FIRST_BOOT=no), so the configuration is prepared in case of promotion to primary.
As described above: when a new primary is elected, replication is set to async. When the original primary pod is recreated, replication is also set to async and cannot be changed back to synchronous.
One more thing that comes to my mind: the current configuration is very strict — the loss of a single replica makes all ongoing transactions hang and wait until all replicas are back online. This behavior may not be desired in rapidly changing environments like Kubernetes, where pods may fail, be evicted, and so on. It is possible to relax the synchronous_standby_names value by requiring a lower number of replicas to confirm a transaction, but the Helm chart hard-codes this value to .Values.postgresql.replicaCount - 1. Also, it's impossible to choose between FIRST and ANY.
Reference: postgresql.org
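The relaxed, quorum-based configuration the reporter asks for can be sketched directly in postgresql.conf. The standby names below assume the application_name values used by the chart's pods; 'ANY 1' lets a commit proceed once any one listed standby confirms it, instead of blocking on all of them:

```conf
# postgresql.conf sketch: quorum-based synchronous replication
synchronous_commit = on
synchronous_standby_names = 'ANY 1 ("postgresql-ha-postgresql-1","postgresql-ha-postgresql-2")'
```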
bitnami/kafka:2.8.0
My kafka job:
config {
  image        = "bitnami/kafka:2.8.0"
  force_pull   = true
  network_mode = "host"
}
env {
  BLACK_SPARKLE_AUTH_SERVICE = "https://app.${meta.environment}.site/api/v1/users/auth"
  KAFKA_CFG_BROKER_ID = meta.broker_id
  KAFKA_CFG_ADVERTISED_LISTENERS = "CLIENT://${NOMAD_ADDR_public},INTERNAL://${NOMAD_ADDR_private}"
  KAFKA_CFG_LISTENERS = "CLIENT://${NOMAD_ADDR_public},INTERNAL://${NOMAD_ADDR_private}"
  KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP = "CLIENT:SASL_SSL,INTERNAL:PLAINTEXT"
  KAFKA_CFG_INTER_BROKER_LISTENER_NAME = "INTERNAL"
  KAFKA_CFG_DEFAULT_REPLICATION_FACTOR = "1"
  KAFKA_CFG_LOG_RETENTION_HOURS = "1"
  KAFKA_CFG_DELETE_TOPIC_ENABLE = "true"
  KAFKA_CFG_TLS_CLIENT_AUTH = "requested"
  KAFKA_CFG_SSL_TRUSTSTORE_PASSWORD = ""
  KAFKA_CFG_SASL_ENABLED_MECHANISMS = "PLAIN"
  KAFKA_CFG_LISTENER_NAME_CLIENT_PLAIN_SASL_SERVER_CALLBACK_HANDLER_CLASS = "global.kafka.auth.AuthHandler"
  KAFKA_CERTIFICATE_PASSWORD = ""
  ALLOW_PLAINTEXT_LISTENER = "yes"
  KAFKA_MOUNTED_CONF_DIR = "/local"
  KAFKA_JMX_OPTS = "-javaagent:/opt/jmx-exporter.jar=${NOMAD_ADDR_exporter}:/opt/jmx.config.yml -Djava.security.auth.login.config=/local/broker_jaas.conf -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=${NOMAD_IP_exporter} -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote.host=localhost"
  JMX_PORT = 9999
  KAFKA_ZOOKEEPER_PROTOCOL = "PLAIN"
  KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL = "PLAIN"
  KAFKA_ZOOKEEPER_USER = ""
  KAFKA_ZOOKEEPER_PASSWORD = ""
  KAFKA_INTER_BROKER_USER = ""
  KAFKA_INTER_BROKER_PASSWORD = ""
}
It's a small part of my Kafka job managed by Nomad. Some parts are very similar to docker-compose.
When I run this job with KAFKA_ZOOKEEPER_PROTOCOL=PLAIN, I get this error:
[2022-05-25 13:43:06,434] WARN SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/opt/bitnami/kafka/config/kafka_jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
[2022-05-25 13:43:06,435] INFO Opening socket connection to server ip/ip:2181 (org.apache.zookeeper.ClientCnxn)
[2022-05-25 13:43:06,435] ERROR [ZooKeeperClient Kafka server] Auth failed. (kafka.zookeeper.ZooKeeperClient)
[2022-05-25 13:43:09,800] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
Contents of '/opt/bitnami/kafka/config/kafka_jaas.conf':
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="user"
password="bitnami";
};
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
user_user="bitnami";
org.apache.kafka.common.security.scram.ScramLoginModule required;
};
When I used KAFKA_ZOOKEEPER_PROTOCOL="SASL", my '/opt/bitnami/kafka/config/kafka_jaas.conf' file looked like:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="user"
password="bitnami";
};
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
user_user="bitnami";
org.apache.kafka.common.security.scram.ScramLoginModule required;
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="some_user"
password="some_password";
};
but I still get an error like this:
[2022-05-25 14:01:10,870] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2022-05-25 14:01:10,870] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2022-05-25 14:01:10,872] INFO Opening socket connection to server ip/ip:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2022-05-25 14:01:13,973] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2022-05-25 14:01:16,756] WARN Client session timed out, have not heard from server in 6003ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)
[2022-05-25 14:01:16,863] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
[2022-05-25 14:01:16,863] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
[2022-05-25 14:01:16,864] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2022-05-25 14:01:16,866] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:271)
at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:267)
at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:125)
at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1948)
at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:431)
at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:456)
at kafka.server.KafkaServer.startup(KafkaServer.scala:191)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
[2022-05-25 14:01:16,868] INFO shutting down (kafka.server.KafkaServer)
I guess the 'Client' section is not added when KAFKA_ZOOKEEPER_PROTOCOL="PLAIN" is used.
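For reference, this is a sketch of the 'Client' section the ZooKeeper client looks up in kafka_jaas.conf. The credentials are placeholders; DigestLoginModule is what ZooKeeper's DIGEST-MD5 SASL client uses, matching the second log excerpt above:

```
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="zookeeper_user"
  password="zookeeper_password";
};
```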
No response