hazelcast / charts
Hazelcast Official Helm Chart Repository
License: Apache License 2.0
The helm install hazelcast2 hazelcast/hazelcast command can't create the Management Center pod in Minikube.
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/hazelcast2-0 1/1 Running 2 11m
pod/hazelcast2-1 1/1 Running 2 10m
pod/hazelcast2-2 1/1 Running 2 9m59s
pod/hazelcast2-mancenter-0 0/1 CrashLoopBackOff 3 77s
Output of kubectl describe pod/hazelcast2-mancenter-0:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 11m (x2 over 11m) default-scheduler pod has unbound immediate PersistentVolumeClaims
Normal Scheduled 11m default-scheduler Successfully assigned default/hazelcast-mancenter-0 to minikube
Warning Unhealthy 8m19s kubelet, minikube Liveness probe failed: Get http://172.17.0.3:8081/health: read tcp 172.17.0.1:39992->172.17.0.3:8081: read: connection reset by peer
Normal Pulled 6m40s (x5 over 11m) kubelet, minikube Container image "hazelcast/management-center:4.0" already present on machine
Normal Created 6m40s (x5 over 11m) kubelet, minikube Created container hazelcast-mancenter
Normal Started 6m40s (x5 over 11m) kubelet, minikube Started container hazelcast-mancenter
Warning BackOff 95s (x34 over 8m16s) kubelet, minikube Back-off restarting failed container
Management Center changed the discovery mechanism, so the chart needs to be updated. Currently the default livenessProbe endpoint is /health/node-state; it should actually be /health.
The change is trivial, but before applying it we need to double-check that it does not break rolling upgrades and scaling down.
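A minimal sketch of the corrected probes in the Management Center statefulset template, assuming port 8081, which is the MC health-check port visible in the logs above (other probe fields such as delays and thresholds are omitted):

livenessProbe:
  httpGet:
    path: /health
    port: 8081
readinessProbe:
  httpGet:
    path: /health
    port: 8081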
I see no tagging here, but we do have releases:
https://github.com/hazelcast/charts/tags
It would be good to tag the repo after each release so that we can simply check previous versions when troubleshooting.
Hi,
The mancenter.service.type is ClusterIP in the README, but it's LoadBalancer by default in the values.yaml file. Could you please check it?
Thanks,
When trying to set the license key with the jet.yaml.hazelcast.license-key option, the chart doesn't recognize the provided key and fails to start.
Running the Hazelcast server using the Helm chart with the service-dns value configured fails with the following error:
Caused by: com.hazelcast.config.InvalidConfigurationException: Properties 'service-dns' and ('service-name' or 'service-label-name') cannot be defined at the same time
I override the config properties from the parent chart as follows:
hazelcast:
  cluster:
    memberCount: 2
  service:
    clusterIP: "None"
  hazelcast:
    rest: true
    yaml:
      hazelcast:
        network:
          join:
            multicast:
              enabled: false
            kubernetes:
              enabled: true
              service-dns: ${serviceName}
        management-center:
          enabled: ${hazelcast.mancenter.enabled}
          url: ${hazelcast.mancenter.url}
I tried overriding the service-name property to null, but it still doesn't work.
Hello,
I'm trying to deploy the Hazelcast Helm chart version 3.4.0 (https://hazelcast-charts.s3.amazonaws.com/) on a Kubernetes cluster (orchestrated via Rancher).
I tried different approaches to provide values for the management center (javaopts, a ConfigMap containing a hazelcast-client.yaml file, ...), but I found no way to have the management center both:
- discover the cluster members (via the Kubernetes/DNS discovery), and
- connect to a cluster with a custom cluster name (tomcat instead of the default one, dev).
Consider that the cluster itself is up and running and the nodes can find each other.
That said, below I provide all the files related to the management center deployment in this chart.
fullnameOverride: "hazelcast-mau"
image:
  tag: "4.0.1"
cluster:
  memberCount: 2
metrics:
  enabled: true
rbac:
  enabled: false
  create: false
serviceAccount:
  create: false
metrics:
  enabled: false
mancenter:
  enabled: true
  image:
    tag: "4.0.2"
  javaOpts: "-Dhazelcast.mc.phone.home.enabled=false"
  persistence:
    enabled: true
    storageClass: "bronze"
    size: 2Gi
  service:
    type: ClusterIP
  ingress:
    enabled: true
    hosts:
      - "hazelcast-mau-mancenter.mau-test.test.swissid.xyz"
hazelcast:
  # In this configmap, the Hazelcast cluster configuration is set
  existingConfigMap: hazelcast-configmap
mancenter:
  existingConfigMap: hazelcast-mancenter-configmap
  # Force to use DNS Lookup strategy
  javaOpts: "-Dhazelcast.kubernetes.service-dns=hazelcast-mau.mau-test.svc.cluster.local"
The hazelcast-mancenter-configmap ConfigMap, in which the hazelcast-client.yaml file is stored:
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    field.cattle.io/projectId: "c-tddvl:p-s55rc"
  name: hazelcast-mancenter-configmap
  namespace: mau-test
data:
  hazelcast-client.yaml: |-
    hazelcast-client:
      cluster-name: tomcat
      network:
        kubernetes:
          enabled: true
With this configuration, the management center complains as follows:
using automatic sizing of heap size by up to 80% of available memory and starting with container support
executing command specified by MC_INIT_CMD for container initialization
Successfully added Cluster Config.
##################################################
# initialisation complete, starting now....
##################################################
+ exec java --add-opens java.base/java.lang=ALL-UNNAMED -server -Dhazelcast.mc.home=/data -Djava.net.preferIPv4Stack=true -Dhazelcast.mc.healthCheck.enable=true -DserviceName=hazelcast-mau -Dhazelcast.mc.tls.enabled=false -Dhazelcast.kubernetes.service-dns=hazelcast-mau.mau-test.svc.cluster.local -XX:+UseContainerSupport -XX:MaxRAMPercentage=80 -cp /opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war -Dhazelcast.mc.contextPath=/ -Dhazelcast.mc.http.port=8080 -Dhazelcast.mc.https.port=8443 com.hazelcast.webmonitor.Launcher
2020-06-16 10:25:03 [main] INFO c.h.webmonitor.config.BuildInfo - Hazelcast Management Center 4.0.2
2020-06-16 10:25:03 [main] INFO com.hazelcast.webmonitor.Launcher - Health check is enabled and available at http://localhost:8081/health
2020-06-16 10:25:07 [main] INFO c.h.webmonitor.config.SqlDbConfig - Checking DB for required migrations.
2020-06-16 10:25:07 [main] INFO c.h.webmonitor.config.SqlDbConfig - Number of applied DB migrations: 0.
2020-06-16 10:25:07 [main] INFO c.h.webmonitor.config.AppConfig - Creating cache with maxSize=768
2020-06-16 10:25:07 [main] INFO c.h.w.storage.DiskUsageMonitor - Monitoring /data [mode=purge, interval=1000ms, limit=512 MB]
2020-06-16 10:25:07 [main] INFO c.h.w.s.s.impl.DisableLoginStrategy - Login will be disabled for 5 seconds after 3 failed login attempts. For every 3 consecutive failed login attempts, disable period will be multiplied by 10.
2020-06-16 10:25:07 [main] INFO c.h.i.m.impl.MetricsConfigHelper - MC-Client-tomcat [tomcat] [4.0.1] Overridden metrics configuration with system property 'hazelcast.client.metrics.enabled'='false' -> 'ClientMetricsConfig.enabled'='false'
2020-06-16 10:25:08 [main] ERROR c.h.w.service.ClusterManager - Failed to start client for cluster tomcat.
com.hazelcast.config.InvalidConfigurationException: Invalid configuration
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.loadDiscoveryStrategies(DefaultDiscoveryService.java:147)
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.<init>(DefaultDiscoveryService.java:57)
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryServiceProvider.newDiscoveryService(DefaultDiscoveryServiceProvider.java:29)
at com.hazelcast.client.impl.clientside.ClusterDiscoveryServiceBuilder.initDiscoveryService(ClusterDiscoveryServiceBuilder.java:246)
at com.hazelcast.client.impl.clientside.ClusterDiscoveryServiceBuilder.build(ClusterDiscoveryServiceBuilder.java:99)
at com.hazelcast.client.impl.clientside.HazelcastClientInstanceImpl.initClusterDiscoveryService(HazelcastClientInstanceImpl.java:285)
at com.hazelcast.client.impl.clientside.HazelcastClientInstanceImpl.<init>(HazelcastClientInstanceImpl.java:242)
at com.hazelcast.client.HazelcastClient.constructHazelcastClient(HazelcastClient.java:458)
at com.hazelcast.client.HazelcastClient.newHazelcastClientInternal(HazelcastClient.java:416)
at com.hazelcast.client.HazelcastClient.newHazelcastClient(HazelcastClient.java:136)
at com.hazelcast.webmonitor.service.client.ImdgClientManager.newClient(ImdgClientManager.java:122)
at com.hazelcast.webmonitor.service.ClusterManager.newClient(ClusterManager.java:203)
at com.hazelcast.webmonitor.service.ClusterManager.lambda$new$0(ClusterManager.java:74)
at java.base/java.util.ArrayList.forEach(Unknown Source)
at com.hazelcast.webmonitor.service.ClusterManager.<init>(ClusterManager.java:69)
CUT CUT CUT CUT
Caused by: com.hazelcast.config.properties.ValidationException: There is no discovery strategy factory to create 'DiscoveryStrategyConfig{properties={}, className='com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy', discoveryStrategyFactory=null}' Is it a typo in a strategy classname? Perhaps you forgot to include implementation on a classpath?
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.buildDiscoveryStrategy(DefaultDiscoveryService.java:186)
at com.hazelcast.spi.discovery.impl.DefaultDiscoveryService.loadDiscoveryStrategies(DefaultDiscoveryService.java:141)
... 59 common frames omitted
MC_INIT_CMD: ./mc-conf.sh cluster add --lenient=true -H /data -cc /config/hazelcast-client.yaml
So here you can see that it correctly took tomcat as the cluster name (see the content of the hazelcast-client.yaml file provided in the ConfigMap hazelcast-mancenter-configmap), but it complains about an invalid (?) configuration.
I tried another approach: removing the mancenter.existingConfigMap entry from the Helm values YAML file (leaving the javaOpts one with the same value), but this time the management center complains about failed authentication:
using automatic sizing of heap size by up to 80% of available memory and starting with container support
executing command specified by MC_INIT_CMD for container initialization
Successfully added Cluster Config.
##################################################
# initialisation complete, starting now....
##################################################
+ exec java --add-opens java.base/java.lang=ALL-UNNAMED -server -Dhazelcast.mc.home=/data -Djava.net.preferIPv4Stack=true -Dhazelcast.mc.healthCheck.enable=true -DserviceName=hazelcast-mau -Dhazelcast.mc.tls.enabled=false -Dhazelcast.kubernetes.service-dns=hazelcast-mau.mau-test.svc.cluster.local -XX:+UseContainerSupport -XX:MaxRAMPercentage=80 -cp /opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war -Dhazelcast.mc.contextPath=/ -Dhazelcast.mc.http.port=8080 -Dhazelcast.mc.https.port=8443 com.hazelcast.webmonitor.Launcher
2020-06-16 10:50:00 [main] INFO c.h.webmonitor.config.BuildInfo - Hazelcast Management Center 4.0.2
2020-06-16 10:50:01 [main] INFO com.hazelcast.webmonitor.Launcher - Health check is enabled and available at http://localhost:8081/health
2020-06-16 10:50:04 [main] INFO c.h.webmonitor.config.SqlDbConfig - Checking DB for required migrations.
2020-06-16 10:50:04 [main] INFO c.h.webmonitor.config.SqlDbConfig - Number of applied DB migrations: 0.
2020-06-16 10:50:04 [main] INFO c.h.webmonitor.config.AppConfig - Creating cache with maxSize=768
2020-06-16 10:50:04 [main] INFO c.h.w.storage.DiskUsageMonitor - Monitoring /data [mode=purge, interval=1000ms, limit=512 MB]
2020-06-16 10:50:04 [main] INFO c.h.w.s.s.impl.DisableLoginStrategy - Login will be disabled for 5 seconds after 3 failed login attempts. For every 3 consecutive failed login attempts, disable period will be multiplied by 10.
2020-06-16 10:50:05 [main] INFO c.h.i.m.impl.MetricsConfigHelper - MC-Client-dev [dev] [4.0.1] Overridden metrics configuration with system property 'hazelcast.client.metrics.enabled'='false' -> 'ClientMetricsConfig.enabled'='false'
2020-06-16 10:50:05 [main] INFO c.h.c.i.spi.ClientInvocationService - MC-Client-dev [dev] [4.0.1] Running with 2 response threads, dynamic=true
2020-06-16 10:50:05 [main] INFO com.hazelcast.core.LifecycleService - MC-Client-dev [dev] [4.0.1] HazelcastClient 4.0.1 (20200409 - eb984ad, e086b9c) is STARTING
2020-06-16 10:50:05 [main] INFO com.hazelcast.core.LifecycleService - MC-Client-dev [dev] [4.0.1] HazelcastClient 4.0.1 (20200409 - eb984ad, e086b9c) is STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2020-06-16 10:50:05 [MC-Client-dev.internal-1] INFO c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev] [4.0.1] Trying to connect to cluster: dev
2020-06-16 10:50:05 [main] INFO c.h.internal.diagnostics.Diagnostics - MC-Client-dev [dev] [4.0.1] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-06-16 10:50:05 [MC-Client-dev.internal-1] WARN c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev] [4.0.1] Unable to get live cluster connection, retry in 1000 ms, attempt: 1 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000
2020-06-16 10:50:05 [main] INFO com.hazelcast.webmonitor.Launcher -
Hazelcast Management Center successfully started at http://localhost:8080/
2020-06-16 10:50:06 [MC-Client-dev.internal-1] WARN c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev] [4.0.1] Unable to get live cluster connection, retry in 2000 ms, attempt: 2 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000
2020-06-16 10:50:08 [MC-Client-dev.internal-1] WARN c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev] [4.0.1] Unable to get live cluster connection, retry in 4000 ms, attempt: 3 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000
2020-06-16 10:50:12 [MC-Client-dev.internal-1] WARN c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev] [4.0.1] Unable to get live cluster connection, retry in 8000 ms, attempt: 4 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000
2020-06-16 10:50:20 [MC-Client-dev.internal-1] INFO c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev] [4.0.1] Trying to connect to [hazelcast-mau]:5701
2020-06-16 10:50:20 [MC-Client-dev.internal-1] WARN c.h.c.i.c.nio.ClientConnection - MC-Client-dev [dev] [4.0.1] ClientConnection{alive=false, connectionId=1, channel=NioChannel{/172.24.9.185:40709->hazelcast-mau/172.24.9.15:5701}, remoteEndpoint=null, lastReadTime=2020-06-16 10:50:20.331, lastWriteTime=2020-06-16 10:50:20.324, closedTime=2020-06-16 10:50:20.333, connected server version=null} closed. Reason: Failed to authenticate connection
com.hazelcast.client.AuthenticationException: Authentication failed. The configured cluster name on the client (see ClientConfig.setClusterName()) does not match the one configured in the cluster or the credentials set in the Client security config could not be authenticated
at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.authenticateOnCluster(ClientConnectionManagerImpl.java:793)
at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.getOrConnect(ClientConnectionManagerImpl.java:581)
at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.connect(ClientConnectionManagerImpl.java:423)
at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.doConnectToCandidateCluster(ClientConnectionManagerImpl.java:451)
at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.doConnectToCluster(ClientConnectionManagerImpl.java:385)
at com.hazelcast.client.impl.connection.nio.ClientConnectionManagerImpl.lambda$submitConnectToClusterTask$1(ClientConnectionManagerImpl.java:359)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64)
at com.hazelcast.internal.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80)
2020-06-16 10:50:20 [MC-Client-dev.internal-1] WARN c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev] [4.0.1] Exception during initial connection to [hazelcast-mau]:5701: com.hazelcast.client.AuthenticationException: Authentication failed. The configured cluster name on the client (see ClientConfig.setClusterName()) does not match the one configured in the cluster or the credentials set in the Client security config could not be authenticated
2020-06-16 10:50:20 [MC-Client-dev.internal-1] INFO c.h.c.i.c.ClientConnectionManager - MC-Client-dev [dev] [4.0.1] Trying to connect to [hazelcast-mau]:5703
This seems to be correct, since here the cluster name is dev (the default) and not tomcat.
I tried a final approach: putting back the ConfigMap hazelcast-mancenter-configmap containing the hazelcast-client.yaml file, but this time with this content:
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    field.cattle.io/projectId: "c-tddvl:p-s55rc"
  name: hazelcast-mancenter-configmap
  namespace: mau-test
data:
  hazelcast-client.yaml: |-
    hazelcast-client:
      cluster-name: tomcat
Here the whole network.kubernetes.enabled: true section is no longer provided. So the management center now correctly tries to connect to a cluster named tomcat, but this time it looks at 127.0.0.1:
using automatic sizing of heap size by up to 80% of available memory and starting with container support
executing command specified by MC_INIT_CMD for container initialization
Successfully added Cluster Config.
##################################################
# initialisation complete, starting now....
##################################################
+ exec java --add-opens java.base/java.lang=ALL-UNNAMED -server -Dhazelcast.mc.home=/data -Djava.net.preferIPv4Stack=true -Dhazelcast.mc.healthCheck.enable=true -DserviceName=hazelcast-mau -Dhazelcast.mc.tls.enabled=false -Dhazelcast.kubernetes.service-dns=hazelcast-mau.mau-test.svc.cluster.local -XX:+UseContainerSupport -XX:MaxRAMPercentage=80 -cp /opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war -Dhazelcast.mc.contextPath=/ -Dhazelcast.mc.http.port=8080 -Dhazelcast.mc.https.port=8443 com.hazelcast.webmonitor.Launcher
2020-06-16 10:55:32 [main] INFO c.h.webmonitor.config.BuildInfo - Hazelcast Management Center 4.0.2
2020-06-16 10:55:33 [main] INFO com.hazelcast.webmonitor.Launcher - Health check is enabled and available at http://localhost:8081/health
2020-06-16 10:55:36 [main] INFO c.h.webmonitor.config.SqlDbConfig - Checking DB for required migrations.
2020-06-16 10:55:36 [main] INFO c.h.webmonitor.config.SqlDbConfig - Number of applied DB migrations: 0.
2020-06-16 10:55:36 [main] INFO c.h.webmonitor.config.AppConfig - Creating cache with maxSize=768
2020-06-16 10:55:36 [main] INFO c.h.w.storage.DiskUsageMonitor - Monitoring /data [mode=purge, interval=1000ms, limit=512 MB]
2020-06-16 10:55:36 [main] INFO c.h.w.s.s.impl.DisableLoginStrategy - Login will be disabled for 5 seconds after 3 failed login attempts. For every 3 consecutive failed login attempts, disable period will be multiplied by 10.
2020-06-16 10:55:37 [main] INFO c.h.i.m.impl.MetricsConfigHelper - MC-Client-tomcat [tomcat] [4.0.1] Overridden metrics configuration with system property 'hazelcast.client.metrics.enabled'='false' -> 'ClientMetricsConfig.enabled'='false'
2020-06-16 10:55:37 [main] INFO c.h.c.i.spi.ClientInvocationService - MC-Client-tomcat [tomcat] [4.0.1] Running with 2 response threads, dynamic=true
2020-06-16 10:55:37 [main] INFO com.hazelcast.core.LifecycleService - MC-Client-tomcat [tomcat] [4.0.1] HazelcastClient 4.0.1 (20200409 - eb984ad, e086b9c) is STARTING
2020-06-16 10:55:37 [main] INFO com.hazelcast.core.LifecycleService - MC-Client-tomcat [tomcat] [4.0.1] HazelcastClient 4.0.1 (20200409 - eb984ad, e086b9c) is STARTED
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.hazelcast.internal.networking.nio.SelectorOptimizer (file:/opt/hazelcast/management-center/hazelcast-management-center-4.0.2.war) to field sun.nio.ch.SelectorImpl.selectedKeys
WARNING: Please consider reporting this to the maintainers of com.hazelcast.internal.networking.nio.SelectorOptimizer
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1] INFO c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat] [4.0.1] Trying to connect to cluster: tomcat
2020-06-16 10:55:37 [main] INFO c.h.internal.diagnostics.Diagnostics - MC-Client-tomcat [tomcat] [4.0.1] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1] INFO c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat] [4.0.1] Trying to connect to [127.0.0.1]:5701
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1] WARN c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat] [4.0.1] Exception during initial connection to [127.0.0.1]:5701: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused to address /127.0.0.1:5701
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1] INFO c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat] [4.0.1] Trying to connect to [127.0.0.1]:5702
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1] WARN c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat] [4.0.1] Exception during initial connection to [127.0.0.1]:5702: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused to address /127.0.0.1:5702
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1] INFO c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat] [4.0.1] Trying to connect to [127.0.0.1]:5703
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1] WARN c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat] [4.0.1] Exception during initial connection to [127.0.0.1]:5703: com.hazelcast.core.HazelcastException: java.net.SocketException: Connection refused to address /127.0.0.1:5703
2020-06-16 10:55:37 [MC-Client-tomcat.internal-1] WARN c.h.c.i.c.ClientConnectionManager - MC-Client-tomcat [tomcat] [4.0.1] Unable to get live cluster connection, retry in 1000 ms, attempt: 1 , cluster connect timeout: 9223372036854775807 seconds , max backoff millis: 32000
So, to conclude: is this a configuration issue, or is it effectively a bug?
I honestly found no other way to configure these two parameters together, and I was not able to find any relevant hint in the documentation.
Thank you for your time.
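A possible workaround sketch, assuming the chart's headless service resolves to the member pods via DNS: pin the custom cluster name while listing the members explicitly, avoiding the Kubernetes discovery plugin (the service FQDN is the one already used in the javaOpts above; whether this fits your setup is untested):

hazelcast-client:
  cluster-name: tomcat
  network:
    # assumption: the headless service DNS name resolves to the member pod IPs
    cluster-members:
      - hazelcast-mau.mau-test.svc.cluster.local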
When a ConfigMap is changed, helm upgrade should restart the Pods.
Comment from @hasancelik:
It seems helm supports it:
https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
We can apply the logic below in our statefulset.yaml files, I think:
https://github.com/helm/charts/blob/1ccfc8be4f3ca5f26b991f7e1d2eaccd9bbefadf/stable/grafana/templates/statefulset.yaml#L24
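A minimal sketch of that trick applied to a statefulset template, assuming a configmap.yaml template in the same chart (the annotation's checksum changes whenever the rendered ConfigMap changes, which forces a rolling restart):

kind: StatefulSet
spec:
  template:
    metadata:
      annotations:
        # recompute the config checksum on every render
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}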
In this file (https://github.com/hazelcast/charts/blob/master/stable/hazelcast-enterprise/templates/service.yaml) I would like to be able to set a static node port. Please add:
nodePort: {{ .Values.service.nodePort }}
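A sketch of how that template change could look in the Service spec, assuming the chart's standard member port 5701 (port names illustrative; nodePort only takes effect for NodePort and LoadBalancer service types):

ports:
- name: hazelcast
  protocol: TCP
  port: 5701
  targetPort: hazelcast
  nodePort: {{ .Values.service.nodePort }}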
When we install the Hazelcast Helm charts, we have to sign up and then log in every time after the installation. Instead of doing that, the Helm chart could create an admin user and keep a generated password in a Secret.
This is a sample Dockerfile which creates a built-in user:
https://github.com/hazelcast/hazelcast-docker-samples/blob/master/management-center-built-in-user/Dockerfile#L5
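A hypothetical sketch of how the chart could store generated credentials (the resource name and keys are illustrative, not part of the chart today; randAlphaNum is a standard Sprig function available in Helm templates):

apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-mancenter-admin   # hypothetical name
type: Opaque
stringData:
  username: admin
  password: {{ randAlphaNum 16 | quote }}     # generated once per install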
Follow up on the configuration changes and apply them to the charts accordingly.
When someone from the community sends a PR to our official Helm chart, we need to port it to this repo:
#60
helm/charts#17193
Even if a developer copies changes from the official repo without any modification, git can find unimportant diffs (whitespace, etc.). In such cases, the create-pr-at-official-helm-repo script should not create a nonsense PR in the official repo.
To prevent this, the developer could put a special string like [not-sync] into the commit message so that the create-pr-at-official-helm-repo script can parse it. WDYT? @leszko @eminn
Hazelcast uses a Kubernetes Service to discover the other Hazelcast members, so there is no need for a LoadBalancer or ClusterIP service type. The Service should be headless by default; this is actually the recommended approach:
https://kubernetes.io/docs/concepts/configuration/overview/#services
Use headless Services (which have a ClusterIP of None) for easy service discovery when you don't need kube-proxy load balancing.
I have also found that Hazelcast only works with headless Services when used with the Istio service mesh.
https://github.com/hazelcast-guides/hazelcast-istio
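A minimal values sketch of the proposed headless default (clusterIP: "None" is the same override that already appears in the parent-chart example earlier in this document):

service:
  type: ClusterIP
  clusterIP: "None"   # headless: DNS returns the member pod IPs directly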
How to reproduce
Create a keystore with key and certificates
kubectl create secret generic keystore --from-file=key.pem --from-file=chain.pem --from-file=cert.pem
Install hazelcast-enterprise with the following command
helm install --name hazelcast-openssl \
--set hazelcast.licenseKey=<license-key> \
--set hazelcast.ssl=true \
--set secretsMountName=keystore \
--set hazelcast.yaml.hazelcast.network.ssl.factory-class-name=com.hazelcast.nio.ssl.OpenSSLEngineFactory \
--set hazelcast.yaml.hazelcast.network.ssl.properties.keyFile=/data/secrets/key.pem \
--set hazelcast.yaml.hazelcast.network.ssl.properties.trustCertCollectionFile=/data/secrets/cert.pem \
--set hazelcast.yaml.hazelcast.network.ssl.properties.keyCertChainFile=/data/secrets/chain.pem \
hazelcast/hazelcast-enterprise
The exception in the Hazelcast member log says the filesystem is read-only:
Caused by: java.io.IOException: Read-only file system
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createTempFile(File.java:2024)
at io.netty.util.internal.NativeLibraryLoader.load(NativeLibraryLoader.java:183)
Workaround: explicitly setting readOnlyRootFilesystem: false fixes the problem.
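A minimal values sketch of the workaround, assuming the chart exposes the container securityContext in values.yaml (the NativeLibraryLoader frame in the stack trace above shows Netty extracting a native library to a temp file, which needs a writable filesystem):

securityContext:
  readOnlyRootFilesystem: false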
As a user, I would like to configure logging by specifying a logging framework (e.g. log4j, log4j2, logback) and a configuration. Some of the logging framework libraries can also be included among the lib JARs.
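A purely illustrative sketch of what such a values interface could look like (none of these keys exist in the chart today):

logging:
  framework: log4j2                     # hypothetical: one of jdk, log4j, log4j2, logback
  existingConfigMap: my-log4j2-config   # hypothetical: ConfigMap holding the framework's config file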
It would be nice to have an opt-in ingress for the man-center in https://github.com/hazelcast/charts/tree/master/stable/hazelcast/templates
With Helm 2, customVolume was working until this commit and stopped working with this commit. You can verify it simply by:
helm upgrade --install my-chart \
--set hazelcast.licenseKey=<key> \
--set customVolume.hostPath.path=/tmp/ \
hazelcast/hazelcast-enterprise --version 3.4.4 --debug --dry-run
It will throw Error: YAML parse error on hazelcast-enterprise/templates/statefulset.yaml: error converting YAML to JSON: yaml: line 101: mapping values are not allowed in this context. Here line 101 is not the 101st line in the YAML file; it is the relative line number in the dry-run output, which corresponds to:
volumes:
- name: hazelcast-storage
  configMap:
    name: huseyin-hz-hazelcast-enterprise-configuration
- name: hazelcast-custom
  hostPath:
    path: /tmp/
Reported by @neilstevenson.
The Helm chart only creates non-lite members. It would be useful if it could also create lite members.
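A sketch of what this could look like via the hazelcast.yaml.* pass-through used elsewhere in this document (lite-member is standard Hazelcast 4.x member configuration; whether the chart should instead expose a dedicated flag is the open question):

hazelcast:
  yaml:
    hazelcast:
      lite-member:
        enabled: true   # member joins the cluster but owns no partitions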
We already have a lot of similar Helm charts. We need to create some tooling to maintain them effectively. An idea would be a tool that applies changes everywhere from a *.patch file, or a tool that quickly produces a diff of the charts.
We should also research what other projects do (e.g. MySQL).
Remove this line:
hub api repos/${HELM_REPOSITORY}/issues/${OFFICIAL_HELM_PR_ISSUE_NUMBER}/comments -f body='Please approve @hasancelik @leszko @mesutcelik @googlielmo @eminn'
and add the part below:
hub api repos/${HELM_REPOSITORY}/issues/${OFFICIAL_HELM_PR_ISSUE_NUMBER}/comments -f body='/ok-to-test'
hub api repos/${HELM_REPOSITORY}/issues/${OFFICIAL_HELM_PR_ISSUE_NUMBER}/comments -f body='/lgtm'
export GITHUB_TOKEN=${APPROVER_GITHUB_TOKEN}
hub api repos/${HELM_REPOSITORY}/issues/${OFFICIAL_HELM_PR_ISSUE_NUMBER}/comments -f body='/lgtm'
The ingress does not work because the ingress path is not defined; the path should be set to /hazelcast-mancenter.
It might be a nicer experience if more config settings could be set via parameters to helm install rather than in values.yaml or hazelcast.xml. These commands will often be run by ops teams; this would give a better experience and make scripting easier.
The mancenter.persistence.enabled parameter is used to create a PersistentVolume for MC, but we need to create a PV every time MC is deployed with the Helm chart. I believe we should remove mancenter.persistence.enabled. @emre-aydin What do you think?
Currently, to mount a volume containing custom JARs or a keystore/truststore, a user needs to modify templates/*. Allow doing it just by modifying values.yaml.
Currently, when you change only the README (so you don't want to bump the chart version and therefore make a release), the Jenkins PR Builder fails.
Problem: if the target endpoint is set to a name-based value, either an FQDN or a Service name, member auto-discovery doesn't work. It does work when using an IP-based target, e.g. target-endpoints: "XX.239.105.XX:30XXX".
The reason for this request is a simplified setup for HA.
Hi,
the issue is that the stateful sets do not terminate instantly, or even quickly. We want to be able to configure this. The default set in the cluster is 600, which can be a long time.
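A sketch of making this configurable in the statefulset template (the values key is hypothetical; 600 matches the -Dhazelcast.graceful.shutdown.max.wait=600 default visible in the member logs elsewhere in this document):

spec:
  template:
    spec:
      # hypothetical value; falls back to the current default
      terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds | default 600 }}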
This is a minor security and configuration improvement. When setting the Service type for the mancenter to LoadBalancer, it opens a NodePort in the firewall by default. This could be considered a security risk, even though you can change it to ClusterIP. The request is to set the mancenter default to type ClusterIP, because it is also the Kubernetes default.
https://github.com/hazelcast/charts/blob/master/stable/hazelcast-enterprise/values.yaml#L336
Using VolumeClaimTemplates (as described here) would make each Hazelcast member receive a separate PersistentVolume for the Hot Restart directory. That would enable using ReadWriteOnce PersistentVolumes. Currently we can only use ReadWriteMany.
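A minimal sketch of such a volumeClaimTemplates entry in the member StatefulSet (the claim name and size are illustrative):

volumeClaimTemplates:
- metadata:
    name: hot-restart           # one PVC per member pod
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 8Gi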
Hi, I am Sri. I installed Hazelcast using Helm in my EKS cluster in the dev namespace. It was previously running 2 members and 1 mancenter; after that we tried to deploy another 2 members and 1 mancenter with a different name in the same namespace, but I am getting some errors in the member log files. Please help me with this.
These are the logs I am getting from the Hazelcast member:
exec java -server -javaagent:/opt/hazelcast/lib/jmx_prometheus_javaagent.jar=8080:/opt/hazelcast/jmx_agent_config.yaml -Dhazelcast.mancenter.enabled=false -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/opt/hazelcast/logging.properties -Dhazelcast.config=/data/hazelcast/hazelcast.yaml -DserviceName=hazelcast-dev-new -Dnamespace=dev -Dhazelcast.mancenter.enabled=true -Dhazelcast.mancenter.url=http://hazelcast-dev-new-mancenter:8080/hazelcast-mancenter -Dhazelcast.shutdownhook.policy=GRACEFUL -Dhazelcast.shutdownhook.enabled=true -Dhazelcast.graceful.shutdown.max.wait=600 -Dhazelcast.jmx=true com.hazelcast.core.server.StartServer
Dec 12, 2019 6:18:39 AM com.hazelcast.config.AbstractConfigLocator
INFO: Loading configuration '/data/hazelcast/hazelcast.yaml' from System property 'hazelcast.config'
Dec 12, 2019 6:18:39 AM com.hazelcast.config.AbstractConfigLocator
INFO: Using configuration file at /data/hazelcast/hazelcast.yaml
Dec 12, 2019 6:18:40 AM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12] Prefer IPv4 stack is true, prefer IPv6 addresses is false
Dec 12, 2019 6:18:40 AM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12] Picked [ip]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
Dec 12, 2019 6:18:40 AM com.hazelcast.system
INFO: [ip]:5701 [dev] [3.12] Hazelcast 3.12 (20190409 - 915d83a) starting at [ip]:5701
Dec 12, 2019 6:18:40 AM com.hazelcast.system
INFO: [ip]:5701 [dev] [3.12] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
Dec 12, 2019 6:18:40 AM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
INFO: [ip]:5701 [dev] [3.12] Backpressure is disabled
Dec 12, 2019 6:18:40 AM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [ip]:5701 [dev] [3.12] Kubernetes Discovery properties: { service-dns: null, service-dns-timeout: 5, service-name: hazelcast-dev-new, service-port: 0, service-label: null, service-label-value: true, namespace: dev, resolve-not-ready-addresses: true, kubernetes-master: https://kubernetes.default.svc}
Dec 12, 2019 6:18:40 AM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [ip]:5701 [dev] [3.12] Kubernetes Discovery activated resolver: KubernetesApiEndpointResolver
Dec 12, 2019 6:18:40 AM com.hazelcast.instance.Node
INFO: [ip]:5701 [dev] [3.12] Activating Discovery SPI Joiner
Dec 12, 2019 6:18:41 AM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [ip]:5701 [dev] [3.12] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
Dec 12, 2019 6:18:41 AM com.hazelcast.internal.diagnostics.Diagnostics
INFO: [ip]:5701 [dev] [3.12] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
Dec 12, 2019 6:18:41 AM com.hazelcast.core.LifecycleService
INFO: [ip]:5701 [dev] [3.12] [ip]:5701 is STARTING
Dec 12, 2019 6:18:41 AM com.hazelcast.kubernetes.KubernetesClient
WARNING: Cannot fetch public IPs of Hazelcast Member PODs, you won't be able to use Hazelcast Smart Client from outside of the Kubernetes network
Dec 12, 2019 6:18:47 AM com.hazelcast.internal.cluster.ClusterService
INFO: [ip]:5701 [dev] [3.12]
Members {size:1, ver:1} [
Member [ip]:5701 - 45c847bc-cd06-43eb-b2bb-709f2e849e3f this
]
Dec 12, 2019 6:18:47 AM com.hazelcast.internal.management.ManagementCenterService
INFO: [ip]:5701 [dev] [3.12] Hazelcast will connect to Hazelcast Management Center on address:
http://hazelcast-mancenter:8080/hazelcast-mancenter
Dec 12, 2019 6:18:47 AM com.hazelcast.internal.jmx.ManagementService
INFO: [ip]:5701 [dev] [3.12] Hazelcast JMX agent enabled.
Dec 12, 2019 6:18:47 AM com.hazelcast.core.LifecycleService
INFO: [ip]:5701 [dev] [3.12] [ip]:5701 is STARTED
Dec 12, 2019 6:18:52 AM com.hazelcast.internal.management.ManagementCenterService
INFO: [ip]:5701 [dev] [3.12] Failed to connect to: http://hazelcast-dev-new-mancenter:8080/hazelcast-mancenter/collector.do
Dec 12, 2019 6:18:52 AM com.hazelcast.client.impl.ClientEngine
INFO: [ip]:5701 [dev] [3.12] Applying a new client selector :ClientSelector{any}
Dec 12, 2019 6:19:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Initialized new cluster connection between /ip:5701 and /ip:44171
Dec 12, 2019 6:19:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=4, /ip:5701->/ip:43528, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:19:26 AM com.hazelcast.internal.cluster.ClusterService
INFO: [ip]:5701 [dev] [3.12]
Members {size:2, ver:2} [
Member [ip]:5701 - 45c847bc-cd06-43eb-b2bb-709f2e849e3f this
Member [ip]:5701 - c95eae99-b779-42a5-bf34-d21dd4bc64ae
]
Dec 12, 2019 6:19:37 AM com.hazelcast.internal.management.ManagementCenterService
INFO: [ip]:5701 [dev] [3.12] Connection to Management Center restored.
Dec 12, 2019 6:19:37 AM com.hazelcast.client.impl.ClientEngine
INFO: [ip]:5701 [dev] [3.12] Applying a new client selector :ClientSelector{any}
Dec 12, 2019 6:20:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=17, /ip:5701->/ip:44708, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:20:57 AM com.hazelcast.internal.management.ManagementCenterService
INFO: [ip]:5701 [dev] [3.12] Failed to pull tasks from Management Center
Dec 12, 2019 6:21:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=30, /ip:5701->/ip:46236, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:22:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=43, /ip:5701->/ip:47788, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:23:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=56, /ip:5701->/ip:49416, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
Dec 12, 2019 6:24:20 AM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [ip]:5701 [dev] [3.12] Connection[id=69, /ip:5701->/ip:51034, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Unsupported command received on REST API handler.
According to the operations guidelines, the recommended CPU sizing is a minimum of 8 cores.
How does this apply to a Kubernetes/OpenShift environment?
This chart (https://github.com/hazelcast/charts/blob/master/stable/hazelcast-enterprise/templates/mancenter-statefulset.yaml#L58) says to use port 8080 or 8443 for the health URL.
The health URL is actually on port 8081 (https://docs.hazelcast.org/docs/management-center/3.12.5/manual/html/index.html#enabling-health-check-endpoint), and it is always HTTP, not HTTPS.
- Why is runAsUser always set?
- fsgroup is not part of SecurityContext, see the doc.
- What about the RunAsGroup field?
I get
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 2m14s (x17 over 4m57s) replicaset-controller Error creating: pods "hazelcast-mancenter-5d5bcfd7d7-" is forbidden: unable to validate against any pod security policy: [spec.securityContext.fsGroup: Invalid value: []int64{100100}: group 100100 must be in the ranges: [{1 65535}]]
when attempting to install the mancenter
For development purposes we are using various namespaces for various environments.
In this case we can install only one Hazelcast release; every other deployment leads to the following error:
helm upgrade --install --wait hazelcast hazelcast/hazelcast --set mancenter.persistence.enabled=true,mancenter.ingress.enabled=true,mancenter.ingress.hosts={hazelcast-uat.company.com},mancenter.ingress.annotations."kubernetes\.io/ingress\.class=nginx",cluster.memberCount=3,mancenter.service.type=ClusterIP --namespace=system-uat
Release "hazelcast" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "hazelcast" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "system-uat": current value is "system-test"
I believe that the ability to install multiple releases in the same Kubernetes cluster (but in different namespaces) is a must for any chart.
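The error suggests the collision is on the cluster-scoped ClusterRole, whose name does not include the namespace. A sketch of one possible fix in the RBAC template (the naming scheme is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # scope the cluster-wide resource name to the release to avoid collisions
  name: {{ .Release.Namespace }}-{{ .Release.Name }}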
Most of the Enterprise license keys include the Management_Center feature, so we can use the same license key for both deployments to avoid an additional configuration step (from the Management Center dashboard).
GlusterFS prefixes glusterfs-dynamic- (18 characters) to the storage service name:
https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/glusterfs/glusterfs.go#L72
The service name has a max length of 63, as per the error below.
Consider trimming the PVC metadata.name to a maximum of 45 characters.
# kubectl -n dacleyra get pvc dacleyra-hazelcast-hazelcast-enterprise-mancenter -o=yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
  creationTimestamp: 2019-05-15T21:51:47Z
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: hazelcast-enterprise
    chart: hazelcast-enterprise-1.0.1
    heritage: Tiller
    release: dacleyra-hazelcast
  name: dacleyra-hazelcast-hazelcast-enterprise-mancenter
  namespace: dacleyra
  resourceVersion: "81471163"
  selfLink: /api/v1/namespaces/dacleyra/persistentvolumeclaims/dacleyra-hazelcast-hazelcast-enterprise-mancenter
  uid: a313b277-775b-11e9-ac0c-6cae8b1be502
spec:
  accessModes:
  - ReadWriteOnce
  dataSource: null
  resources:
    requests:
      storage: 8Gi
  storageClassName: glusterfs
status:
  phase: Pending
# kubectl -n dacleyra describe pvc dacleyra-hazelcast-hazelcast-enterprise-mancenter
Name: dacleyra-hazelcast-hazelcast-enterprise-mancenter
Namespace: dacleyra
StorageClass: glusterfs
Status: Pending
Volume:
Labels: app=hazelcast-enterprise
chart=hazelcast-enterprise-1.0.1
heritage=Tiller
release=dacleyra-hazelcast
Annotations: volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 3m (x429 over 17h) persistentvolume-controller Failed to provision volume with StorageClass "glusterfs": failed to create volume: failed to create endpoint/service dacleyra/glusterfs-dynamic-dacleyra-hazelcast-hazelcast-enterprise-mancenter: error creating service: Service "glusterfs-dynamic-dacleyra-hazelcast-hazelcast-enterprise-mancenter" is invalid: metadata.name: Invalid value: "glusterfs-dynamic-dacleyra-hazelcast-hazelcast-enterprise-mancenter": must be no more than 63 characters
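A sketch of the trimming suggested above, as a named template in _helpers.tpl (the template and value names are illustrative; trunc and trimSuffix are standard Sprig functions):

{{- define "mancenter.pvcName" -}}
{{- /* keep the name short enough for the glusterfs-dynamic- prefix */ -}}
{{- printf "%s-mancenter" .Release.Name | trunc 45 | trimSuffix "-" -}}
{{- end -}}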
When you deploy the Hazelcast chart to AWS EKS, the instructions provided in INDEX.txt were not correct for Management Center, so I had to find the MC URL by executing a describe command.
Please see LoadBalancer Ingress below.
$ kubectl describe svc dining-serval-hazelcast-enterprise-mancenter
Name: dining-serval-hazelcast-enterprise-mancenter
Namespace: default
Labels: app=hazelcast-enterprise
chart=hazelcast-enterprise-1.0.1
heritage=Tiller
release=dining-serval
Annotations: <none>
Selector: app=hazelcast-enterprise,release=dining-serval,role=mancenter
Type: LoadBalancer
IP: 10.100.98.225
LoadBalancer Ingress: a539e0f709f3111e8b9d00af5b0ce326-465655592.us-west-2.elb.amazonaws.com
Port: mancenterport 8080/TCP
TargetPort: mancenter/TCP
NodePort: mancenterport 31877/TCP
Endpoints: 192.168.246.76:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 6m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 6m service-controller Ensured load balancer
Hi,
I'm deploying the Hazelcast Helm chart version 3.4.3 (https://hazelcast-charts.s3.amazonaws.com/) on an EKS Kubernetes cluster.
To avoid another issue related to the adminCredentialsSecretName Helm chart property, I manually created a PVC and indicated in the mancenter values to enable persistence and use that PVC.
The first deployment of Management Center works without problems: it uses that PVC, binds the pod to it, and when I log in it works.
The issue occurs the following times the pod is created: as the config is stored in the PV created by the PVC, this error appears and Management Center never starts again.
using automatic sizing of heap size by up to 80% of available memory and starting with container support
executing command specified by MC_INIT_CMD for container initialization
ERROR: Could not lock home directory. Make sure that Management Center web application is stopped (offline) before starting this command. If you are sure the application is stopped, it means that lock file was not deleted properly. Please delete 'mc.lock' file in the home directory manually before using the command.
To see the full stack trace, re-run with the -v/--verbose option.
That said, below I provide the config values applied to the management center deployment in this chart.
Ingress host and TLS values are also configured, but omitted here.
This is the manually created PVC config; the storage class (default) is an EBS gp2.
It seems like a command to delete the mc.lock file is needed, since Management Center is not working and I therefore consider it stopped.
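A possible workaround sketch: an initContainer that clears a stale lock before MC starts (the volume name is an assumption about the chart's mancenter volume; the /data path matches -Dhazelcast.mc.home=/data seen in the MC startup logs above):

initContainers:
- name: clear-mc-lock
  image: busybox
  # remove a stale lock left behind by an unclean shutdown
  command: ["sh", "-c", "rm -f /data/mc.lock"]
  volumeMounts:
  - name: mancenter-storage   # assumed volume name
    mountPath: /data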
After a successful first deploy, redeploying the management center fails with the following log message:
executing command specified by MC_INIT_CMD
ERROR: Could not add new Cluster Config. Reason: Cluster config dev already exists!
To see the full stack trace, re-run with the -v/--verbose option.
Expected behavior is that the restart works fine.
If persistence is disabled, it works fine, but then I have to create the master password every time Management Center is restarted.
Helm version:
version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.6"}
Hazelcast chart version: hazelcast-3.0.5
App version: 4.0
Kubectl version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
values.yaml:
mancenter:
  persistence:
    storageClass: nfs-client
  service:
    type: NodePort
Run kubesec.io on the hazelcast/hazelcast and hazelcast/hazelcast-enterprise Helm charts and fix the obvious findings.
kubectl plugin scan statefulset/pioneering-zebra-hazelcast
scanning statefulset pioneering-zebra-hazelcast
kubesec.io score: 3
Advise:
1. .spec .volumeClaimTemplates[] .spec .accessModes | index("ReadWriteOnce")
2. Force the running image to run as a non-root user to ensure least privilege
3. Reducing kernel capabilities available to a container limits its attack surface
4. An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost
5. Run as a high-UID user to avoid conflicts with the host's user table
This might be a possible duplicate of #86
Deployed Hazelcast with helm to IKS Cluster.
helm version --short
v3.0.2+g19e47ee
helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
hazelcast default 1 2020-01-21 17:44:03.50896 -0500 EST deployed hazelcast-2.10.0 3.12.4
kubectl logs hazelcast-0
########################################
# JAVA_OPTS=-Dhazelcast.mancenter.enabled=false -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/opt/hazelcast/logging.properties -Dhazelcast.config=/data/hazelcast/hazelcast.yaml -DserviceName=hazelcast -Dnamespace=default -Dhazelcast.mancenter.enabled=true -Dhazelcast.mancenter.url=http://hazelcast-mancenter:8080/hazelcast-mancenter -Dhazelcast.shutdownhook.policy=GRACEFUL -Dhazelcast.shutdownhook.enabled=true -Dhazelcast.graceful.shutdown.max.wait=600
# CLASSPATH=/opt/hazelcast/*:/opt/hazelcast/lib/*
# starting now....
########################################
+ exec java -server -Dhazelcast.mancenter.enabled=false -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/opt/hazelcast/logging.properties -Dhazelcast.config=/data/hazelcast/hazelcast.yaml -DserviceName=hazelcast -Dnamespace=default -Dhazelcast.mancenter.enabled=true -Dhazelcast.mancenter.url=http://hazelcast-mancenter:8080/hazelcast-mancenter -Dhazelcast.shutdownhook.policy=GRACEFUL -Dhazelcast.shutdownhook.enabled=true -Dhazelcast.graceful.shutdown.max.wait=600 com.hazelcast.core.server.StartServer
Jan 21, 2020 10:44:08 PM com.hazelcast.config.AbstractConfigLocator
INFO: Loading configuration '/data/hazelcast/hazelcast.yaml' from System property 'hazelcast.config'
Jan 21, 2020 10:44:08 PM com.hazelcast.config.AbstractConfigLocator
INFO: Using configuration file at /data/hazelcast/hazelcast.yaml
Jan 21, 2020 10:44:08 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12.5] Prefer IPv4 stack is true, prefer IPv6 addresses is false
Jan 21, 2020 10:44:08 PM com.hazelcast.instance.AddressPicker
INFO: [LOCAL] [dev] [3.12.5] Picked [172.30.199.27]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
Jan 21, 2020 10:44:08 PM com.hazelcast.system
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Hazelcast 3.12.5 (20191210 - 294ff46) starting at [172.30.199.27]:5701
Jan 21, 2020 10:44:08 PM com.hazelcast.system
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Copyright (c) 2008-2019, Hazelcast, Inc. All Rights Reserved.
Jan 21, 2020 10:44:08 PM com.hazelcast.spi.impl.operationservice.impl.BackpressureRegulator
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Backpressure is disabled
Jan 21, 2020 10:44:09 PM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Kubernetes Discovery properties: { service-dns: null, service-dns-timeout: 5, service-name: hazelcast, service-port: 0, service-label: null, service-label-value: true, namespace: default, pod-label: null, pod-label-value: null, resolve-not-ready-addresses: true, use-node-name-as-external-address: false, kubernetes-api-retries: 3, kubernetes-master: https://kubernetes.default.svc}
Jan 21, 2020 10:44:09 PM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Kubernetes Discovery activated with mode: KUBERNETES_API
Jan 21, 2020 10:44:09 PM com.hazelcast.instance.Node
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Activating Discovery SPI Joiner
Jan 21, 2020 10:44:09 PM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
Jan 21, 2020 10:44:09 PM com.hazelcast.internal.diagnostics.Diagnostics
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
Jan 21, 2020 10:44:09 PM com.hazelcast.core.LifecycleService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] [172.30.199.27]:5701 is STARTING
Jan 21, 2020 10:44:09 PM com.hazelcast.spi.discovery.integration.DiscoveryService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Kubernetes plugin discovered availability zone: wdc04
Jan 21, 2020 10:44:09 PM com.hazelcast.kubernetes.KubernetesClient
WARNING: Cannot fetch public IPs of Hazelcast Member PODs, you won't be able to use Hazelcast Smart Client from outside of the Kubernetes network
Jan 21, 2020 10:44:14 PM com.hazelcast.internal.cluster.ClusterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5]
Members {size:1, ver:1} [
Member [172.30.199.27]:5701 - 18220a66-dc88-434d-9e9a-ab2fe1fb701c this
]
Jan 21, 2020 10:44:14 PM com.hazelcast.internal.management.ManagementCenterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Hazelcast will connect to Hazelcast Management Center on address:
http://hazelcast-mancenter:8080/hazelcast-mancenter
Jan 21, 2020 10:44:14 PM com.hazelcast.core.LifecycleService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] [172.30.199.27]:5701 is STARTED
Jan 21, 2020 10:44:19 PM com.hazelcast.internal.management.ManagementCenterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Failed to connect to: http://hazelcast-mancenter:8080/hazelcast-mancenter/collector.do
Jan 21, 2020 10:44:19 PM com.hazelcast.client.impl.ClientEngine
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Applying a new client selector :ClientSelector{any}
Jan 21, 2020 10:44:45 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Initialized new cluster connection between /172.30.199.27:5701 and /172.30.239.251:43381
Jan 21, 2020 10:44:52 PM com.hazelcast.internal.cluster.ClusterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5]
Members {size:2, ver:2} [
Member [172.30.199.27]:5701 - 18220a66-dc88-434d-9e9a-ab2fe1fb701c this
Member [172.30.239.251]:5701 - 805fe633-c23b-43da-b5e1-7090f3e1069e
]
Jan 21, 2020 10:45:22 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Initialized new cluster connection between /172.30.199.27:5701 and /172.30.199.43:40657
Jan 21, 2020 10:45:29 PM com.hazelcast.internal.cluster.ClusterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5]
Members {size:3, ver:3} [
Member [172.30.199.27]:5701 - 18220a66-dc88-434d-9e9a-ab2fe1fb701c this
Member [172.30.239.251]:5701 - 805fe633-c23b-43da-b5e1-7090f3e1069e
Member [172.30.199.43]:5701 - 3b36c99b-7df8-4278-9b6c-2d58db0d05ba
]
Jan 21, 2020 10:46:24 PM com.hazelcast.internal.management.ManagementCenterService
INFO: [172.30.199.27]:5701 [dev] [3.12.5] Failed to pull tasks from Management Center
Visiting http://$MANCENTER_IP:8080/hazelcast-mancenter gives an error (screenshot not shown here).
Not sure where to look for debugging.
I see from the link below that we can define 4 parameters for CPU/memory, but the README only mentions resources with a default value of nil:
https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container
resources.limits.cpu
resources.limits.memory
resources.requests.cpu
resources.requests.memory
The README should be fixed by adding all 4 of those parameters with their default values.
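A minimal values sketch showing the four parameters (the numbers are illustrative, not the chart's defaults):

resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi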
Currently, the Hazelcast Helm Chart repo is published by the Helm-Chart-release pipeline in Jenkins. However, it has the following drawbacks compared to the official Helm Chart repo:
Some guidelines on how to create a Continuous Delivery pipeline for the Helm Charts:
We are using two different types of README in the OS (.adoc) and EE (.md) charts. Using the same file type would be good for the development process.
By default the Pod uses group 0 (root). We could add the parameter runAsGroup (as reported by @mesutcelik in #38).
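A minimal values sketch for the proposed parameter alongside the existing securityContext fields (the UID/GID values are illustrative):

securityContext:
  runAsUser: 65534
  runAsGroup: 65534   # proposed addition: run with a non-root group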
Provide the ability to pre-configure the username and password for hazelcast mancenter.