
camunda-8-helm-profiles's Introduction

Community Extension | Lifecycle: Incubating | License | Compatible with: Camunda Platform 8

Camunda 8 Helm Profiles

This is a Community Project that helps to install Camunda and other supporting technologies into Kubernetes.

Those who are already familiar with DevOps and Kubernetes may find it easier, and more flexible, to use the official Camunda Helm Charts along with their own methods and tools.

For those looking for more guidance, this project provides Makefiles, along with custom scripts and camunda-values.yaml files to help with:

  • Creating Kubernetes Clusters from scratch on several popular platforms, including Google Cloud Platform, Azure, AWS, and Kind (for local clusters).

  • Installing Camunda into existing Kubernetes Clusters by providing camunda-values.yaml pre-configured for specific use cases.

  • Automating common tasks, such as installing Ingress controllers, configuring temporary TLS certificates, installing Prometheus and Grafana for metrics, etc.

How is it Organized?

Each subfolder of this project is intended to support a specific (and opinionated) use case (aka "profile").

The Azure Nginx Ingress TLS profile helps to create an Azure Kubernetes Service (AKS) cluster, install Camunda, and configure an nginx ingress with temporary TLS certificates.

The AWS Nginx Ingress TLS profile helps to create an AWS Elastic Kubernetes Service (EKS) cluster, install Camunda, and configure an nginx ingress with temporary TLS certificates.

The Google Nginx Ingress TLS profile helps to create a Google Kubernetes Engine (GKE) cluster, install Camunda, and configure an nginx ingress with temporary TLS certificates.

The metrics profile sets up a system-monitoring web dashboard using Prometheus and Grafana.

Explore the subfolders of this project to discover more profiles. See the README.md file inside each profile for specific details.

How does it work?

Each profile contains a Makefile. These Makefiles define Make targets. Make targets use command line tools and bash scripts to accomplish the work of each profile.

For example, let's say your use case is to have a fully working Camunda 8 environment in an Azure AKS cluster. cd into the azure/ingress/nginx/tls directory and run make. The Make targets found there use the az command line tool, along with kubectl and helm, to do the tasks needed to create a fully functioning environment. See the Azure Nginx Ingress TLS profile for more details.
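As a rough sketch (assuming the repository is already cloned and the az, kubectl, and helm CLIs are installed and authenticated), that workflow looks like:

    cd camunda-8-helm-profiles/azure/ingress/nginx/tls
    make    # creates the AKS cluster, installs Camunda, and configures the nginx ingress with temporary TLS certificates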

Prerequisites

Complete the following steps regardless of which cloud provider you use.

  1. Clone the Camunda 8 Helm Profiles git repository (an example clone command is shown after this list).

Note: As of Nov 2022, the Camunda 8 Greenfield installation project has been deprecated. All functionality from Greenfield has been merged into the camunda-8-helm-profiles repository (the one you are currently viewing).

  2. Verify kubectl is installed

    kubectl --help
    
  3. Verify helm is installed. Helm version must be at least 3.7.0

    helm version
    
  4. Verify GNU make is installed.

    make --version
    
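For step 1, a typical clone command (assuming HTTPS access to GitHub) looks like:

    git clone https://github.com/camunda-community-hub/camunda-8-helm-profiles.git
    cd camunda-8-helm-profiles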

Ideas for new Profiles

If you have an idea for a new profile that is not yet available here, please open a GitHub Issue.

Troubleshooting, Tips, and Tricks

Troubleshoot TLS Certificates

To check whether letsencrypt has successfully issued TLS certificates, use the following command:

kubectl get certificaterequest --all-namespaces

Configure Kubectl to connect to an existing cluster

By default, kubectl looks for a file named config in the $HOME/.kube directory.

As a convenience, this project provides a Makefile target to help configure kubectl to connect to an existing Kubernetes environment.

Run make use-kube from inside one of the profiles to configure your kubectl appropriately.

For example, running make use-kube from inside the google/ingress/nginx/tls directory will configure your kubectl to connect to an existing GKE cluster.

Running make use-kube from an aws or azure profile should configure kubectl appropriately.
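For example, a rough sequence (assuming your cloud CLI is already authenticated against the right project or subscription) might be:

    cd google/ingress/nginx/tls
    make use-kube       # writes/updates the kubeconfig entry for the existing cluster
    kubectl get nodes   # verify kubectl now talks to the expected cluster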

Use custom camunda-values.yaml files

Instead of running make inside a profile folder, it's possible to use the camunda-values.yaml files directly with Helm:

helm install <RELEASE NAME> camunda/camunda-platform --values <PROFILE YAML FILE>

For example, here's how to run helm using the development version of camunda-values.yaml:

helm install test-core camunda/camunda-platform --values https://raw.githubusercontent.com/camunda-community-hub/camunda-8-helm-profiles/master/development/camunda-values.yaml

Or, as another example, you might manually edit the ingress-nginx/camunda-values.yaml file and replace the 127.0.0.1 URLs with your custom domain name. Then you could run the following to install Camunda with ingress rules for your custom domain:

helm install test-core camunda/camunda-platform --values ingress-nginx/camunda-values.yaml

Domain Names

The make fqdn target defined inside ingress-nginx.mk is used to set the fully qualified domain name (FQDN) for your specific environment.

The fqdn variable can be controlled by setting dnsLabel and baseDomainName in your Makefile.

To use a domain name that you own, simply set dnsLabel and baseDomainName to match. For example, if you own a domain named mydomain.com and you want to serve your Camunda environment at camunda.mydomain.com, then set the variables like so:

baseDomainName := mydomain.com
dnsLabel := camunda

Networking with nip.io

If you haven't yet provisioned your own domain name, it can be convenient to use a free service called nip.io. To use this service, set baseDomainName like this:

baseDomainName := nip.io

In this case, dnsLabel will be ignored. The make fqdn target inside the ingress-nginx.mk will attempt to find the ip address of your Load Balancer. The final fqdn will look like this: <ip-address-of-load-balancer>.nip.io.

Here is more information about how this works:

There are two techniques to set up networking for a Camunda 8 Environment.

  1. Serve each application using a separate domain name. For example:
identity.mydomain.com
operate.mydomain.com
tasklist.mydomain.com
optimize.mydomain.com
  2. Use a single domain, and serve each application as a different context path. For example:
mydomain.com/identity
mydomain.com/operate
mydomain.com/tasklist
mydomain.com/optimize

Kubernetes Networking is, of course, a very complicated topic! There are many ways to configure Ingress and networks. And to make things worse, each cloud provider has a slightly different flavor of load balancers and network configuration options.

For a variety of reasons, it's often convenient (and sometimes required) to access services via dns names rather than IP addresses.

Provisioning a custom domain name can be inconvenient, especially for demonstrations or prototypes.

Here's a technique using a public service called nip.io that might be useful. nip.io makes it possible to quickly and easily translate ip addresses into domain names.

nip.io provides dynamic domain names for any IP address. For example, if your IP address is 1.2.3.4, a domain name like my-domain.1.2.3.4.nip.io will resolve to IP address 1.2.3.4.
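For example, you can verify the resolution with any DNS lookup tool (dig is shown here purely as an illustration):

    dig +short my-domain.1.2.3.4.nip.io
    # 1.2.3.4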

So, for example, say our Cloud provider created a Load Balancer listening on ip address 54.210.85.151. We can configure our environment to use dns names like this:

http://identity.54.210.85.151.nip.io
http://keycloak.54.210.85.151.nip.io
http://operate.54.210.85.151.nip.io
http://tasklist.54.210.85.151.nip.io

To use nip.io, set baseDomainName equal to nip.io inside your Makefile.

Otherwise, you're always welcome (and encouraged!) to provide your own domain name. To do so, simply set baseDomainName and dnsLabel to match your own domain name.

Keycloak Admin User and Password

By default, the Camunda Helm Charts configure a Keycloak Administrator user with username admin.

To retrieve the admin password from the Kubernetes secret and decode it you can run:

make keycloak-password

You should be able to authenticate to Keycloak using admin as the username and the password retrieved by the command above.
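If you prefer not to use make, a manual equivalent is roughly the following sketch (assuming a Helm release named camunda, which creates a secret named camunda-keycloak with an admin-password key; adjust the names to your installation):

    kubectl get secret camunda-keycloak -o jsonpath="{.data.admin-password}" | base64 --decode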

Keycloak requires SSL for requests from external sources

If your Kubernetes cluster does not use "private" IP addresses for internal communication, i.e. it does not resolve the internal service names to "private" IP addresses, then the first time you attempt to authenticate to keycloak, you may encounter the following error:

(Screenshot: Keycloak "SSL required" error)

Users can interact with Keycloak without SSL so long as they stick to private IP addresses like localhost, 127.0.0.1, 10.x.x.x, 192.168.x.x, and 172.16.x.x. If you try to access Keycloak without SSL from a non-private IP address you will get an error.

This project provides a Makefile target named config-keycloak. If you run the following against an existing environment, it should fix this issue:

make config-keycloak

For more details on how to fix this issue manually, see here.
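As an untested sketch of the manual fix (assuming the Bitnami Keycloak image used by the chart and a pod named camunda-keycloak-0; both names are assumptions), you can relax the SSL requirement on the master realm with kcadm.sh:

    # log in to the Keycloak admin CLI inside the pod
    # (older Wildfly-based Keycloak may need http://localhost:8080/auth as the server URL)
    kubectl exec -it camunda-keycloak-0 -- /opt/bitnami/keycloak/bin/kcadm.sh config credentials \
      --server http://localhost:8080 --realm master --user admin --password <admin-password>
    # then disable the SSL requirement for the master realm
    kubectl exec -it camunda-keycloak-0 -- /opt/bitnami/keycloak/bin/kcadm.sh update realms/master -s sslRequired=NONE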

Troubleshooting Identity

Enable Debug logging for Identity by adding the following to camunda-values.yaml

identity:
  env:
   - name: LOGGING_LEVEL_ROOT
     value: DEBUG

camunda-8-helm-profiles's People

Contributors

allanbarklie, celanthe, chdame, falko, jothikiruthika, manueldittmar, mcalm, plungu, rob2universe, salaboy, sargastico, superbeagle, upgradingdave, zelldon


camunda-8-helm-profiles's Issues

Escape all periods in helm values

--set controller.service.annotations."nginx\.ingress.kubernetes.io/ssl-redirect"="true" \
--set controller.service.annotations."cert-manager.io/cluster-issuer"="letsencrypt"

This did not work for me as-is, got this error from helm:

Error: YAML parse error on ingress-nginx/templates/controller-service.yaml: error converting YAML to JSON: yaml: line 4: did not find expected key

Fixed by changing lines to this:

	--set controller.service.annotations."nginx\.ingress\.kubernetes\.io/ssl-redirect"="true" \
	--set controller.service.annotations."cert-manager\.io/cluster-issuer"="letsencrypt"

Broker fails to start if hazelcast exporter is active

Hi @salaboy. I've been following the sample values for the helm chart, and I can't get it working with the hazelcast exporter.

Caused by: java.lang.IllegalStateException: Failed to load exporter with configuration: ExporterCfg{id='hazelcast', jarPath='null', className='io.zeebe.hazelcast.exporter.HazelcastExporter', args={updatePosition=false, enabledValueTypes=JOB,WORKFLOW_INSTANCE,DEPLOYMENT,INCIDENT,TIMER,VARIABLE,MESSAGE,MESSAGE_SUBSCRIPTION,MESSAGE_START_EVENT_SUBSCRIPTION}}
    at io.zeebe.broker.system.partitions.ZeebePartition.<init>(ZeebePartition.java:113)
    at io.zeebe.broker.Broker.lambda$partitionsStep$15(Broker.java:316)
    at io.zeebe.broker.bootstrap.StartProcess.lambda$startStepByStep$2(StartProcess.java:60)
    at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88)
    at io.zeebe.broker.bootstrap.StartProcess.startStepByStep(StartProcess.java:58)
    at io.zeebe.broker.bootstrap.StartProcess.takeDuration(StartProcess.java:88)
    at io.zeebe.broker.bootstrap.StartProcess.start(StartProcess.java:43)
    at io.zeebe.broker.Broker.partitionsStep(Broker.java:321)

sed command to replace ip addresses fails on linux

This sed syntax doesn't work on Linux (GNU sed):

sed -Ei '' "s/([0-9]{1,3}\.){3}[0-9]{1,3}/$$IP/g" camunda-values.yaml ; \

sed -Ei ''

ingress-nginx.mk

sed -Ei '' "s/([0-9]{1,3}\.){3}[0-9]{1,3}/$$IP/g" camunda-values.yaml ; \

Not sure if there is a portable way to write it (e.g., whether simply removing the empty quotes '' works, or whether they are mandatory on macOS). A pipeline would work either way:

cat camunda-values.yaml | sed -E "s/([0-9]{1,3}\.){3}[0-9]{1,3}/$$IP/g" > camunda-values-updated.yaml
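Another option (a sketch, not tested against this repo) is to pick the sed invocation in the Makefile based on the OS, since BSD sed on macOS requires an explicit, possibly empty, backup suffix argument while GNU sed does not:

    # hypothetical helper variable for ingress-nginx.mk
    ifeq ($(shell uname),Darwin)
    SED_INPLACE := sed -Ei ''
    else
    SED_INPLACE := sed -Ei
    endif

    # usage inside a target:
    #   $(SED_INPLACE) "s/([0-9]{1,3}\.){3}[0-9]{1,3}/$$IP/g" camunda-values.yaml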

Google Cloud: 'gcloud' must update 'kubectl' before asking 'kubectl' to apply

It seems like the kube-gke: target in kubernetes-gke.mk risks using 'kubectl' before 'gcloud' has set the current context.

I have changed that in a PR.

Additionally, I have added a single 'properties-local.mk' file in the 'configuration' directory where the user can assign properties.

properties-local.mk is now used in 'google'.

My plan is to use it in more make scripts.

I would like to join the camunda-community-hub

  1. I have signed the https://cla-assistant.io/camunda-community-hub/community

  2. I get below error (because I'm not a member of camunda-community-hub)
    git push --set-upstream origin feature/54
    ERROR: Permission to camunda-community-hub/camunda-8-helm-profiles.git denied to anderskristian.
    fatal: Could not read from remote repository.

  3. So I would like to be granted access to add feature branches to this project

high-available-webapps: configure ingress session affinity

Ingresses should be configured to use sticky sessions when multiple instances of tasklist/operate/optimize are started.

Here's an example of annotations required to enable "cookie" based sticky sessions:

https://github.com/camunda-community-hub/camunda-8-helm-profiles/blob/main/tasklist/include/tasklist-ingress.tpl.yaml
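For reference, roughly what those cookie-affinity annotations look like on an nginx ingress (a sketch; annotation names per the ingress-nginx documentation, not copied from this repo):

    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"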

If possible, it'd be good to enhance the makefile in the high-available-webapps profile to add these annotations.

cc: @ManuelDittmar

Remove `.PHONY` declarations and instead add `remake` task descriptions

This proposal might be controversial and against make best practices, but the .PHONY declarations are mostly redundant, which makes our Makefiles harder to read and has more than once led to inconsistencies like this:
https://github.com/camunda-community-hub/camunda-8-helm-profiles/blob/main/google/include/kubernetes-gke.mk#L31-L32

Let's remove them and instead introduce documentation in form of remake task descriptions, e.g.:

#: Install Camunda on Kubernetes
all: camunda await-zeebe rebalance-leaders benchmark

#: Create Kubernetes cluster and install Prometheus & Grafana
kube: kube-gke metrics url-grafana

They can then be extracted using remake --tasks, e.g.:

$ remake --tasks
all                  Install Camunda on Kubernetes
clean                Uninstall Camunda from Kubernetes
clean-kube           Delete entire Kubernetes cluster including metrics data
kube                 Create Kubernetes cluster and install Prometheus & Grafana
url-grafana          Get URL for Grafana
urls                 Get URLs for Kubernetes cluster

And since not everyone has remake installed we can add these to the README.md that accompanies each Helm profile. This would also help people that don't get shell completion for make.

I am already working on a PR for this.

Finish the Cloud Flare certificate (cfssl) profile

The CloudFlare PKI toolkit (cfssl) can be used to provision TLS certificates.

Most of the scripts needed for this can be found inside the cfssl subdirectory, but these scripts haven't been fully tested.

Need to regression test, make fixes, and finish the README.

Finish Azure Application Gateway Profile

As of Nov 2022, by default, we provide a profile for using an nginx ingress for Azure.

The azure/ingress/agic folder contains scripts to configure network access using an Application Gateway. However, this profile isn't working 100% yet.

It seems there's an issue with registering TLS Certificates with the App gateway? Next steps are to research and update scripts to get this working 100%.

Disable ingress creation in values file

Hello,
I'd like to use my own ingress configuration together with zeebe/zeebe-full deployed with dev profile from here.

Is there a way to disable/configure ingress creation in value files so I don't have to disable it every helm upgrade?

Multi-Region: Conflicting exporter names result in lost data

The Multi-Region profiles configure each region's Zeebe to use their local ES as exporter named "elasticsearch" and the remote ES as exporter named "elasticsearch2".

As a result, Zeebe will distribute conflicting metadata about the exporters. Zeebe brokers from region 0 share something like es1={pos=15, sequence=10},es2={pos=30, sequence=27} while Zeebe brokers from region 1 flip it and distribute es1={pos=30, sequence=27},es2={pos=15, sequence=10}.

Depending on the ordering in which updates are distributed, some leaders will run with incorrect metadata which can result in missing data in ES (i.e. dataloss) or wrong sequence numbers (i.e. data corruption).

Not able to use letsencrypt certificates for AWS Load Balancer Domain names due to length

AWS provides convenience DNS names for load balancers.

However, if you try to configure letsencrypt to generate certificates for these domain names, you will see an exception like this:

Message:               Failed to wait for order resource "tls-secret-ltx5k-1407422140" to become ready: order is in "errored" state: Failed to create Order: 400 urn:ietf:params:acme:error:rejectedIdentifier: NewOrder request did not include a SAN short enough to fit in CN

The default configuration for lets encrypt uses the DNS name for the Common Name (CN) in the certificate.

Apparently Letsencrypt limits the length of this CN.

So, the default letsencrypt configuration fails.

It should be possible to configure letsencrypt to use a SAN that is different than the Domain name. Need to research to find how to configure this inside Kubernetes environment.

Use letsencrypt staging certificates

I tried using the letsencrypt-staging target instead of letsencrypt-prod, but then Keycloak appeared to complain about invalid redirect URIs.

I didn't have a lot of time to research, but I'm guessing there's some extra configuration needed to allow keycloak to use staging certificates?

I set it back to use the letsencrypt-prod and that works fine.

I created this issue as a reminder to research as we have time.

Add ability to use `post-renderer` scripts

We might be able to take advantage of the helm --post-renderer feature described here.

If we use a combination of setting environment variables and post-renderer scripts, this might be a good strategy for tasks such as replacing ip addresses inside values files.
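A rough sketch of how that could look (the script name replace-ip.sh and the LOADBALANCER_IP variable are hypothetical; Helm pipes the fully rendered manifests to the post-renderer's stdin and expects the result on stdout):

    #!/bin/bash
    # replace-ip.sh (hypothetical post-renderer): swap the placeholder IP for the real one
    # make it executable with: chmod +x replace-ip.sh
    sed -E "s/127\.0\.0\.1/${LOADBALANCER_IP}/g"

    # invoke it with:
    helm install camunda camunda/camunda-platform --values camunda-values.yaml --post-renderer ./replace-ip.sh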

ingress-aws profile fails on windows

The await-elb target fails in a windows cygwin environment with error below.

This prevents people with windows os from using the greenfield and/or the ingress-aws profile to spin up aws environments.

C:\gitRepos\camunda-community-hub\camunda-8-helm-profiles\ingress-aws>dir
 Volume in drive C is Windows
 Volume Serial Number is 3AC2-4A62

 Directory of C:\gitRepos\camunda-community-hub\camunda-8-helm-profiles\ingress-aws

10/13/2022  13:58    <DIR>          .
10/13/2022  13:58    <DIR>          ..
10/13/2022  13:58               429 aws-ingress.sh
10/13/2022  13:58               897 ingress-aws.mk
10/13/2022  13:58               446 Makefile
10/13/2022  13:58             3,085 README.md
               4 File(s)          4,857 bytes
               2 Dir(s)  101,784,301,568 bytes free

C:\gitRepos\camunda-community-hub\camunda-8-helm-profiles\ingress-aws>make await-elb
./aws-ingress.sh
process_begin: CreateProcess(NULL, sh C:\gitRepos\camunda-community-hub\camunda-8-helm-profiles\ingress-aws\aws-ingress.sh, ...) failed.
make (e=2): The system cannot find the file specified.
make: *** [ingress-aws.mk:15: await-elb] Error 2

Expected Storage Classes (documentation and minimising developer profile dependencies)

I think this is a great initiative.
I'm initially interested in the developer profile.

I'm using Docker Desktop; its Kubernetes cluster by default only has the storage class 'hostpath'.
I note that the developer profile relies on the existence of both 'hostpath' and 'standard'.

At least one other storage class is introduced in the other profiles 'ssd'
It would be good to document these and consider if the developer profile should/could only need 'hostpath' by default.

Support https for ingress-nginx

Currently the links inside this ingress-nginx values file are set to http.

There's currently no automatic way to change those links from http to https. It's possible to manually change them, but it would be nice to have an option (maybe in the makefile?) to use https links.
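As a sketch of what such an option could look like (a hypothetical Makefile target, assuming GNU sed; see the sed portability issue elsewhere in this repo regarding macOS):

    # hypothetical target: switch all http:// links in the values file to https://
    # (the recipe line must be indented with a tab in a real Makefile)
    use-https:
    	sed -i -E "s|http://|https://|g" camunda-values.yaml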

high-available-webapps: disable webapp in existing deployment

The Makefile for the high-available-webapps profile copies the current deployments and then creates a new deployment containing only the webapp.

The existing deployment still contains webapp + importers & exporters.

It might be nice to also update the existing deployment so that the webapp is disabled. That way, there'd be a clean separation where one deployment contains only webapps, and the other deployment contains only importers/exporters.

cc: @ManuelDittmar

`make await-zeebe` returns too early during k8s autoscaling

12 brokers expected but:

kubectl wait --for=condition=Ready pod -n camunda -l app.kubernetes.io/name=zeebe --timeout=900s
W1024 13:55:45.078086  254293 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
pod/camunda-zeebe-0 condition met
pod/camunda-zeebe-1 condition met

zeebe-full with dev profile on docker for windows k8s cluster

Hello, I'm trying to start the full helm chart for zeebe on my dev machine with the following setup:

  • OS: Windows 10 20H2 (WSL2 activated)
  • Docker for Windows 20.10.0
  • Kubernetes (activated in Docker For Windows): 1.19.3
  • Helm: v3.4.2

I also applied the changes on the k8s objects mentioned in helm-service-account-role (as specified in https://docs.zeebe.io/kubernetes/installing-helm.html)
Note: when running the kubectl --apply ... I got warnings that the v1beta1 version of the API (apiVersion: rbac.authorization.k8s.io/v1beta1) does not exist, so it switched back to v1.

The installation was done using the dev profile from this git repository:
helm install my-zeebe zeebe/zeebe-full --values https://raw.githubusercontent.com/zeebe-io/zeebe-helm-profiles/master/zeebe-dev-profile.yaml

And here the problems started (there are 2 of them)

  1. Even though I was using the dev-profile values file, it was still trying to install 3 instances of the zeebe broker and 3 elasticsearch instances.
    I figured out this is because the names of the charts inside zeebe-dev-profile.yaml do not correspond to the ones in the latest version of the zeebe-full chart, so I worked around it by downloading the yaml file and making the following corrections:
    change zeebe-cluster-helm to zeebe
    change zeebe-operate-helm to operate
    change zeebe-tasklist-helm to tasklist
    change zeebe-zeeqs-helm to zeeqs

After making the changes and running helm install with my local values yaml file, problem 1 from above was solved, so that now one instance of elasticsearch and one instance of the zeebe broker were deployed. But now I have another problem:
2) The zeebe broker instance is not starting

NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 2m26s
my-nginx-ingress-controller-5697f87674-pngpp 1/1 Running 0 2m26s
my-nginx-ingress-default-backend-fd5c8b6d4-l6vd8 1/1 Running 0 2m26s
my-operate-5775bdd89d-bxlvq 1/1 Running 0 2m26s
my-zeebe-0 0/1 Running 0 2m26s
my-zeebe-gateway-7d84fcb8b4-knnz7 1/1 Running 0 2m26s

So looking in the logs of the pod I see this
2020-12-29 12:26:30.048 [] [main] INFO io.zeebe.broker.system - Bootstrap Broker-0 [6/10]: cluster services
2020-12-29 12:26:35.342 [] [raft-server-0-raft-partition-partition-1] WARN io.atomix.raft.roles.FollowerRole - RaftServer{raft-partition-partition-1}{role=FOLLOWER} - Poll request to 2 failed: java.net.ConnectException: Expected to send
a message with subject 'raft-partition-partition-1-poll' to member '2', but member is not known. Known members are '[Member{id=my-zeebe-gateway-7d84fcb8b4-knnz7, address=10.1.0.81:26502, properties={event-service-topics-subscribed=HwEBAwFqb2JzQXZhaWxhYmzl}}, Member{id=0, address=my-zeebe-0.my-zeebe.default.svc.cluster.local:26502, properties={}}]'.
2020-12-29 12:26:35.342 [] [raft-server-0-raft-partition-partition-1] WARN io.atomix.raft.roles.FollowerRole - RaftServer{raft-partition-partition-1}{role=FOLLOWER} - Poll request to 1 failed: java.net.ConnectException: Expected to send
a message with subject 'raft-partition-partition-1-poll' to member '1', but member is not known. Known members are '[Member{id=my-zeebe-gateway-7d84fcb8b4-knnz7, address=10.1.0.81:26502, properties={event-service-topics-subscribed=HwEBAwFqb2JzQXZhaWxhYmzl}}, Member{id=0, address=my-zeebe-0.my-zeebe.default.svc.cluster.local:26502, properties={}}]'.

Somehow raft is still expecting to have 3 members in the cluster to monitor, not 1 as indicated in the dev profile values file (clusterSize: 1), and is trying to poll the followers, which of course do not exist.

So, while I could overcome the first problem (wrong naming in the dev profile) by keeping a local file with the correct naming, I'm not able to figure out where the second one comes from, even after activating debug-level logging in the zeebe broker.

Regards,
ionut

Use `gatewayAddress` instead of `brokerContactPoint` in Operate configuration

In the Operate configuration, the setting camunda.operate.zeebe.brokerContactPoint is used to connect to Zeebe, for example here:

brokerContactPoint: "camunda-zeebe-gateway:26500"

This setting is deprecated and should be replaced by camunda.operate.zeebe.gatewayAddress.
The setting was renamed to make clear that only the gateway is contacted by Operate.
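A minimal sketch of the replacement (same YAML context as the deprecated key, value taken from the example above):

    gatewayAddress: "camunda-zeebe-gateway:26500"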

See also the official documentation.

ingress-nginx: `make clean` fails with sed error

Probably due to sed behaving differently on Mac.

make[1]: [../include/camunda.mk:40: clean-camunda] Error 1 (ignored)
sed -Ei '' "s/([0-9]{1,3}\.){3}[0-9]{1,3}/127.0.0.1/g" camunda-values.yaml
sed: can't read s/([0-9]{1,3}\.){3}[0-9]{1,3}/127.0.0.1/g: No such file or directory
make[1]: *** [../include/ingress-nginx.mk:33: clean-ingress-ip] Error 2
make[1]: Leaving directory '/home/falko/git/camunda-8-helm-profiles/ingress-nginx'
make: *** [Makefile:29: clean-ingress-nginx-camunda] Error 2

Zeebe is not accessible on K8s (Kind) using zeebe-full-helm-1.1.4

Dear Sir,

On a Linux server, I installed Kind and kubectl and downloaded zeebe-full-helm-1.1.4, then unzipped it locally. I can see Running status in K8s for elasticsearch, zeebe, and ingress-controller, but I don't know how to access the Zeebe and Operate URLs from the same Linux server or from outside it. Please help.

I believe the problems are:

  1. Why is "jpzeebe-ingress-nginx-controller" not able to publish an External IP? Right now it shows "Pending" status.
  2. The service jpzeebe-ingress-nginx-controller-admission is not showing up in the pods. Why not?
  3. Is it possible to access it without an ingress?

First time tried:

helm install jpzeebe ./zeebe-full-helm-1.1.4 --values ./zeebe-dev-profile.yaml

Error-1:

W1014 13:06:45.197973 2406998 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1 .25+; use policy/v1 PodDisruptionBudget
W1014 13:07:08.854612 2406998 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
Error: StatefulSet.apps "elasticsearch-master" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"zeebe", "chart":"elasticsearch", "release":"jpzeebe"}: selector does not match template labels

Second time tried by commenting below line in zeebe-dev-profile.yaml:

labels: { "app" : "zeebe" }

helm install jpzeebe ./zeebe-full-helm-1.1.4 --values ./zeebe-dev-profile.yaml

kubectl get pods

NAME READY STATUS RESTARTS AGE
elasticsearch-master-0 1/1 Running 0 2m3s
jpzeebe-ingress-nginx-controller-8d86c54d7-hql6j 1/1 Running 0 2m3s
jpzeebe-zeebe-0 1/1 Running 0 2m3s
jpzeebe-zeebe-gateway-79b54754b7-bg2kv 1/1 Running 0 2m3s
jpzeebe-zeebe-operate-helm-68c9d8dfbc-qtxdz 1/1 Running 0 2m3s

kubectl get svc

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-master ClusterIP 10.xx.165.1 9200/TCP,9300/TCP 2m12s
elasticsearch-master-headless ClusterIP None 9200/TCP,9300/TCP 2m12s
jpzeebe-ingress-nginx-controller LoadBalancer 10.xx.94.19 80:30057/TCP,443:31852/TCP 2m11s
jpzeebe-ingress-nginx-controller-admission ClusterIP 10.xx.225.234 443/TCP 2m12s
jpzeebe-zeebe ClusterIP None 9600/TCP,26502/TCP,26501/TCP 2m12s
jpzeebe-zeebe-gateway ClusterIP 10.xx.28.255 9600/TCP,26500/TCP 2m12s
jpzeebe-zeebe-operate-helm ClusterIP 10.xx.38.60 80/TCP 2m12s
kubernetes ClusterIP 10.xx.0.1 443/TCP 5m18s
[root@master-node MTU]#

FYI, download https://raw.githubusercontent.com/zeebe-io/zeebe-helm-profiles/master/zeebe-dev-profile.yaml

Environment:

kind version

kind v0.11.1 go1.16.4 linux/amd64

helm version

version.BuildInfo{Version:"v3.6.0", GitCommit:"7f2df6467771a75f5646b7f12afb408590ed1755", GitTreeState:"clean", GoVersion:"go1.16.3"}

kubectl version

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-21T23:01:33Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}

Add docker-compose files for dependabot

Dependabot doesn't recognize image versions in values.yaml files, but it would recognize versions inside docker-compose files.

We could experiment with creating docker-compose files so that dependabot will warn when new versions are available.
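For example, a hypothetical docker-compose.yaml kept only so that Dependabot can track image versions (the image names exist on Docker Hub; the version tags below are placeholders):

    services:
      zeebe:
        image: camunda/zeebe:8.1.0
      operate:
        image: camunda/operate:8.1.0
      tasklist:
        image: camunda/tasklist:8.1.0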

Add ingress example for OpenShift using Routes

According to the OpenShift documentation, simply creating an ingress object with the appropriate endpoints would auto-generate Route objects, without requiring any ingress controller/class to be defined.

We haven't tried it, but it would be nice to provide a small example for how to do that, perhaps with TLS enabled as well as it seems pretty easy to do in OpenShift.
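A minimal sketch of such an Ingress (the hostname and the backend service name are assumptions; the Operate service name may differ in your release):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: operate
    spec:
      rules:
        - host: operate.apps.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: camunda-operate
                    port:
                      number: 80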

high-available-webapps: change shard replicas to 1

If you have 3 nodes and set 2 replicas, the cluster status will turn yellow if one node goes down. This is because, with 2 replicas, Elasticsearch expects three copies of the data (one primary and two replicas). If one node fails, it cannot maintain all three copies, hence the yellow state indicating that some replicas are not allocated.

With a 3-node Elasticsearch cluster, setting 1 replica per index would be a sensible configuration.
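For indices that already exist, the replica count can also be lowered on the fly with the Elasticsearch settings API (the host below is a placeholder):

    curl -X PUT "http://<elasticsearch-host>:9200/_all/_settings" \
      -H 'Content-Type: application/json' \
      -d '{"index": {"number_of_replicas": 1}}'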
