
HCL Workload Automation

Introduction

You can use Amazon Elastic Kubernetes Service (EKS) to run HCL Workload Automation containers on Amazon Web Services (AWS).

HCL Workload Automation is a complete, modern solution for batch and real-time workload management. It enables organizations to gain complete visibility and control over attended or unattended workloads. From a single point of control, it supports multiple platforms and provides advanced integration with enterprise applications including ERP, Business Analytics, File Transfer, Big Data, and Cloud applications.

As more and more organizations move their critical workloads to the cloud, there is an increasing demand for solutions and services that help them easily migrate and manage their cloud environment.

To respond to the growing request to make automation opportunities more accessible, HCL Workload Automation can be deployed on the Amazon Web Services cloud.

The information in this README contains the steps for deploying the following HCL Workload Automation components using a chart and container images:

HCL Workload Automation, which comprises master domain manager and its backup, Dynamic Workload Console, and Dynamic Agent

For more information about HCL Workload Automation, see the product documentation library in HCL Workload Automation Infocenter.

Details

By default, a single server (master domain manager), a single Dynamic Workload Console (console), and a single dynamic agent are installed.

To achieve high availability in an HCL Workload Automation environment, the minimum base configuration is composed of 2 Dynamic Workload Consoles and 2 servers (master domain managers). For more details about HCL Workload Automation and high availability, see:

An active-active high availability scenario.

HCL Workload Automation can be deployed across a single cluster, but you can add multiple instances of the product components by using a different namespace in the cluster. The product components can run in multiple failure zones in a single cluster.

In addition to the product components, the following objects are installed:

Agent Console Server (MDM)
Deployments
Pods wa-waagent-0 wa-waconsole-0 wa-waserver-0
Stateful Sets wa-waagent for dynamic agent wa-waconsole wa-waserver
Secrets wa-pwd-secret wa-pwd-secret
Certificates (Secret) wa-waagent wa-waserver wa-waserver
Network Policy da-network-policy dwc-network-policy mdm-network-policy
allow-mdm-to-mdm-network-policy
Services wa-waagent-h wa-waconsole wa-waconsole-h wa-waserver wa-waserver-h
PVC (generated from the Helm chart). The default deployment (replicaCount=1) includes a single server, console, and agent; a PVC is generated for each instance of each component: 1 PVC data-wa-waagent-waagent0, 1 PVC data-wa-waconsole-waconsole0, 1 PVC data-wa-waserver-waserver0
PV (Generated by PVC) 1 PV 1 PV 1 PV
Service Accounts
Roles wa-pod-role wa-pod-role wa-pod-role
Role Bindings wa-pod-role-binding wa-pod-role-binding wa-pod-role-binding
Cluster Roles {{ .Release.Namespace }}-wa-pod-cluster-role-get-routes (name of the ClusterRole, where {{ .Release.Namespace }} represents the name of the namespace <workload_automation_namespace>) {{ .Release.Namespace }}-wa-pod-cluster-role-get-routes {{ .Release.Namespace }}-wa-pod-cluster-role-get-routes
Cluster Role Bindings {{ .Release.Namespace }}-wa-pod-cluster-role-get-routes-binding (name of the ClusterRoleBinding, where {{ .Release.Namespace }} represents the name of the namespace <workload_automation_namespace>) {{ .Release.Namespace }}-wa-pod-cluster-role-get-routes-binding {{ .Release.Namespace }}-wa-pod-cluster-role-get-routes-binding
Ingress or Load Balancer Depends on the type of network enablement that is configured. See Network enablement

Data encryption:

  • Data in transit encrypted using TLS 1.2
  • Data at rest encrypted using passive disk encryption.
  • Secrets are stored as Kubernetes Secrets.
  • Logs are clear of all sensitive information.

Supported Platforms

  • Amazon Elastic Kubernetes Service (EKS) on amd64: 64-bit Intel/AMD x86

Accessing the container images

You can access the HCL Workload Automation chart and container images from the Entitled Registry. See Create the secret for more information about accessing the registry. The images are as follows:

  • hclcr.io/wa/hcl-workload-automation-agent-dynamic:9.5.0.03.20201228
  • hclcr.io/wa/hcl-workload-automation-server:9.5.0.03.20201228
  • hclcr.io/wa/hcl-workload-automation-console:9.5.0.03.20201228

Prerequisites

Before you begin the deployment process, ensure your environment meets the following prerequisites:

  • Amazon Elastic Kubernetes Service (EKS) installed and running
  • aws (command line)
  • Helm 3.0
  • OpenSSL
  • Grafana and Prometheus for monitoring dashboard
  • Jetstack cert-manager
  • Ingress controller: to manage the ingress service, ensure an ingress controller is correctly configured. For example, to configure an NGINX ingress controller, refer to NGINX Ingress Controller.
  • Kubernetes version: >=1.15 (no specific APIs need to be enabled)
  • kubectl command-line tool to control Kubernetes clusters
  • API key for accessing the HCL Entitled Registry: hclcr.io

Storage classes static PV and dynamic provisioning

Provider Disk Type PVC Size PVC Access Mode
AWS EBS GP2 SSD Default ReadWriteOnce
AWS EBS IO1 SSD Default ReadWriteOnce

For more details about the storage requirements for your persistent volume claims, see the Storage section of this README file. For additional details about AWS storage settings, see Storage classes.

Resources Required

The following resources correspond to the default values required to manage a production environment. These numbers might vary depending on the environment.

Component Container resource limit Container resource request
Server CPU: 4, Memory: 16Gi CPU: 1, Memory: 6Gi, Storage: 10Gi
Console CPU: 4, Memory: 16Gi CPU: 1, Memory: 4Gi, Storage: 5Gi
Dynamic Agent CPU: 1, Memory: 2Gi CPU: 200m, Memory: 200Mi, Storage size: 2Gi

Installing

Installing and configuring HCL Workload Automation involves the following high-level steps:

  1. Create the Namespace.
  2. Creating a Kubernetes Secret to store the entitlement key that gives access to the entitled registry for the HCL Workload Automation offering on your cluster.
  3. Securing communication using either Jetstack cert-manager or using your custom certificates.
  4. Creating a secrets file to store passwords for the console and server components, or if you use custom certificates, to add your custom certificates to the Certificates Secret.
  5. Deploying the product components.
  6. Verifying the installation.

Create the Namespace

To create the namespace, run the following command:

    kubectl create namespace <workload_automation_namespace>

Create the Secret

If you already have a license, you can proceed to obtain your entitlement key. To learn more about acquiring an HCL Workload Automation license, contact [email protected].

Obtain your entitlement key and store it on your cluster by creating a Kubernetes Secret. Using a Kubernetes secret allows you to securely store the key on your cluster and access the registry to download the chart and product images.

  1. Access the entitled registry.

Contact your HCL sales representative for the login details required to access the HCL Entitled Registry.

  2. To create a pull secret for your entitlement key that enables access to the entitled registry, run the following command:

      kubectl create secret docker-registry -n <workload_automation_namespace> sa-<workload_automation_namespace> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password>
    

    where,

    • <workload_automation_namespace> represents the namespace where the product components are installed
    • <registry_server> is hclcr.io
    • <user_name> is the user name provided by your HCL representative
    • <password> is the entitlement key (<api_key>) copied from the entitled registry
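
For example, a minimal sketch assuming a namespace named waprod and hypothetical credentials (substitute the namespace and the values provided by your HCL representative):

      kubectl create secret docker-registry -n waprod sa-waprod --docker-server=hclcr.io --docker-username=<user_name> --docker-password=<api_key>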

Securing communication

You secure communication using certificates. You can manage certificates using either Jetstack cert-manager or your own custom certificates. For information about using your own certificates, see the section Configuring. For more information about Jetstack cert-manager, see the cert-manager documentation.

Cert-manager is a Kubernetes addon that automates the management and issuance of TLS certificates. It verifies periodically that certificates are valid and up-to-date, and takes care of renewing them before they expire.

  1. Create the namespace for cert-manager.

     kubectl create namespace cert-manager
    
  2. Install cert-manager using a Helm chart by running the following commands:

    a. helm repo add jetstack https://charts.jetstack.io

    b. helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v0.15.1 --set installCRDs=true

  3. Create the Certificate Authority (CA) by running the following commands:

    a. .\openssl.exe genrsa -out ca.key 2048

    b. .\openssl.exe req -x509 -new -nodes -key ca.key -subj "/CN=WA_ROOT_CA" -days 3650 -out ca.crt

  4. Create the CA key pair secret by running the following command:

    kubectl create secret tls ca-key-pair --cert=ca.crt --key=ca.key -n <workload_automation_namespace>
    
  5. Create the Issuer under the namespace. Edit the issuer.yaml file with the namespace and CA key pair.

    a. Create the issuer.yaml as follows, specifying the namespace and CA key pair:

     apiVersion: cert-manager.io/v1alpha2
     kind: Issuer
     metadata:
       labels:
         app.kubernetes.io/name: cert-manager
       name: wa-ca-issuer
       namespace: <workload_automation_namespace>
     spec:
       ca:
         secretName: ca-key-pair
    

    b. Run the following command to create the issuer under the namespace:

     kubectl apply -f issuer.yaml -n <workload_automation_namespace>
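
    To confirm that the issuer was created and is ready, you can run a quick check (a sketch, assuming cert-manager has been installed as described above; the READY column should report True):

     kubectl get issuer wa-ca-issuer -n <workload_automation_namespace>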
    

Creating a secrets file

Create a secrets file to store passwords for the server, console and database, or if you use custom certificates, to add your custom certificates to the Certificates Secret.

Create secrets file to store passwords for the console and server components
  1. Manually create a mysecret.yaml file to store passwords. The mysecret.yaml file must contain the following parameters:

     apiVersion: v1
     kind: Secret
     metadata:
       name: wa-pwd-secret
       namespace: <workload_automation_namespace>
     type: Opaque
     data:
        WA_PASSWORD: <hidden_password>
        DB_ADMIN_PASSWORD: <hidden_password>
        DB_PASSWORD: <hidden_password>	
    

where:

  • wa-pwd-secret is the value of the pwdSecretName parameter defined in the Configuration Parameters section;
  • <workload_automation_namespace> is the namespace where you are going to deploy the HCL Workload Automation product components.
  • <hidden_password> must be entered as a base64-encoded value; to generate it, run the following command in a UNIX shell and copy the output into the yaml file: echo -n 'mypassword' | base64

Note: The echo command must be run separately for each password that you want to enter as a base64-encoded value in the mysecret.yaml file (see the sketch after this procedure):

  • WA_PASSWORD: <hidden password>
  • DB_ADMIN_PASSWORD: <hidden password>
  • DB_PASSWORD: <hidden password>
  2. Once the file has been created and filled in, import it:

    a. From the command line, log in to the AWS EKS cluster.

    b. Apply the secret (wa-pwd-secret), which stores the passwords for both the server and console components, by running the following command:

     kubectl apply -f <my_path>/mysecret.yaml -n <workload_automation_namespace>
    

where <my_path> is the location path of the mysecret.yaml file.
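
The following is a minimal sketch of how the base64-encoded values referenced in the note above can be generated and pasted into the data section of mysecret.yaml (the password shown is hypothetical):

      echo -n 'mypassword' | base64
      # bXlwYXNzd29yZA==

      # Paste each output into the corresponding field of the data section, for example:
      #   WA_PASSWORD: bXlwYXNzd29yZA==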

Deploying the product components

To deploy the HCL Workload Automation components, ensure you have first downloaded the chart from the HCL Entitled Registry (hclcr.io) and have unpacked it to a local directory. If you already have the chart, update it.

  1. Download the chart from the repository and unpack it to a local directory or, if you already have the chart, update it.

    First time installation and configuration of the chart:

    a. Add the repository:

     helm repo add <image_repository> https://hclcr.io/chartrepo/wa --username <username> --password <api_key>
    
    where <image_repository> represents the repository name of your choice

    b. Update the Helm chart:

    helm repo update 
    

    c. Pull the Helm chart:

      helm pull workload/hcl-workload-automation-prod
    

    where workload is the repository name (<image_repository>) that you chose in step a, and hcl-workload-automation-prod is the chart name

Update your chart:

    helm repo update 	 
  2. Customize the deployment. Configure each product component by adjusting the values in the values.yaml file. See these parameters and their default values in Configuration Parameters. By default, a single server, console, and agent are installed.

Note: If you specify the waconsole.engineHostName and waconsole.enginePort parameters in the values.yaml file, only a single engine connection related to an engine external to the cluster is automatically defined in the Dynamic Workload Console using the values assigned to these parameters. By default, the values for these parameters are blank, and the server is deployed within the cluster and the engine connection is related to the server in the cluster. If, instead, you deploy both a server within the cluster and one external to the cluster, a single engine connection is automatically created in the console using the values of the parameters related to the external engine (server). If you require an engine connection to the server deployed within the cluster, you must define the connection manually.
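
For example, a minimal values.yaml excerpt (the host name is hypothetical) that defines an engine connection to an engine external to the cluster:

     waconsole:
       engineHostName: wamdm.example.com   # hypothetical external engine host
       enginePort: 31116                   # HTTPS port of the external engine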

  3. Deploy the instance by running the following command:

     helm install -f values.yaml <workload_automation_release_name> workload/hcl-workload-automation-prod -n <workload_automation_namespace>
    

where, <workload_automation_release_name> is the deployment name of the instance. TIP: Because this name is used in the server component name and the pod names, use a short name or acronym when specifying this value to ensure it is readable.

The following are some useful Helm commands:

  • To list all of the Helm releases in all namespaces:

      helm list -A
    
  • To update the Helm release:

      helm upgrade <workload_automation_release_name> workload/hcl-workload-automation-prod -f values.yaml -n <workload_automation_namespace>
    
  • To delete the Helm release:

      helm uninstall <workload_automation_release_name> -n <workload_automation_namespace>
    

Verifying the installation

After the deployment procedure is complete, you can validate the deployment to ensure that everything is working.

To manually verify that the installation completed successfully, you can perform the following checks:

  1. Run the following command to verify the pods installed in the <workload_automation_namespace>:

        kubectl get pods -n <workload_automation_namespace>
    
  2. Locate the master pod name, which is in the format <workload_automation_release_name>-waserver-0.

  3. To access the master pod, open a bash shell and run the following command:

     kubectl exec -ti <workload_automation_release_name>-waserver-0 -n <workload_automation_namespace> -- /bin/bash
    
  4. Access the HCL Workload Automation pod and run the following commands:

    a. Composer list workstation: lists the workstation definitions in the database

     composer li cpu=/@/@
    

    An example of the output for this command is as follows:

listcpu output

b. conman showcpus: lists all workstations in the plan

    conman sc /@/@

An example of the output for this command is as follows:

showcpu output

c. Global option command optman ls:

    optman ls

This command lists the current values of all HCL Workload Automation global options. For more information about the global options see Global Options - detailed description.

  • Verify that the default engine connection is created from the Dynamic Workload Console

Verifying the default engine connection depends on the network enablement configuration you implement. To determine the URL to be used to connect to the console, follow the procedure for the appropriate network enablement configuration.

For load balancer:

  1. Run the following command to obtain the host name to be inserted in https://<loadbalancer>:9443/console to connect to the console:

     kubectl get svc <workload_automation_release_name>-waconsole-lb  -o 'jsonpath={..status.loadBalancer.ingress..hostname}' -n <workload_automation_namespace>
    
  2. With the output obtained, replace <loadbalancer> in the URL https://<loadbalancer>:9443/console.
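
As a convenience, the two steps can be combined in a short shell sketch (the variable name is arbitrary):

     LB_HOST=$(kubectl get svc <workload_automation_release_name>-waconsole-lb -o 'jsonpath={..status.loadBalancer.ingress..hostname}' -n <workload_automation_namespace>)
     echo "https://${LB_HOST}:9443/console"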

For ingress:

  1. Run the following command to obtain the host name to be inserted in https://<ingress>/console to connect to the console:

     kubectl get ingress/<workload_automation_release_name>-waconsole -o 'jsonpath={..host}' -n <workload_automation_namespace>
    
  2. With the output obtained, replace <ingress> in the URL https://<ingress>/console.

Logging into the console:

  1. Log in to the console by using the URLs obtained in the previous step.

  2. For the credentials, specify the user name (wauser) and the password stored in the wa-pwd-secret secret (the WA_PASSWORD value specified when you created the secrets file for the server, console, and database).

  3. From the navigation toolbar, select Administration -> Manage Engines.

  4. Verify that the default engine, engine_wa-waserver, is displayed in the Manage Engines list:

engine connection properties panel

Upgrading the Chart

Before you upgrade a chart, verify if there are jobs currently running and manually stop the related processes or wait until the jobs complete. To upgrade the release <workload_automation_release_name> to a new version of the chart, run the following command from the directory where the values.yaml file is located:

helm upgrade <workload_automation_release_name> workload/hcl-workload-automation-prod -f values.yaml -n <workload_automation_namespace>

Rolling Back the Chart

Before you roll back a chart, verify if there are jobs currently running and manually stop the related processes or wait until the jobs complete. To roll back the <workload_automation_release_name> release to a previous version of the chart, run the following commands:

  1. Identify the revision number to which you want to roll back by running the command:

    helm history <workload_automation_release_name> -n <workload_automation_namespace>
    
  2. Roll back to the specified revision number:

    helm rollback <workload_automation_release_name>  <revision-number> -n <workload_automation_namespace>
    

Uninstalling the Chart

To uninstall the deployed components associated with the chart and clean up the orphaned Persistent Volumes, follow these steps:

  1. Uninstall the hcl-workload-automation-prod deployment by running:

    helm uninstall <workload_automation_release_name> -n <workload_automation_namespace>
    

    The command removes all of the Kubernetes components associated with the chart and uninstalls the <workload_automation_release_name> release.

  2. Clean up orphaned Persistent Volumes by running the following command:

    kubectl delete pvc -l <workload_automation_release_name> -n <workload_automation_namespace> 
    

Configuration Parameters

The following tables list the configurable parameters of the chart (values.yaml), an example of the values, and the default values. The tables are organized as follows:


  • Global parameters

The following table lists the global configurable parameters of the chart relative to all product components and an example of their values:

Parameter Description Mandatory Example Default
global.license Use ACCEPT to agree to the license agreement yes not accepted not accepted
global.enableServer If enabled, the Server application is deployed no true true
global.enableConsole If enabled, the Console application is deployed no true true
global.enableAgent If enabled, the Agent application is deployed no true true
global.serviceAccountName The name of the serviceAccount to use. The HCL Workload Automation default service account (wauser) and not the default cluster account no default wauser
global.language The language of the container internal system. The supported languages are: en (English), de (German), es (Spanish), fr (French), it (Italian), ja (Japanese), ko (Korean), pt_BR (Portuguese (BR)), ru (Russian), zh_CN (Simplified Chinese) and zh_TW (Traditional Chinese) yes en en
global.customLabels This parameter contains two fields: name and value. Insert customizable labels to group resources linked together. no name: environment value: prod name: environment value: prod
global.enablePrometheus Use to enable (true) or disable (false) Prometheus metrics no true true
  • Agent parameters

The following table lists the configurable parameters of the chart relative to the agent and an example of their values:

Parameter Description Mandatory Example Default
waagent.fsGroupId The secondary group ID of the user no 999
waagent.supplementalGroupId Supplemental group id of the user no
waagent.replicaCount Number of replicas to deploy yes 1 1
waagent.image.repository HCL Workload Automation Agent image repository yes @DOCKER.AGENT.IMAGE.NAME@ @DOCKER.AGENT.IMAGE.NAME@
waagent.image.tag HCL Workload Automation Agent image tag yes @VERSION@ @VERSION@
waagent.image.pullPolicy image pull policy yes Always Always
waagent.agent.name Agent display name yes WA_AGT WA_AGT
waagent.agent.tz If used, it sets the TZ operating system environment variable no America/Chicago
waagent.agent.networkpolicyEgress Customize egress policy. If empty, no egress policy is defined no See Network enablement
waagent.agent.dynamic.server.mdmhostname Hostname or IP address of the master domain manager no (mandatory if a server is not present inside the same namespace) wamdm.demo.com
waagent.agent.dynamic.server.port The HTTPS port that the dynamic agent must use to connect to the master domain manager no 31116 31116
waagent.agent.dynamic.pools* The static pools of which the Agent should be a member no Pool1, Pool2
waagent.agent.dynamic.useCustomizedCert If true, customized SSL certificates are used to connect to the master domain manager no false false
waagent.agent.dynamic.certSecretName The name of the secret to store customized SSL certificates no waagent-cert-secret
waagent.agent.containerDebug The container is executed in debug mode no no no
waagent.agent.livenessProbe.initialDelaySeconds The number of seconds after which the liveness probe starts checking if the server is running yes 60 60
waagent.resources.requests.cpu The minimum CPU requested to run yes 200m 200m
waagent.resources.requests.memory The minimum memory requested to run yes 200Mi 200Mi
waagent.resources.limits.cpu The maximum CPU requested to run yes 1 1
waagent.resources.limits.memory The maximum memory requested to run yes 2Gi 2Gi
waagent.persistence.enabled If true, persistent volumes for the pods are used no true true
waagent.persistence.useDynamicProvisioning If true, StorageClasses are used to dynamically create persistent volumes for the pods no true true
waagent.persistence.dataPVC.name The prefix for the Persistent Volumes Claim name no data data
waagent.persistence.dataPVC.storageClassName The name of the Storage Class to be used. Leave empty to not use a storage class no nfs-dynamic
waagent.persistence.dataPVC.selector.label Volume label to bind (only limited to single label) no my-volume-label
waagent.persistence.dataPVC.selector.value Volume label value to bind (only limited to single value) no my-volume-value
waagent.persistence.dataPVC.size The minimum size of the Persistent Volume no 2Gi 2Gi

(*) Note: for details about static agent workstation pools, see: Workstation.

  • Dynamic Workload Console parameters

The following table lists the configurable parameters of the chart relative to the console and an example of their values:

Parameter Description Mandatory Example Default
waconsole.fsGroupId The secondary group ID of the user no 999
waconsole.supplementalGroupId Supplemental group id of the user no
waconsole.replicaCount Number of replicas to deploy yes 1 1
waconsole.image.repository HCL Workload Automation Console image repository yes @DOCKER.CONSOLE.IMAGE.NAME@
waconsole.image.tag HCL Workload Automation Console image tag yes @VERSION@
waconsole.image.pullPolicy Image pull policy yes Always Always
waconsole.console.containerDebug The container is executed in debug mode no no no
waconsole.console.db.type The preferred remote database server type (e.g. DERBY, DB2, ORACLE, MSSQL, IDS). Use Derby database only for demo or test purposes. yes DB2 DB2
waconsole.console.db.hostname The Hostname or the IP Address of the database server yes <dbhostname>
waconsole.console.db.port The port of the database server yes 50000 50000
waconsole.console.db.name Depending on the database type, the name is different; enter the name of the Server's database for DB2/Informix/MSSQL, enter the Oracle Service Name for Oracle yes TWS TWS
waconsole.console.db.tsName The name of the DATA table space no TWS_DATA
waconsole.console.db.tsPath The path of the DATA table space no TWS_DATA
waconsole.console.db.tsTempName The name of the TEMP table space (Valid only for Oracle) no TEMP leave it blank
waconsole.console.db.tssbspace The name of the SB table space (Valid only for IDS). no twssbspace twssbspace
waconsole.console.db.user The database user who accesses the Console tables on the database server. In case of Oracle, it identifies also the database. It can be specified in a secret too yes db2inst1
waconsole.console.db.adminUser The database user administrator who accesses the Console tables on the database server. It can be specified in a secret too yes db2inst1
waconsole.console.db.sslConnection If true, SSL is used to connect to the database (Valid only for DB2) no false false
waconsole.console.db.usepartitioning Enable the Oracle Partitioning feature. Valid only for Oracle. Ignored for other databases no true true
waconsole.engineHostName By default, the value of this parameter is set to blank. Specify this parameter together with the waconsole.enginePort parameter so that an engine connection is automatically defined using the specified host name and port number after deployment of the console. no 01.102.104.104 blank
waconsole.enginePort By default, the value of this parameter is set to blank. Specify this parameter together with the waconsole.engineHostName parameter so that an engine connection is automatically defined using the specified host name and port number after deployment of the console. no 31116 blank
waconsole.console.pwdSecretName The name of the secret to store all passwords yes wa-pwd-secret wa-pwd-secret
waconsole.console.livenessProbe.initialDelaySeconds The number of seconds after which the liveness probe starts checking if the server is running yes 100 100
waconsole.console.useCustomizedCert If true, customized SSL certificates are used to connect to the Dynamic Workload Console no false false
waconsole.console.certSecretName The name of the secret to store customized SSL certificates no waconsole-cert-secret
waconsole.console.libConfigName The name of the ConfigMap to store all custom liberty configuration no libertyConfigMap
waconsole.console.routes.enabled If true, the ingress controller rules are enabled no true true
waconsole.resources.requests.cpu The minimum CPU requested to run yes 1 1
waconsole.resources.requests.memory The minimum memory requested to run yes 4Gi 4Gi
waconsole.resources.limits.cpu The maximum CPU requested to run yes 4 4
waconsole.resources.limits.memory The maximum memory requested to run yes 16Gi 16Gi
waconsole.persistence.enabled If true, persistent volumes for the pods are used no true true
waconsole.persistence.useDynamicProvisioning If true, StorageClasses are used to dynamically create persistent volumes for the pods no true true
waconsole.persistence.dataPVC.name The prefix for the Persistent Volumes Claim name no data data
waconsole.persistence.dataPVC.storageClassName The name of the StorageClass to be used. Leave empty to not use a storage class no nfs-dynamic
waconsole.persistence.dataPVC.selector.label Volume label to bind (only limited to single label) no my-volume-label
waconsole.persistence.dataPVC.selector.value Volume label value to bind (only limited to single label) no my-volume-value
waconsole.persistence.dataPVC.size The minimum size of the Persistent Volume no 5Gi 5Gi
waconsole.console.exposeServiceType The network enablement configuration implemented. Valid values: LOAD BALANCER or INGRESS yes INGRESS
waconsole.console.exposeServiceAnnotation Annotations of either the resource of the service or the resource of the ingress, customized in accordance with the cloud provider yes
waconsole.console.networkpolicyEgress Customize egress policy. If empty, no egress policy is defined no See Network enablement
waconsole.console.ingressHostName The virtual hostname defined in the DNS used to reach the Console. yes, only if the network enablement implementation is INGRESS
waconsole.console.ingressSecretName The name of the secret to store certificates used by ingress. If not used, leave it empty. yes, only if the network enablement implementation is INGRESS. wa-console-ingress-secret
  • Server parameters

The following table lists the configurable parameters of the chart and an example of their values:

Parameter Description Mandatory Example Default
waserver.replicaCount Number of replicas to deploy yes 1 1
waserver.image.repository HCL Workload Automation server image repository yes <repository_url> The name of the image server repository
waserver.image.tag HCL Workload Automation server image tag yes 1.0.0 the server image tag
waserver.image.pullPolicy Image pull policy yes Always Always
waserver.fsGroupId The secondary group ID of the user no 999
waserver.server.company The name of your Company no my-company my-company
waserver.server.agentName The name to be assigned to the dynamic agent of the Server no WA_SAGT WA_AGT
waserver.server.dateFormat The date format defined in the plan no MM/DD/YYYY MM/DD/YYYY
waserver.server.timezone The timezone used in the create plan command no America/Chicago
waserver.server.startOfDay The start time of the plan processing day in 24 hour format: hhmm no 0000 0700
waserver.server.tz If used, it sets the TZ operating system environment variable no America/Chicago
waserver.server.createPlan If true, an automatic JnextPlan is executed at the same time of the container deployment no no no
waserver.server.containerDebug The container is executed in debug mode no no no
waserver.server.db.type The preferred remote database server type (e.g. DERBY, DB2, ORACLE, MSSQL, IDS) yes DB2 DB2
waserver.server.db.hostname The Hostname or the IP Address of the database server yes
waserver.server.db.port The port of the database server yes 50000 50000
waserver.server.db.name Depending on the database type, the name is different; enter the name of the Server's database for DB2/Informix/MSSQL, enter the Oracle Service Name for Oracle yes TWS TWS
waserver.server.db.tsName The name of the DATA table space no TWS_DATA
waserver.server.db.tsPath The path of the DATA table space no TWS_DATA
waserver.server.db.tsLogName The name of the LOG table space no TWS_LOG
waserver.server.db.tsLogPath The path of the LOG table space no TWS_LOG
waserver.server.db.tsPlanName The name of the PLAN table space no TWS_PLAN
waserver.server.db.tsPlanPath The path of the PLAN table space no TWS_PLAN
waserver.server.db.tsTempName The name of the TEMP table space (Valid only for Oracle) no TEMP leave it empty
waserver.server.db.tssbspace The name of the SB table space (Valid only for IDS) no twssbspace twssbspace
waserver.server.db.usepartitioning If true, the Oracle Partitioning feature is enabled. Valid only for Oracle, it is ignored by other databases. The default value is true no true true
waserver.server.db.user The database user who accesses the Server tables on the database server. In case of Oracle, it identifies also the database. It can be specified in a secret too yes db2inst1
waserver.server.db.adminUser The database user administrator who accesses the Server tables on the database server. It can be specified in a secret too yes db2inst1
waserver.server.db.sslConnection If true, SSL is used to connect to the database (Valid only for DB2) no false false
waserver.server.pwdSecretName The name of the secret to store all passwords yes wa-pwd-secret wa-pwd-secret
waserver.livenessProbe.initialDelaySeconds The number of seconds after which the liveness probe starts checking if the server is running yes 600 850
waserver.readinessProbe.initialDelaySeconds The number of seconds before the probe starts checking the readiness of the server yes 600 530
waserver.server.useCustomizedCert If true, customized SSL certificates are used to connect to the master domain manager no false false
waserver.server.certSecretName The name of the secret to store customized SSL certificates no waserver-cert-secret
waserver.server.libConfigName The name of the ConfigMap to store all custom liberty configuration no libertyConfigMap
waserver.server.routes.enabled If true, the routes controller rules are enabled no true true
waserver.server.routes.hostname The virtual hostname defined in the DNS used to reach the Server no server.mycluster.proxy
waserver.resources.requests.cpu The minimum CPU requested to run yes 1 1
waserver.resources.requests.memory The minimum memory requested to run yes 4Gi 4Gi
waserver.resources.limits.cpu The maximum CPU requested to run yes 4 4
waserver.resources.limits.memory The maximum memory requested to run yes 16Gi 16Gi
waserver.persistence.enabled If true, persistent volumes for the pods are used no true true
waserver.persistence.useDynamicProvisioning If true, StorageClasses are used to dynamically create persistent volumes for the pods no true true
waserver.persistence.dataPVC.name The prefix for the Persistent Volumes Claim name no data data
waserver.persistence.dataPVC.storageClassName The name of the StorageClass to be used. Leave empty to not use a storage class no nfs-dynamic
waserver.persistence.dataPVC.selector.label Volume label to bind (only limited to single label) no my-volume-label
waserver.persistence.dataPVC.selector.value Volume label value to bind (only limited to single value) no my-volume-value
waserver.persistence.dataPVC.size The minimum size of the Persistent Volume no 5Gi 5Gi
waserver.server.exposeServiceType The network enablement configuration implemented. Valid values: LOAD BALANCER or INGRESS yes INGRESS
waserver.server.exposeServiceAnnotation Annotations of either the resource of the service or the resource of the ingress, customized in accordance with the cloud provider yes
waserver.server.networkpolicyEgress Customize egress policy. If empty, no egress policy is defined no See Network enablement
waserver.server.ingressHostName The virtual hostname defined in the DNS used to reach the Server yes, only if the network enablement implementation is INGRESS
waserver.server.ingressSecretName The name of the secret to store certificates used by the ingress. If not used, leave it empty yes, only if the network enablement implementation is INGRESS wa-server-ingress-secret
waserver.server.licenseServerId The ID of the license server yes
waserver.server.licenseServerUrl The URL of the license server no

Configuring

The following procedures are ways in which you can configure the default deployment of the product components. They include the following configuration topics:

Network enablement

The HCL Workload Automation server and console can use two different ways to route external traffic into the Amazon Web Services Elastic Kubernetes Service cluster:

  • A load balancer service that redirects traffic
  • An ingress service that manages external access to the services in the cluster

You can freely switch between these two types of configuration.

Network policy

You can optionally specify an egress policy for the server, Dynamic Workload Console and agent. See the following example, which allows egress to another destination:

    networkpolicyEgress:
      - name: to-mdm
        egress:
          - to:
              - podSelector:
                  matchLabels:
                    app.kubernetes.io/name: waserver
            ports:
              - port: 31116
                protocol: TCP
      - name: dns
        egress:
          - to:
              - namespaceSelector:
                  matchLabels:
                    name: kube-system
            ports:
              - port: 53
                protocol: UDP
              - port: 53
                protocol: TCP

For more information, see Network Policies.
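
After deployment, you can verify the network policies created by the chart (da-network-policy, dwc-network-policy, mdm-network-policy, and allow-mdm-to-mdm-network-policy, as listed in the Details section) with a quick check:

     kubectl get networkpolicy -n <workload_automation_namespace>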

Load balancer service

  • Server:

    To configure a load balancer for the server, follow these steps:

  1. Locate the following parameters in the values.yaml file:

     exposeServiceType
     exposeServiceAnnotation
    

For more information about these configurable parameters, see the Server parameters table.

  2. Set the value of the exposeServiceType parameter to LoadBalancer.

  3. In the exposeServiceAnnotation section, uncomment the following lines:

     service.beta.kubernetes.io/aws-load-balancer-type: nlb
     service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    
  4. Specify the load balancer type and set the load balancer to internal by specifying "true".

  • Console:

    Because of a limitation in AWS EKS with sticky sessions, you can only use the load balancer on the console component for a single console instance. Configure the load balancer for the console as follows:

  1. Locate the following parameters in the values.yaml file:

     exposeServiceType
     exposeServiceAnnotation
    

    For more information about these configurable parameters, see the Console parameters table.

  2. Set the value of the exposeServiceType parameter to LoadBalancer.

  3. In the exposeServiceAnnotation section, uncomment the lines in this section as follows:

     service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
     service.beta.kubernetes.io/aws-load-balancer-type: "clb"
     #service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
     service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    
  4. Specify the load balancer protocol and type.

  5. Set the load balancer to internal by specifying "true".

Ingress service

  • Server:

    To configure an ingress for the server, follow these steps:

  1. Locate the following parameters in the values.yaml file:

     exposeServiceType
     exposeServiceAnnotation
    

    For more information about these configurable parameters, see the Server parameters table.

  2. Set the value of the exposeServiceType parameter to Ingress.

  3. In the exposeServiceAnnotation section, leave the following lines as comments:

     #service.beta.kubernetes.io/aws-load-balancer-type:nlb
     #service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    
  • Console:

    To configure an ingress for the console, follow these steps:

  1. Locate the following parameters in the values.yaml file:

     exposeServiceType
     exposeServiceAnnotation
    

For more information about these configurable parameters, see the Console parameters table.

  2. Set the value of the exposeServiceType parameter to Ingress.

  3. In the exposeServiceAnnotation section, uncomment only the line related to the cert-manager issuer and set the value. Leave the other lines as comments:

      cert-manager.io/issuer: wa-ca-issuer
     #service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
     #service.beta.kubernetes.io/aws-load-balancer-type: "clb"
     #service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
     #service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    

Enabling communication between product components in an on-premises offering with components in the Cloud

Follow these steps to manage certificates with on-premises components.

On-premises agents: To correctly trigger actions specified in the event rules defined on the agent, install an on-premises agent passing the -gateway local parameter.

To configure an on-premises agent to communicate with components in the cloud:

  1. Install the agent using the twsinst script on a local computer, passing the -gateway local parameter. If there are already other agents installed, the EIF port might already be in use by another instance. Specify a different port by passing the following parameter to the twsinst script: -gweifport <free-port>.

  2. Make a copy of the following AWS cloud server certificates located in the following path <DATA_DIR>/ITA/cpa/ita/cert:

  • TWSClientKeyStoreJKS.sth
  • TWSClientKeyStoreJKS.jks
  • TWSClientKeyStore.sth
  • TWSClientKeyStore.kdb
  3. Replace the files on the on-premises agent in the same path.

On-premises console engine connection (connection between an on-premises console and a server in the cloud):

  1. Copy the public CA root certificate from the server in the cloud to the on-premises console. Refer to the HCL Workload Automation product documentation for details about creating custom certificates for communication between the server and the console: Customizing certificates.

  2. To enable the changes, restart the Console workstation.

On-premises engine connection (connection between a console in the AWS cloud and an on-premises engine or another engine in a different namespace):

Access the master pod (server), extract the CA root certificate and, to add it to the console truststore, create a secret in the console namespace with the extracted certificate encoded in base64, as follows:

    kind: Secret
    apiVersion: v1
    metadata:
      name: <yourcert-server-crt>
      namespace: <workload_automation_namespace>
      labels:
        wa-import: 'true'
    data:
      tls.crt: <base64_encoded_certificate>
    type: Opaque
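
A minimal sketch of how this secret can be built and applied, assuming the extracted CA root certificate has been saved locally as ca.crt and the manifest above as engine-cert-secret.yaml (both file names are hypothetical):

    # Base64-encode the extracted certificate and paste the output into the tls.crt field:
    base64 -w0 ca.crt

    # Apply the secret in the console namespace:
    kubectl apply -f engine-cert-secret.yaml -n <workload_automation_namespace>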

Scaling the product

By default a single server, console, and agent is installed. If you want to change the topology for HCL Workload Automation, then increase or decrease the values of the replicaCount parameter in the values.yaml file for each component and save the changes.

Scaling up or down

To scale one or more HCL Workload Automation components up or down:

Modify the values of the replicaCount parameter in the values.yaml file for each component accordingly, and save the changes.
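
If you prefer not to edit values.yaml directly, the same change can be applied as a Helm upgrade override (a sketch, assuming the repository alias and release name used earlier in this README):

     helm upgrade <workload_automation_release_name> workload/hcl-workload-automation-prod --reuse-values --set waserver.replicaCount=2 --set waconsole.replicaCount=2 -n <workload_automation_namespace>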

Note: When you scale up a server, the additional server instances are installed with the Backup Master role, and the workstation definitions are automatically saved to the HCL Workload Automation relational database. To complete the scaling up of a server component, run JnextPlan -for 0000 -noremove from the server that has the role of master domain manager to add new backup master workstations to the current plan. The agent workstations installed on the new server instances are automatically saved in the database and linked to the Broker workstation with no further manual actions.

Note:

  • When you scale down any type of component, the persistent volume (PV) that the storage class created for the pod instance is not deleted, so that no data is lost if the scale-down turns out to be unintended. When you subsequently scale up again, the new component instances are installed by reusing the old PVs.
  • When you scale down a server or agent component, the workstation definitions are not removed from the database; you can manually delete them or set them to ignore to avoid having a non-working workstation in the plan. If you need an immediate change to the plan, run the following command from the master workstation:
    JnextPlan -for 0000 -remove

Scaling to 0

The HCL Workload Automation Helm chart does not support automatic scaling to zero. If you want to manually scale the Dynamic Workload Console component to zero, set the value of the replicaCount parameter to zero. To maintain the current HCL Workload Automation scheduling and topology, do not set the replicaCount value for the server and agent components to zero.

Proportional scaling

The HCL Workload Automation Helm chart does not support proportional scaling.

Managing your custom certificates

If you use customized certificates, useCustomizedCert:true, you must create a secret containing the customized files that will replace the Server default ones in the <workload_automation_namespace>. Customized files must have the same name as the default ones.

  • TWSClientKeyStoreJKS.sth
  • TWSClientKeyStore.kdb
  • TWSClientKeyStore.sth
  • TWSClientKeyStoreJKS.jks
  • TWSServerTrustFile.jks
  • TWSServerTrustFile.jks.pwd
  • TWSServerKeyFile.jks
  • TWSServerKeyFile.jks.pwd
  • ltpa.keys (The ltpa.keys certificate is required only if you use Single Sign-On with LTPA)

If you want to use custom certificates, set useCustomizedCert:true and use kubectl to apply the secret in the <workload_automation_namespace>. For the master domain manager, type the following command:

kubectl create secret generic waserver-cert-secret --from-file=TWSServerKeyFile.jks --from-file=TWSServerKeyFile.jks.pwd --from-file=TWSServerTrustFile.jks --from-file=TWSServerTrustFile.jks.pwd --from-file=ltpa.keys -n <workload_automation_namespace>

For the Dynamic Workload Console, type the following command:

 kubectl create secret generic waconsole-cert-secret --from-file=TWSServerKeyFile.jks --from-file=TWSServerKeyFile.jks.pwd --from-file=TWSServerTrustFile.jks --from-file=TWSServerTrustFile.jks.pwd --from-file=ltpa.keys -n <workload_automation_namespace>   
   

For the dynamic agent, type the following command:

kubectl create secret generic waagent-cert-secret --from-file=TWSClientKeyStore.kdb --from-file=TWSClientKeyStore.sth --from-file=TWSClientKeyStoreJKS.jks --from-file=TWSClientKeyStoreJKS.sth -n <workload_automation_namespace>    

where TWSClientKeyStoreJKS.sth, TWSClientKeyStore.kdb, TWSClientKeyStore.sth, TWSClientKeyStoreJKS.jks, TWSServerTrustFile.jks and TWSServerKeyFile.jks are the container keystores and stash files containing your customized certificates.

For details about custom certificates, see Connection security overview.

Note: Passwords for "TWSServerTrustFile.jks" and "TWSServerKeyFile.jks" files must be entered in the respective "TWSServerTrustFile.jks.pwd" and "TWSServerKeyFile.jks.pwd" files.

(**) Note: if you set db.sslConnection:true, you must also set the useCustomizedCert setting to true (on both server and console charts) and, in addition, you must add the following certificates in the customized SSL certificates secret on both the server and console charts:

  • TWSServerTrustFile.jks
  • TWSServerKeyFile.jks
  • TWSServerTrustFile.jks.pwd
  • TWSServerKeyFile.jks.pwd

Customized files must have the same name as the ones listed above.

If you want to use SSL connection to DB, set db.sslConnection:true and useCustomizedCert:true, then use kubectl to create the secret in the same namespace where you want to deploy the chart:

    kubectl create secret generic release_name-secret --from-file=TWSServerTrustFile.jks --from-file=TWSServerKeyFile.jks --from-file=TWSServerTrustFile.jks.pwd --from-file=TWSServerKeyFile.jks.pwd --namespace=<workload_automation_namespace>

If you define custom certificates, you are in charge of keeping them up to date, therefore, ensure you check their duration and plan to rotate them as necessary. To rotate custom certificates, delete the previous secret and upload a new secret, containing new certificates. The pod restarts automatically and the new certificates are applied.
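
For example, a minimal rotation sketch for the dynamic agent certificates secret, using the file names listed above:

    # Remove the expiring secret:
    kubectl delete secret waagent-cert-secret -n <workload_automation_namespace>

    # Recreate it from the renewed certificate files; the pod restarts automatically and applies them:
    kubectl create secret generic waagent-cert-secret --from-file=TWSClientKeyStore.kdb --from-file=TWSClientKeyStore.sth --from-file=TWSClientKeyStoreJKS.jks --from-file=TWSClientKeyStoreJKS.sth -n <workload_automation_namespace>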

Installing Automation Hub plug-ins

You can extend HCL Workload Automation with a number of out-of-the-box integrations available on Automation Hub.

To download the plug-ins:

  1. Download the plug-ins from Automation Hub and extract the contents to a folder on the local computer.

  2. From your local computer, go to the <plug-ins_folder> folder and run the following command:

      kubectl cp <plugin_name.jar> <namespace>/<master-pod>:/home/wauser/wadata/wa_plugins/<plugin_name.jar>
    

where <master_pod> is the pod of the workstation with the master role. The default is <workload_automation_release_name>-waserver-0.

  3. Access the <master_pod> pod terminal and run the following command:

cp /home/wauser/wadata/wa_plugins/<plugin_name.jar> /opt/wa/TWS/applicationJobPlugIn/<plugin_name.jar>

  4. To grant execution permissions for the .jar file, run:

chmod 755 /opt/wa/TWS/applicationJobPlugIn/<plugin_name.jar>

  5. From the <master_pod> pod, restart the application server by running:
  • /opt/wa/appservertools/appserverstart.sh stop
  • /opt/wa/appservertools/appserverstart.sh start

NOTE: Each time you restart the master pod, the plug-in you downloaded is removed. After a restart, rerun the procedure from step 3 to step 5.

NOTE:

A Kubernetes job plug-in, which enables you to automate the execution of Kubernetes jobs, is available on Automation Hub: https://www.yourautomationhub.io/detail/kubernetes_9_5_0_02. Follow the steps in the previous procedure to download and install the plug-in.

Storage

Storage requirements for the workload

HCL Workload Automation requires persistent storage for each component (server, console and agent) that you deploy to maintain the scheduling workload and topology.

To make all of the configuration and runtime data persistent, the Persistent Volume you specify must be mounted in the following container folder:

/home/wauser

The Pod is based on a StatefulSet. This guarantees that each Persistent Volume is mounted in the same Pod when it is scaled up or down.

For test purposes only, you can configure the chart so that persistence is not used.

HCL Workload Automation can use either dynamic provisioning or static provisioning using a pre-created persistent volume
to allocate storage for each component that you deploy. You can pre-create Persistent Volumes to be bound to the StatefulSet using Label or StorageClass. It is highly recommended to use persistence with dynamic provisioning. In this case, you must have defined your own Dynamic Persistence Provider. HCL Workload Automation supports the following provisioning use cases:

  • Kubernetes dynamic volume provisioning to create both a persistent volume and a persistent volume claim. This type of storage uses the default storageClass defined by the Kubernetes admin or by using a custom storageClass which overrides the default. Set the values as follows:

    • persistence.enabled:true (default)
    • persistence.useDynamicProvisioning:true(default)

Specify a custom storageClassName per volume or leave the value blank to use the default storageClass.

  • Persistent storage using a predefined PersistentVolume set up prior to the deployment of this chart. Pre-create a persistent volume. If you configure the label=value pair described in the following Note, then the persistent volume claim is automatically generated by the Helm chart and bound to the persistent volume you pre-created. Set the global values as follows:

    • persistence.enabled:true
    • persistence.useDynamicProvisioning:false

Note: By configuring the following two parameters, the persistent volume claim is automatically generated. Ensure that this label=value pair is inserted in the persistent volume you created:

  • <wa-component>.persistence.dataPVC.selector.label
  • <wa-component>.persistence.dataPVC.selector.value

Let the Kubernetes binding process select a pre-existing volume based on the accessMode and size. Use selector labels to refine the binding process.
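
The following is a minimal sketch of a pre-created, EBS-backed PersistentVolume carrying the selector label, together with the matching chart values; the volume name, label, and EBS volume ID are hypothetical:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: wa-server-pv
      labels:
        my-volume-label: my-volume-value   # must match the selector.label/selector.value chart parameters
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      awsElasticBlockStore:
        volumeID: <ebs_volume_id>
        fsType: ext4

    # Matching values.yaml excerpt for the server component:
    waserver:
      persistence:
        enabled: true
        useDynamicProvisioning: false
        dataPVC:
          selector:
            label: my-volume-label
            value: my-volume-value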

Before you deploy all of the components, you have the opportunity to choose your persistent storage from the available persistent storage options in AWS Elastic Kubernetes Service that are supported by HCL Workload Automation or, you can leave the default storageClass.
For more information about all of the supported storage classes, see the table in Storage classes static PV and dynamic provisioning.

If you create a storageClass object or use the default one, ensure that you have a sufficient amount of backing storage for your HCL Workload Automation components.
For more information about the required amount of storage you need for each component, see the Resources Required section.

Custom storage class:
Modify the persistence.dataPVC.storageClassName parameter in the YAML file by specifying the custom storage class name, when you deploy the HCL Workload Automation product components.

Default storage class:
Leave the values for the persistence.dataPVC.storageClassName parameter blank in the YAML file when you deploy the HCL Workload Automation product components.
For more information about the storage parameter values to set in the YAML file, see the tables, Agent parameters, Dynamic Workload Console parameters, and Server parameters (master domain manager).
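
For example, a minimal values.yaml excerpt that sets a custom storage class for the server component (gp2 is used here only as a hypothetical class name; substitute the class defined in your cluster):

    waserver:
      persistence:
        enabled: true
        useDynamicProvisioning: true
        dataPVC:
          storageClassName: gp2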

File system permissions

File system security permissions need to be well known to ensure uid, gid, and supplemental gid requirements can be satisfied. On Kubernetes native, UID 999 is used.

Persistent volume storage access modes

HCL Workload Automation supports only ReadWriteOnce (RWO) access mode. The volume can be mounted as read-write by a single node.

Metrics monitoring

HCL Workload Automation uses Grafana to display performance data related to the product and metrics related to the server and console application servers (WebSphere Application Server Liberty Base). Grafana is an open source tool for visualizing application metrics. Metrics provide insight into the state, health, and performance of your deployments and infrastructure. HCL Workload Automation cloud metric monitoring uses an opensource Cloud Native Computing Foundation (CNCF) project called Prometheus.

The following metrics are collected and available to be visualized in the preconfigured Grafana dashboard. The dashboard is named <workload_automation_namespace> <workload_automation_release_name>:

Metric Name Description
Workload Workload by job status: SUCC, ABEND, HOLD, CANCEL, FAIL, READY, SUPPR
PODs For server, console and agent pods: pod restarts, failing pods, CPU usage, and RAM usage. For the console and server: Network I/O
Persistent Volumes For server, console, and agent: volume capacity (used, free)
WA Server - Internal Message Queues Queue usage for mirrorbox.msg, Mailbox.msg, Intercom.msg, and Courier.msg
WA Server - Liberty Heap usage percentage, active sessions, live sessions, active threads, threadpool size, time per garbage collection cycle moving average
WA Server - Connection Pools (Liberty) average time usage per connection over last, managed connections, free connections, connection handles, created and destroyed connections
WA Console - Liberty Heap usage percentage, active sessions, live sessions, active threads, threadpool size, time per garbage collection cycle moving average
WA Console - Connection Pools (Liberty) average time usage per connection over last, managed connections, free connections, connection handles, created and destroyed connections

The following is an example of the various metrics available with focus on the workload job status:

Job status

The following is an example of how persistent volume capacity for the server, console and agent is visualized:

Persistent volume capacity

Setting the Grafana service on the EKS cluster

Before you set the Grafana service on the EKS cluster, ensure that you have already installed Grafana and Prometheus on the EKS cluster. For information about deploying Grafana see Install Grafana. For information about deploying the open-source Prometheus project see Download Prometheus.

  1. Log in to the EKS cluster. To identify where Grafana is deployed, retrieve the value for the <grafana-namespace> by running:

       helm list -A
    
  2. Download the grafana_values.yaml file by running:

     helm get values grafana -o yaml -n <grafana-namespace> > grafana_values.yaml
    
  3. Modify the grafana_values.yaml file by setting the following parameter values:

     dashboards:
       SCProvider: true
       enabled: true
       folder: /tmp/dashboards
       label: grafana_dashboard
       provider:
         allowUiUpdates: false
         disableDelete: false
         folder: ""
         name: sidecarProvider
         orgid: 1
         type: file
       searchNamespace: ALL
    
  4. Update the grafana_values.yaml file in the Grafana pod by running the following command:

    helm upgrade grafana stable/grafana -f grafana_values.yaml -n <grafana-namespace>

  5. To access the Grafana console:

    a. Get the EXTERNAL-IP address value of the Grafana service by running:

     kubectl get services -n <grafana-namespace>
    

    b. Browse to the EXTERNAL-IP address and log in to the Grafana console.

Viewing the preconfigured dashboard in Grafana

To get an overview of the cluster health, you can view a selection of metrics on the predefined dashboard:

  1. In the left navigation toolbar, click Dashboards.

  2. On the Manage page, select the predefined dashboard named <workload_automation_namespace> <workload_automation_release_name>.

For more information about using Grafana dashboards see Dashboards overview.

Limitations

  • Limited to amd64 platforms.
  • Anonymous connections are not permitted.
  • When sharing Dynamic Workload Console resources, such as tasks, engines, scheduling objects and so on, with groups, ensure that the user sharing the resource is a member of the group with which the resource is being shared.
  • On AWS, LDAP configuration on the chart is not supported. Manual configuration is required using the traditional LDAP configuration.

Documentation

To access the complete product documentation library for HCL Workload Automation, see https://workloadautomation.hcldoc.com/help/index.jsp.

Troubleshooting

In case of problems related to deploying the product with containers, see Troubleshooting.

Known problems

Problem: The broker server cannot be contacted. The Dynamic Workload Broker command line requires additional configuration steps.

Workaround: Perform the following configuration steps to enable the Dynamic Workload Broker command line:

  1. From the machine where you want to use the Dynamic Workload Broker command line (the master domain manager (server) or a dynamic agent), locate the following file:

    /home/wauser/wadata/TDWB_CLI/config/CLIConfig.properties

  2. Modify the values for the fields, keyStore and trustStore, in the CLIConfig.properties file as follows:

    keyStore=/home/wauser/wadata/ITA/cpa/ita/cert/TWSClientKeyStoreJKS.jks

    trustStore=/home/wauser/wadata/ITA/cpa/ita/cert/TWSClientKeyStoreJKS.jks

  3. Save the changes to the file.
