
process-engine-api's Introduction



This repository contains the BPM-Crafters website, as well as various best practices and the API client documentation, all of which can be found under https://bpm-crafters.dev

Documentation Guidelines

PRs for every change

All changes have to be made in a separate branch. As soon as the changes are done, please open a PR. A GitHub Action runs with every commit to a branch and checks whether the documentation can be built. If you create a new branch, make sure to name it according to what it does (e.g. feat/xyz or fix/xyz). Please use semantic commit messages as described here.

Structure

Name Markdown files according to the title. This makes it easier to find a file. Avoid non-alphanumeric characters in titles. Use the file name as an internal document id to reference in the appropriate sidebars file.

Style Guide

We will be using the writing style guide defined by Camunda. It outlines writing techniques and practices to ensure uniform styling across documentation and to yield a more cohesive and organized user experience.

Setup

Installation

npm install

Local Development

npm run start

Troubleshooting Checklist

Have you pulled the latest from main? Have you run npm install? When we update dependencies in the project, they don't automatically get updated in your environment. You'll need to run npm install occasionally to acquire dependency updates locally.

Creating new files

If you have created a new file for the documentation, always make sure that it contains a proper front matter header:

---
id: best-practices-overview
title: Best Practices Overview
sidebar_label: Overview
description: "This section provides an overview of the different BPM Best Practices."
---

If the page should show up in the sidebar, it needs to be added to sidebars.js. By default, this should always be the case.
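The exact shape depends on the project's sidebars file; assuming a Docusaurus sidebars.js with a sidebar named docs, the front matter id above would be referenced roughly like this (a sketch, not the actual file contents):

```javascript
// sidebars.js — hypothetical excerpt; the entries must match the `id`
// declared in each file's front matter header
module.exports = {
  docs: [
    {
      type: 'category',
      label: 'Best Practices',
      items: ['best-practices-overview'], // the front matter id from above
    },
  ],
};
```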

Configuration

This documentation is built using Docusaurus 3, a modern static website generator. The framework is well documented and is used by many (open source) projects. The documentation can be customized by setting parameters in docusaurus.config.js. Parameters are described here: https://v2.docusaurus.io/docs/docusaurus.config.js.

process-engine-api's People

Contributors

jangalinski, pschalk, zambrovski


process-engine-api's Issues

Provide Micronaut implementation

Scenario

  • Library version: 0.0.1
  • Description of your use case: I'm not using Spring Boot but Micronaut and want the library to support it natively.

Current Behaviour

  • Core library provides basic functionality
  • The Spring Boot starter provides Spring Boot support

Wanted Behaviour

  • An additional artifact provides Micronaut integration

Possible Workarounds

  • Use the Core modules directly and wire them into the application's Micronaut code

Provide a better user task delivery strategy for C8

Scenario

  • Description of your use case: in order not to lose tasks and get out of sync, we need a better task delivery strategy for user tasks.

Current Behaviour

  • Only the pull strategy is implemented

Wanted Behaviour

  • A strategy for a full sync is desired (not only fetching new tasks)

Possible Workarounds

Provide camunda cockpit for C7 example

Scenario

We start the process via Swagger and watch the logs, but it would be nice to also check the state in the Camunda Cockpit.

Setup:

  • Camunda Cockpit CE
  • history plugin
  • auto login

Create a naive C7 implementation

Scenario

In order to make sure we have a good foundation for discussion, let us create a simple, naive implementation of all APIs.

This implementation should be directly (without any additional libraries) used by the Java Example described in #1

Create Java-based example of the API invocations.

Scenario

In order to make sure that the API written in Kotlin is callable from Java, provide an example invoking all methods of the API. In doing so, make sure that the API does not use any Kotlin constructs that are awkward to use from Java.

Make sure strategies are re-synced after restart

Scenario

A non-pull delivery strategy (e.g. job-based, embedded for user tasks) delivers a user task and then restarts. The newly started instance no longer knows that it has delivered a user task. If the task is then terminated in the engine (instance killed, task cancelled, or something else), the newly started delivery strategy should mark the task as non-existent. Currently this is not possible, because after the restart we lose the activatedSubscriptions with their callbacks.

Idea

For the job-based delivery strategy, there should still be an initial pull to sync with the tasks in the engine.

Problem

The initial pull solves the problem of missed tasks (created before the start). But if a task doesn't exist anymore, we have the problem of deleting a previously existing task that is no longer in the engine.

Possible solution alternatives

  1. The activeSubscriptions provide persistence for already delivered tasks. These are then synced with the tasks delivered by the initial pull, and for the missing ones we execute the terminate handler.
  2. The job-based strategy makes an initial pull to get the tasks existing in the engine.
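Alternative 1 could be sketched roughly as a set reconciliation after the initial pull; every name below is hypothetical and not part of the actual API:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

// Hypothetical sketch of alternative 1: reconcile the persisted set of
// already delivered tasks with the result of the initial pull.
public class RestartReconciler {
    public static void reconcile(Set<String> deliveredTaskIds,   // persisted before the restart
                                 Set<String> engineTaskIds,      // result of the initial pull
                                 Consumer<String> deliverHandler,    // tasks missed during downtime
                                 Consumer<String> terminateHandler) { // tasks gone from the engine
        Set<String> missed = new HashSet<>(engineTaskIds);
        missed.removeAll(deliveredTaskIds);
        missed.forEach(deliverHandler);

        Set<String> vanished = new HashSet<>(deliveredTaskIds);
        vanished.removeAll(engineTaskIds);
        vanished.forEach(terminateHandler);
    }
}
```

Alternative 2 would skip the persisted deliveredTaskIds set entirely and simply deliver everything returned by the initial pull.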

Favorite solution

Let's try to avoid the need for persistence and use the initial pull for the job-based strategy.

Further discussion

@stephanpelikan: it seems that the persistence for activeSubscriptions is not needed. This is your preferred solution. Could you mentally go through the delivery strategies and think of any other corner cases before we consider this issue solved?

Adapt the variable list to support "no variables"

Scenario

As a developer, when passing variable lists, I want three use cases to be supported:

  1. Retrieve all available variables: the list parameter is given as null
  2. Retrieve no variables: the list parameter is given and contains no items
  3. Retrieve some variables: the list parameter is given and contains the variable names as items
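The three cases can be sketched with a nullable list encoding "all"; the class, method name, and filtering rule below are illustrative assumptions, not the library's API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the three variable-list cases.
public class VariableFilter {
    // requested == null -> all variables; empty -> none; otherwise -> the subset
    public static Map<String, Object> fetch(Map<String, Object> allVariables,
                                            List<String> requested) {
        if (requested == null) {
            return allVariables;               // case 1: all variables
        }
        Map<String, Object> result = new HashMap<>();
        for (String name : requested) {        // cases 2 and 3: empty or subset
            if (allVariables.containsKey(name)) {
                result.put(name, allVariables.get(name));
            }
        }
        return result;
    }
}
```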

Allow multiple adapter implementations at runtime at the same time

Scenario

  • Library version: 0.0.1
  • Description of your use case: I want to be able to have multiple process-engine-api adapters at runtime at the same time, in order to allow for migrations between different engines.

Current Behaviour

  • Only one instance of the process-engine-api ports is provided at runtime by the Spring Boot starter

Wanted Behaviour

  • Several instances are available

Possible Workarounds

  • none

Create a possibility to react on task removal after delivery to a handler

Current

After the subscription, the task handler will eventually be invoked with a concrete TaskInformation and payload. For synchronous execution this is fine, since the handler can call the TaskApi.complete method. For asynchronous execution (the handler call stack is interrupted between the delivery of the task and its completion), this becomes a problem if the task is not available anymore.

Example

The handler builds a pool of available tasks (e.g. for user tasks); the task gets completed or deleted, and there is no way to signal this to the pool.

Proposal

During registration, we provide an additional callback to be invoked by the implementation if the task becomes unavailable AFTER it has been passed to the handler.

Usage examples

  • External task times out
  • External task is deleted
  • User task is completed / deleted

Conclusion

On the events above, the task handler gets notified that the task is no longer available and can react accordingly.
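A minimal sketch of the proposed shape, using the pool example from above; the class, the two callback methods, and their signatures are all hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: the pool reacts both to task delivery and to the
// proposed removal callback, so it never holds tasks the engine has dropped.
public class TaskPool {
    private final Map<String, Map<String, Object>> open = new HashMap<>();

    // called by the adapter on delivery of a task to the handler
    public void onTaskDelivered(String taskId, Map<String, Object> payload) {
        open.put(taskId, payload);
    }

    // the proposed additional callback: the task timed out, was deleted,
    // or was completed elsewhere AFTER it was handed to the handler
    public void onTaskRemoved(String taskId, String reason) {
        open.remove(taskId);
    }

    public Set<String> openTaskIds() {
        return open.keySet();
    }
}
```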

Provide meta information about the task (Map<String, String>) to the task handler

Current situation

During the subscription, we can specify some restrictions to be applied. Then, on delivery of the task to the subscribed handler, we currently only pass the taskId and payload to the handler.

This is good, but not enough.

Examples

  • I want to have a handler for multiple task types and need to distinguish them
  • In the case of user task management, I will probably build one handler for all tasks, which feeds some kind of projection responsible for answering queries about tasks. To be able to build this projection, I'll need a set of attributes about the task.

Idea

Deliver a Map<String, String> to the task handler for every task.

Additions

This is similar to what we decided to deliver back after starting the process instance. Depending on the adapter, there might be different keys in this map.
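A sketch of how a single handler might use such a map to distinguish task types; the key "taskType" and its values are assumptions, not a defined contract:

```java
import java.util.Map;

// Hypothetical handler that routes tasks based on the proposed meta map.
public class MetaAwareHandler {
    public static String handle(String taskId, Map<String, Object> payload,
                                Map<String, String> meta) {
        String taskType = meta.getOrDefault("taskType", "UNKNOWN");
        switch (taskType) {
            case "USER_TASK":    return "routed " + taskId + " to user task projection";
            case "SERVICE_TASK": return "executed service logic for " + taskId;
            default:             return "ignored " + taskId;
        }
    }
}
```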

Avoid nearly engine-specific API for task subscription

Scenario

How to define a task description which will work with any BPMN 2.0 based process-engine.

Current Behaviour

In SubscribeForTaskCmd.kt, a taskDefinitionKey is used to define the task one wants to subscribe to. I ask myself whether a task's definition key is really available in all engines. The activity's ID, in contrast, is available for sure. In BPMN 2.0 there is a special attribute implementation for naming implementation URIs, and there is also an operation attribute with an attribute implementationRef, described by: "This attribute allows to reference a concrete artifact in the underlying implementation technology representing that operation, such as a WSDL operation." Maybe we should name it implementationRef or implementation instead. Actually, operation also consists of an input message specification, which would replace the attribute payloadDescription. Some engines like Camunda also define input mappings.

Additionally, I asked myself whether the taskType is specific to Camunda, but from a BPMN 2.0 point of view there is also a difference: the operation attribute is only available to service-like tasks, while the implementation attribute is available to all tasks. So, from the BPMN 2.0 point of view, there is a difference between user tasks and service-like tasks, and therefore we can pass this attribute as it is. On the other hand, a process engine might also have different specific APIs for each kind of task. So, why not list the kinds of tasks in TaskType and let the adapter decide what that means for the specific engine implementation?

Also, I miss the possibility to define a processDefinitionKey or a tenantId to limit subscriptions to that specification. This is not possible for Camunda 8, but it is for Camunda 7 and maybe for other engines as well.

Wanted Behaviour

I think we have to add a comment to taskDescriptionKey:

/*
May refer to the BPMN 2.0 attribute `implementation` or `operation[@implementationRef]` or
any engine-specific attribute of the task XML tag. As a fallback, an adapter implementation
should also accept a task's `id` attribute, since this is the only attribute common to all engines.
*/

I think the first sentence is quite OK. What do you think about the second sentence and its implication?

An implicit question regarding the second sentence is: do we want the engine adapter to be aware of the currently deployed BPMN and of all previously deployed and still active versions? If the adapter has access to the BPMNs, it is easy to implement fallbacks like checking for the activity's id. If we decide that the adapter must work without any BPMN reference, then the value of taskDescriptionKey is not generic anymore. Maybe we should skip the second sentence and let any higher-order adapter (like VanillaBP) handle this.

Regarding taskType, processDefinitionKey, tenantId and others: maybe we should define a map instead of taskType which holds these values. Additionally, there might be engine-specific keys like tenantId which may be filled by higher-order adapters:

/*
 * Any additional engine-specific restrictions to limit this subscription for. Examples:
 * 1. `taskType`: `USER_TASK`
 * 2. `tenantId`: `myAwesomeTenant`
 * Check out the engine adapter for the available keys and their meaning.
 */
restrictions: Map<String, String>;

Create local setup with docker-compose

Scenario

  • Library version:
  • Description of your use case: (detailed description or executable reproducer, e.g. GitHub repo)

Current Behaviour

Wanted Behaviour

Possible Workarounds

Implement C7 remote adapter

Scenario

  • Library version: 0.0.1
  • Description of your use case: I want to use C7 in standalone mode and connect to it as an application. This is sometimes a first step towards a migration to C8, so applications move from C7-embedded to C7-remote to C8.

Correlation incompatibility

Expected behaviour

For message correlation (as for any part of process api) it should be possible to implement code that is portable between engines.

Actual behaviour

C8 requires: correlationKey (in BPMN this is a FEEL expression, usually pointing to the variable name)
C8 supports: messageId, tenantId, message TTL

C7 requires: nothing
C7 supports: processDefinitionId, processInstanceId, executionId, localVariable, processVariable, tenantId

How can I develop a correlation client in a way that it works with both engines without modifications? The currently used () -> Correlation(restrictions: Map<String, String>) seems not to be sufficient for this...

So even if I want to do the same thing (correlate on a process variable), I'll need to deliver for C7:

Map.of("processVariable", "my-value")

and in C8:

Map.of("correlationKey", "my-value")

Any ideas?
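One illustrative direction, not a proposed API: the client states a correlation value once, and each adapter maps it onto the restriction key its engine requires. All names below are hypothetical:

```java
import java.util.Map;

// Hypothetical sketch: engine-specific restriction keys are hidden behind
// the adapter, so client code stays portable between C7 and C8.
public class CorrelationRestrictions {
    public static Map<String, String> forEngine(String engine, String correlationValue) {
        switch (engine) {
            case "C7": return Map.of("processVariable", correlationValue);
            case "C8": return Map.of("correlationKey", correlationValue);
            default:   throw new IllegalArgumentException("unknown engine: " + engine);
        }
    }
}
```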

Support subscriptions for expressions

Scenario

Some process engines do not store business values and therefore do not evaluate expressions (e.g. on flows out of exclusive gateways) on their own. They delegate this to the client instead.

Current Behaviour

There is no possibility to subscribe to expressions.

Wanted Behaviour

Add another command for subscribing to expressions to be evaluated:

data class SubscribeForExpressionsCmd(
  var restrictions: Map<String, String>,
  val processDefinitionKey: String,
  val payloadDescription: Set<String>,
  val action: (expression: String, payload: Map<String, Any>) -> Any
)

Expressions do not belong to tasks but to processes; therefore, the taskDefinitionKey is changed to processDefinitionKey. Additionally, the action takes the expression to be resolved and returns the result.
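A rough Java mirror of the proposed Kotlin command, to sketch how a client might subscribe; the process definition key, the expression syntax, and the in-line evaluation are placeholders for a real expression engine:

```java
import java.util.Map;
import java.util.Set;
import java.util.function.BiFunction;

// Hypothetical Java equivalent of the proposed SubscribeForExpressionsCmd.
public class SubscribeForExpressionsCmd {
    final Map<String, String> restrictions;
    final String processDefinitionKey;
    final Set<String> payloadDescription;
    final BiFunction<String, Map<String, Object>, Object> action;

    public SubscribeForExpressionsCmd(Map<String, String> restrictions,
                                      String processDefinitionKey,
                                      Set<String> payloadDescription,
                                      BiFunction<String, Map<String, Object>, Object> action) {
        this.restrictions = restrictions;
        this.processDefinitionKey = processDefinitionKey;
        this.payloadDescription = payloadDescription;
        this.action = action;
    }

    public static SubscribeForExpressionsCmd example() {
        return new SubscribeForExpressionsCmd(
            Map.of(),
            "order-process",                    // hypothetical process definition key
            Set.of("amount"),
            (expression, payload) ->            // placeholder: a real client would
                (Integer) payload.get("amount") > 100  // plug in an expression engine here
        );
    }
}
```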
