
zeebe-spec's Introduction

Zeebe Spec

Compatible with: Camunda Platform 8

A tool to run tests for BPMN processes on Zeebe.

The idea

Install

The tests can be run by calling the SpecRunner directly in code, or by using the JUnit integration.

JUnit Integration

  1. Add the Maven dependency (a Gradle equivalent is sketched below the XML snippet):
<dependency>
  <groupId>org.camunda.community</groupId>
  <artifactId>zeebe-spec-junit-extension</artifactId>
  <version>3.0.0</version>
  <scope>test</scope>
</dependency>
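
If you build with Gradle instead of Maven, the same coordinates can be declared as a test dependency, e.g. in build.gradle.kts (a sketch assuming the artifact is resolved from the same repository as the Maven dependency):

dependencies {
    // same coordinates as the Maven snippet above
    testImplementation("org.camunda.community:zeebe-spec-junit-extension:3.0.0")
}
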
  2. Put the spec and BPMN files in the resource folder (e.g. /src/test/resources/)

  3. Write a JUnit test class like the following (here in Kotlin):

package my.package

import org.camunda.community.zeebe.spec.SpecRunner
import org.camunda.community.zeebe.spec.assertj.SpecAssertions.assertThat
import org.junit.jupiter.params.ParameterizedTest
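// Note: @ZeebeSpecRunner, @ZeebeSpecSource, and ZeebeSpecTestCase come from the
// zeebe-spec-junit-extension dependency; their imports are omitted in this snippet.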

@ZeebeSpecRunner
class BpmnTest(private val specRunner: SpecRunner) {
 
    @ParameterizedTest
    @ZeebeSpecSource(specDirectory = "specs")
    fun `should pass all tests`(spec: ZeebeSpecTestCase) {

        val testResult = specRunner.runSingleTestCase(spec.testCase)

        assertThat(testResult).isSuccessful()
    }

}
  4. Run the JUnit test class

[Screenshot: JUnit test results]

Usage

Example spec with one test case:

YAML Spec

testCases:
  - name: fulfill-condition
    description: should fulfill the condition and enter the upper task
    instructions:
      - action: create-instance
        args:
          bpmn_process_id: exclusive-gateway
      - action: complete-task
        args:
          job_type: a
          variables: '{"x":8}'
      - verification: element-instance-state
        args:
          element_name: B
          state: activated

Kotlin Spec

val testSpecFulfillCondition =
    testSpec {
        testCase(
            name = "fulfill-condition",
            description = "should fulfill the condition and enter the upper task"
        ) {
            createInstance(bpmnProcessId = "exclusive-gateway")

            completeTask(jobType = "a", variables = mapOf("x" to 8))

            verifyElementInstanceState(
                selector = byName("B"),
                state = ElementInstanceState.ACTIVATED
            )
        }
    }
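
A Kotlin spec can be run with the same SpecRunner as in the JUnit example above. The snippet below is only a rough sketch: it assumes that runSingleTestCase also accepts test cases built with the testSpec DSL and that the spec object exposes them as a testCases collection; neither detail is spelled out in this README.

// imports as in the JUnit example above
@ZeebeSpecRunner
class KotlinSpecTest(private val specRunner: SpecRunner) {

    @org.junit.jupiter.api.Test
    fun `should pass the Kotlin spec`() {
        // the `testCases` accessor and direct DSL support are assumptions (see the note above)
        testSpecFulfillCondition.testCases.forEach { testCase ->
            val testResult = specRunner.runSingleTestCase(testCase)
            assertThat(testResult).isSuccessful()
        }
    }
}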

The Spec

A spec is written in a YAML text format or, alternatively, in Kotlin code. It contains the following elements:

  • testCases: a list of test cases; each test case contains the following elements:
    • name: the (short) name of the test case
    • description: (optional) an additional description of the test case
    • instructions: a list of actions and verifications that are applied in order

Actions

Actions drive the test case forward until the result is checked using the verifications. The following actions are available:

create-instance

Create a new instance of a process.

  • bpmn_process_id: the BPMN process id of the process
  • variables: (optional) initial variables/payload to create the instance with
  • process_instance_alias: (optional) an alias that can be used in following actions and verifications to reference this instance. This can be useful if multiple instances are created.
      - action: create-instance
        args:
          bpmn_process_id: demo
          variables: '{"x":1}'
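
In the Kotlin DSL this corresponds to the createInstance function from the example above; passing the initial variables as a map is an assumption, by analogy with the completeTask example:

createInstance(
    bpmnProcessId = "demo",
    variables = mapOf("x" to 1) // optional; the map form is assumed, analogous to completeTask
)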

complete-task

Complete tasks of a given type.

  • job_type: the type or identifier of the job/task
  • variables: (optional) variables/payload to complete the tasks with
      - action: complete-task
        args:
          job_type: a
          variables: '{"y":2}'

throw-error

Throw error events for tasks of a given type.

  • job_type: the type or identifier of the job/task
  • error_code: the error code that is used to correlate the error to a catch event
  • error_message: (optional) an additional message of the error event
      - action: throw-error
        args:
          job_type: b
          error_code: error-1
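
In the Kotlin DSL this corresponds to throwError, which appears with jobType and errorCode parameters in one of the issues further below; the optional error message is not shown there and is therefore omitted here:

throwError(jobType = "b", errorCode = "error-1")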

publish-message

Publish a new message event.

  • message_name: the name of the message
  • correlation_key: the key that is used to correlate the message to a process instance
  • variables: (optional) variables/payload to publish the message with
      - action: publish-message
        args:
          message_name: message-1
          correlation_key: key-1
          variables: '{"z":3}'
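
A Kotlin DSL equivalent would presumably follow the same naming pattern; publishMessage and its parameter names below are hypothetical, extrapolated from the other DSL functions in this README:

// hypothetical function and parameter names, following the DSL naming pattern
publishMessage(
    messageName = "message-1",
    correlationKey = "key-1",
    variables = mapOf("z" to 3)
)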

cancel-instance

Cancel/terminate a process instance.

  • process_instance: (optional) the alias of a process instance that is canceled. The alias is defined in the create-instance action. If only one instance is created then the alias is not required.
      - action: cancel-instance
        args:
          process_instance: process-1
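
A hypothetical Kotlin DSL form (neither the function name nor how process-instance aliases are passed in Kotlin is documented here):

// hypothetical: function name and alias parameter extrapolated from the YAML args
cancelInstance(processInstance = "process-1")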

await-element-instance-state

Await until an element of the process instance is in the given state.

  • state: the state of the element to wait for. Must be one of: activated | completed | terminated | taken
  • element_name: (optional) the name of the element in the process
  • element_id: (optional) as an alternative to the name, the element can be identified by its id in the process
  • process_instance: (optional) the alias of a process instance. The alias is defined in the create-instance action. If only one instance is created then the alias is not required.
      - action: await-element-instance-state
        args:
          element_name: B
          state: activated
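
A hypothetical Kotlin DSL form, following the naming of the other DSL functions (the function name is an assumption; the selector and state reuse the constructs from the Kotlin spec above):

// hypothetical function name
awaitElementInstanceState(selector = byName("B"), state = ElementInstanceState.ACTIVATED)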

Verifications

Verifications check the result of the test case after all actions are applied. The following verifications are available:

process-instance-state

Check if the process instance is in a given state.

  • state: the state of the process instance. Must be one of: activated | completed | terminated
  • process_instance: (optional) the alias of a process instance. The alias is defined in the create-instance action. If only one instance is created then the alias is not required.
      - verification: process-instance-state
        args:
          state: completed
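
A Kotlin DSL verification presumably exists as well; the function and enum names below are hypothetical, extrapolated from verifyElementInstanceState shown earlier:

// hypothetical names, following the pattern of verifyElementInstanceState
verifyProcessInstanceState(state = ProcessInstanceState.COMPLETED)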

element-instance-state

Check if an element of the process instance is in a given state.

  • state: the state of the element. Must be one of: activated | completed | terminated | taken
  • element_name: (optional) the name of the element in the process
  • element_id: (optional) as an alternative to the name, the element can be identified by its id in the process
  • process_instance: (optional) the alias of a process instance. The alias is defined in the create-instance action. If only one instance is created then the alias is not required.
      - verification: element-instance-state
        args:
          element_name: B
          state: activated

process-instance-variable

Check if the process instance has a variable with the given name and value. If the element name or id is set then it checks only for (local) variables in the scope of the element.

  • name: the name of the variable
  • value: the value of the variable
  • element_name: (optional) the name of the element in the process that has the variable in its scope
  • element_id: (optional) as an alternative to the name, the element can be identified by its id in the process
  • process_instance: (optional) the alias of a process instance. The alias is defined in the create-instance action. If only one instance is created then the alias is not required.
      - verification: process-instance-variable
        args:
          name: x
          value: '1'
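
A hypothetical Kotlin DSL counterpart (function and parameter names are extrapolations, not documented here):

// hypothetical: name/value parameters mirror the YAML args above
verifyProcessInstanceVariable(name = "x", value = "1")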

no-process-instance-variable

Check if the process instance has no variable with the given name. If the element name or id is set then it checks only for (local) variables in the scope of the element.

  • name: the name of the variable
  • element_name: (optional) the name of the element in the process that has the variable in its scope
  • element_id: (optional) as an alternative to the name, the element can be identified by its id in the process
  • process_instance: (optional) the alias of a process instance. The alias is defined in the create-instance action. If only one instance is created then the alias is not required.
      - verification: no-process-instance-variable
        args:
          name: y
          element_name: B

incident-state

Check if the process instance has an incident in the given state. If the element name or id is set then it checks only for incidents in the scope of the element.

  • state: the state of the incident. Must be one of: created | resolved
  • error_type: the type/classifier of the incident
  • error_message: (optional) the error message of the incident
  • element_name: (optional) the name of the element in the process that has the incident in its scope
  • element_id: (optional) as an alternative to the name, the element can be identified by its id in the process
  • process_instance: (optional) the alias of a process instance. The alias is defined in the create-instance action. If only one instance is created then the alias is not required.
      - verification: incident-state
        args:
          error_type: EXTRACT_VALUE_ERROR
          error_message: "failed to evaluate expression 'key': no variable found for name 'key'"
          state: created
          element_name: B
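
And a hypothetical Kotlin DSL counterpart of the incident verification (the function name and the incident-state enum are extrapolations):

// hypothetical names; byName(...) is the element selector shown earlier in this README
verifyIncidentState(
    selector = byName("B"),
    errorType = "EXTRACT_VALUE_ERROR",
    errorMessage = "failed to evaluate expression 'key': no variable found for name 'key'",
    state = IncidentState.CREATED
)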

zeebe-spec's People

Contributors

actions-user, celanthe, chaima-mnsr, dependabot-preview[bot], dependabot[bot], github-actions[bot], pihme, saig0

zeebe-spec's Issues

Capture and verify the task variables

Is your feature request related to a problem? Please describe.
I want to verify the variables a task is called with (i.e. a job worker).

Assume that the process instance contains some variables. These variables can be modified within the process using variable mappings. I want to make sure that the task worker gets the right variables.

It is different from the workflow-instance-variable verification because the task worker also gets variables from parent scopes.

Describe the solution you'd like
Provide a new verification task-variable to verify the variables a task is called with.

Parameters:

  • name: the name of the variable
  • value: the value of the variable
  • element_name: (optional) the name of the element in the workflow that has the variable in its scope
  • element_id: (optional) as an alternative to the name, the element can be identified by its id in the workflow
  • workflow_instance: (optional) the alias of a workflow instance. The alias is defined in the create-instance action. If only one instance is created then the alias is not required.

Either element_name or element_id must be defined.

Describe alternatives you've considered
No.

Additional context

Build a nice web application

Build a nice web application for users to create and run test cases.

  • create a new test
  • deploy required BPMN resources
  • see existing test cases
  • run one or all test cases
  • see the test results aggregated as a list
  • see the test result of a single test case visually on the workflow

As a test creator, I can't separately mock service tasks with the same Zeebe job type

Describe the bug
For example, I have a process with service tasks that share the same job type and an error-handling path (see the attached serviceTasksWithSameTypeAndErrorHandling.bpmn.zip).

If I test the happy path, it is pretty easy:

completeTask(jobType = "updateStatus")
// Note: a single completeTask(jobType = "updateStatus") would be enough because the Zeebe worker has already been created, but end users may not know the implementation details of completeTask
completeTask(jobType = "updateStatus")

But if I decide to test an error path, I get a flaky test because the job in the element with error handling could be activated by the worker from completeTask or by the one from throwError:

completeTask(jobType = "updateStatus")
// a worker with the same type is created without closing the previous one, which leads to the problem
throwError(jobType = "updateStatus", errorCode = "updateStatusError")
completeTask(jobType = "doSmthngWithErr") // from time to time this is never reached
// other stuff
verifications {
    elementInstanceState(
        selector = ElementSelector.byName("Do something with error"),
        state = ElementInstanceState.COMPLETED // sometimes fails
    )
}

To Reproduce
Steps to reproduce the behavior:

  1. Create or use the process from above
  2. Create a test with error handling verifications
  3. Obtain the test error

Expected behavior
I expect that such a test is never flaky.

Additional context
I see three options to fix it:

  1. One-time workers. As soon as the worker completes a job or throws an error for it, the worker closes. But this breaks backward compatibility in some cases (this is doubtful because such tests could just be false positives; I need you to argue with me on this point 🙂), and it may not be a 100% solution (for example, we complete the job and start closing the worker, and then the next job arrives; we could also implement a counter for such a scenario, but it becomes ugly 🙂).
  2. Add some sort of matcher or selector for completeTask. Again, as far as I can see, this is not a 100% fix, because in the worst-case scenario all jobs are activated by one worker, which never completes them (a job will be reactivated, but could again be taken by the "wrong" worker).
  3. Call the gRPC API instead of a Zeebe worker. This is close to the one-time worker, but uses the JobClient instead of the Zeebe worker. For example, as soon as completeJob is called, we create a single ActivateJobs command with exactly one job and complete it via the CompleteJob command. In this case, we could get a situation where the job is not yet ready for activation, but the ActivateJobs request has already been sent.

So, as you can see, I can't find a good solution. Maybe you can vote/help me, @saig0? 🙂

Assertj extension

Is your feature request related to a problem? Please describe.
As a user, I would be very happy to have an assertThat method to test the result of a spec run. Right now I have to copy-paste something like this:

Assertions.assertThat(testResult.success)
    .describedAs("%s%nDetails: %s", testResult.message, testResult.output)
    .isTrue()

Describe the solution you'd like
This project provides an assertThat method with a fluent API.

Describe alternatives you've considered
Probably this should be created in a separate project.

New test runner EZE (Embedded Zeebe Engine)

Is your feature request related to a problem? Please describe.
Running tests with Zeebe takes too long. Each test case starts testcontainers for Zeebe and ZeeQS, which takes ~10-15 seconds. The total time quickly grows to minutes when a few more test cases are added.

Describe the solution you'd like
Replace the Zeebe test-container with EZE. The embedded Zeebe engine takes only a few milliseconds to start.

Describe alternatives you've considered
Reduce the time by reusing the Zeebe environment. The downside is that there is no isolation between the test cases anymore.

Additional context
The test runner queries the data from ZeeQS to retrieve the state and history. Currently, ZeeQS is also started within a testcontainer. In order to reduce the time, we need to eliminate this factor, for example, by starting ZeeQS directly or by adding namespaces (i.e. multi-tenancy) to ZeeQS.

Related to/depends on camunda-community-hub/zeeqs#169.

Add precondition test for test environment

Is your feature request related to a problem? Please describe.
Currently, when the images used in the tests are not properly configured, you get lots of error messages that are hard to decipher.

Describe the solution you'd like
Add a simple precondition test:

  • Can the images be downloaded?
  • Can the images be started?
  • Can the Zeebe and ZeeQS ports be reached?
  • Is there communication between Zeebe and ZeeQS?

This test should run at the start and work like a precondition, i.e. if it fails, then all other tests are skipped (I think this should be possible with JUnit 5).

Upgrade our issue templates to use GitHub issue forms

Is your feature request related to a problem? Please describe.

GitHub has recently rolled out a public beta for their issue forms feature. This would allow you to create interactive issue templates and validate them 🤯.

Describe the solution you'd like

This repository currently uses the older issue template format. Your task is to create GitHub issue forms for this repository. We can use this repo's issue templates as a reference for this PR.

Add action to wait for process instance to be created

Is your feature request related to a problem? Please describe.
Currently I can only interact with process instances that were created by the test itself. I do not know about process instances that were created by the engine (e.g. event sub-processes, call activities).
What is needed is a way to:

  • wait until a process instance of a given ID has been created
  • assign some kind of (indexed) alias to identify the created process

Describe the solution you'd like
Something along those lines:

      - action: await-instance
        args:
          bpmn_process_id: sub-process
          workflow_instance_alias: C

This still doesn't address how to identify the third process instance being created, but it would be a step forward.

Describe alternatives you've considered

  • Add a service task at the beginning of a subprocess. The job activation would be a proxy for the workflow creation. However, I am not aware of any way to get the process id and feed it back to the test engine; also, it would not be clear at all in the test spec what is going on.
  • Some way of using an action to query the tree of nested processes and then, based on the query result, move forward; this didn't really make much sense, to be honest.

Generate visual test output

After a test run, I can see visually how the workflow instance was executed.

For example, by generating an HTML page that shows the workflow instance execution using bpmn-js and the test output.

Does not work with testcontainers version 1.16.0 or postgresql testcontainer 1.16.0

Describe the bug
In my Gradle project I am using postgresql:1.16.0. When I use bpmn-spec version 2.0.0, which uses testcontainers version 1.15.3, the tests fail; at the end of the stack trace it says:


java.lang.RuntimeException: Failed to start the Zeebe container
java.lang.RuntimeException: java.lang.RuntimeException: Failed to start the Zeebe container
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
	at java.base/java.util.concurrent.ForkJoinTask.getThrowableException(ForkJoinTask.java:600)
	at java.base/java.util.concurrent.ForkJoinTask.reportException(ForkJoinTask.java:678)
	at java.base/java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:737)
	at java.base/java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:173)
	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233)
	at java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
	at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:661)
	at io.zeebe.bpmnspec.runner.zeebe.ZeebeEnvironment.setup(ZeebeEnvironment.kt:42)
	at io.zeebe.bpmnspec.runner.zeebe.ZeebeTestRunner.beforeEach(ZeebeTestRunner.kt:29)
	at io.zeebe.bpmnspec.SpecRunner.runTestCase(SpecRunner.kt:73)
	at io.zeebe.bpmnspec.SpecRunner.runSpec(SpecRunner.kt:40)
	at fi.elisa.workflowenginepoc1.WorkflowTest.process starts and PAC get activated(WorkflowTest.kt:36)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
	at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.processAllTestClasses(JUnitPlatformTestClassProcessor.java:99)
	at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor$CollectAllTestClassesExecutor.access$000(JUnitPlatformTestClassProcessor.java:79)
	at org.gradle.api.internal.tasks.testing.junitplatform.JUnitPlatformTestClassProcessor.stop(JUnitPlatformTestClassProcessor.java:75)
	at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.stop(SuiteTestClassProcessor.java:61)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
	at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
	at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
	at com.sun.proxy.$Proxy2.stop(Unknown Source)
	at org.gradle.api.internal.tasks.testing.worker.TestWorker.stop(TestWorker.java:135)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
	at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:182)
	at org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:164)
	at org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:414)
	at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
	at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.RuntimeException: Failed to start the Zeebe container
	at io.zeebe.bpmnspec.runner.zeebe.ZeebeEnvironment.startZeebeContainer(ZeebeEnvironment.kt:61)
	at io.zeebe.bpmnspec.runner.zeebe.ZeebeEnvironment.access$startZeebeContainer(ZeebeEnvironment.kt:12)
	at io.zeebe.bpmnspec.runner.zeebe.ZeebeEnvironment$setup$1.invoke(ZeebeEnvironment.kt:40)
	at io.zeebe.bpmnspec.runner.zeebe.ZeebeEnvironment$setup$1.invoke(ZeebeEnvironment.kt:40)
	at io.zeebe.bpmnspec.runner.zeebe.ZeebeEnvironment.setup$lambda-0(ZeebeEnvironment.kt:42)
	at java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
	at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
	at java.base/java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:290)
	at java.base/java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:746)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
Caused by: org.testcontainers.containers.ContainerLaunchException: Container startup failed
	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:334)
	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:315)
	at io.zeebe.bpmnspec.runner.zeebe.ZeebeEnvironment.startZeebeContainer(ZeebeEnvironment.kt:56)
	... 14 more
Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:327)
	... 16 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:523)
	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:329)
	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
	... 17 more
Caused by: java.lang.RuntimeException: java.lang.IllegalArgumentException: Requested port (9600) is not mapped
	at org.rnorth.ducttape.timeouts.Timeouts.callFuture(Timeouts.java:68)
	at org.rnorth.ducttape.timeouts.Timeouts.doWithTimeout(Timeouts.java:60)
	at org.testcontainers.containers.wait.strategy.WaitAllStrategy.waitUntilReady(WaitAllStrategy.java:53)
	at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:923)
	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:466)
	... 19 more
Caused by: java.lang.IllegalArgumentException: Requested port (9600) is not mapped
	at org.testcontainers.containers.ContainerState.getMappedPort(ContainerState.java:153)
	at java.base/java.util.Optional.map(Optional.java:265)
	at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.waitUntilReady(HttpWaitStrategy.java:177)
	at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:35)
	at org.testcontainers.containers.wait.strategy.WaitAllStrategy.waitUntilNestedStrategiesAreReady(WaitAllStrategy.java:61)
	at org.testcontainers.containers.wait.strategy.WaitAllStrategy.lambda$waitUntilReady$0(WaitAllStrategy.java:54)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)


To Reproduce
Steps to reproduce the behavior:
I think you can just use testcontainers 1.16.0 and see if it works. If you cannot reproduce it, I can create a sample.

Expected behavior
Works without errors.

Rename "complete-task" to "start-worker"

Is your feature request related to a problem? Please describe.
The action "complete-task" is misleading. It sounds as if I could complete a single activated job. However, what actually happens is that a worker is started that waits for jobs to be activated and then completes all of them.

Describe the solution you'd like
I would like to have a name that is clear.

Config file not necessary when exporter is enabled by default

Is your feature request related to a problem? Please describe.
Currently, this repo has a hidden dependency on zeebe-hazelcast-exporter: it assumes a certain name and location for the exporter jar. This coupling between the two repos is not necessary if the zeebe-hazelcast-exporter has the exporter enabled by default.

Describe the solution you'd like
This repo should use the Zeebe with Hazelcast exporter image as is.

Describe alternatives you've considered

  • I thought about whether this repo could assemble the Zeebe with Hazelcast exporter but found this too complicated as well

Rename project to zeebe-spec

We should rename the project from bpmn-spec to zeebe-spec.

The main reasons are:

  • there is no alternative test runner to Zeebe after more than 2 years
  • allows being more specific to Zeebe for spec actions and verifications
  • allows covering not only BPMN but also DMN (in the future)
  • allows reducing the abstraction layers and simplifying the code

The renaming of the project would lead to a new major version.

Allow to parameterize Zeebe image to use

Is your feature request related to a problem? Please describe.
Currently the Zeebe image, against which the tests are run, is hard coded.

Describe the solution you'd like
It should be possible to specify the image of Zeebe to test against.

Describe alternatives you've considered

  • I considered whether this repo could also combine the Zeebe image with the Hazelcast exporter, but that would expand the responsibility of this module.

Job workers don't stop in reusable Zeebe environment

Describe the bug
With the Zeebe worker, I can use a reusable environment to reduce the test execution time.

In a specification, I can have test cases that define a complete-task for the same task but with different variables. Currently, it can happen that the task is completed with the variables of the previous test case.

To Reproduce

Run the following spec in a reusable environment (e.g. io.zeebe.bpmnspec.junit.BpmnSpecExtensionReuseEnvironmentTest)

resources:
  - exclusive-gateway.bpmn

testCases:
  - name: condition-flow
    description: the condition is fulfilled
    actions:
      - action: create-instance
        args:
          bpmn_process_id: exclusive-gateway
          workflow_instance_alias: wf-1
      - action: complete-task
        args:
          job_type: a
          variables: '{"x":8}'

    verifications:
      - verification: element-instance-state
        args:
          element_name: B
          state: activated
          workflow-instance: wf-1

  - name: default-flow
    description: take the default flow
    actions:
      - action: create-instance
        args:
          bpmn_process_id: exclusive-gateway
          workflow_instance_alias: wf-2
      - action: complete-task
        args:
          job_type: a
          variables: '{"x":3}'

    verifications:
      - verification: element-instance-state
        args:
          element_name: C
          state: activated
          workflow-instance: wf-2

Expected behavior
The task worker from the previous test case is closed and doesn't influence the next test case.

Additional context

Establish EZE as the default test runner

A new EZE test runner was added with #212. It is much faster than the previous Zeebe test runner.

Since the difference in the spec execution time is so significant (< 1s vs 25s), we should make the EZE test runner the default runner.

We could also replace the Zeebe test runner completely. There is not much reason to use the Zeebe runner anymore. However, we should keep the option to use the previous runner (i.e. testcontainers with Zeebe + ZeeQS image) if needed.

In combination with #215, we could reduce more abstractions by removing the Zeebe runner. As a result, we would have only one test runner that can be configured and that is integrated into the core module directly. The JUnit integration could also benefit from only one runner.

Add action "wait-for-state-transition"

Is your feature request related to a problem? Please describe.
Not really a problem, more an inconvenience. If I wanted to write a test that waits until a certain timer has fired twice, currently this would mean:

  • await-element-instance-state - activated
  • await-element-instance-state - completed
  • await-element-instance-state - activated
  • await-element-instance-state - completed

This makes it difficult to understand the intention of the test.

A shorthand action could be introduced to first wait for one state, then for another state. The shorthand could also check that the transition makes sense (e.g. activated to completed makes more sense than completed to activated).

Describe the solution you'd like
I'd like to have an action with the following spec:

      - action: await-element-instance-state-transition
        args:
          element_name: B
          from-state: activated
          to-state: completed

Additional context
Not sure how relevant this will be for the tests, but this could also be the place to specify that the test should wait for the n-th state transition.

Support invoking as a CLI

Is your feature request related to a problem? Please describe.
I am frustrated that my BPMN definitions need to live as JUnit resources. We are interested in building an independent SDLC for our BPMN deployments, but to reuse this testing framework we'd have to modify the JUnit tests as well.

Describe the solution you'd like
I'd like a separate CLI that takes not only the BPMN resources but also all the connectivity information as CLI arguments, so that I could execute it as part of a pipeline like GitHub Actions or GitLab CI.

Describe alternatives you've considered
We've considered just housing all of our BPMN in a Kotlin project and having our BPMN deployment look at a resources directory.

Additional context
N/A
