
Workflow API

Dependencies:

node -v
v18.12

npm -v
9.2.0

Environment variables

Add a .env file with the following variables:

  • NODE_ENV (suggested value = docker)
  • KOA_LOG_LEVEL (error, warn, info, http, verbose, debug, silly)
  • PORT (default=3000)
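A minimal .env for local development might look like this (the values below are illustrative suggestions, not requirements):

```
NODE_ENV=docker
KOA_LOG_LEVEL=info
PORT=3000
```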

DATABASE CONNECTION

  • KNEX_ENV (test, docker, dockerLocal, prod)
  • POSTGRES_PORT
  • POSTGRES_HOST
  • POSTGRES_DATABASE
  • POSTGRES_USER
  • POSTGRES_PASSWORD
  • DB_MAX_POOL_CONNECTION (default=10)
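As a sketch of how these variables typically come together (this mirrors a knex-style configuration and is an illustration, not the app's actual config file):

```javascript
// Illustrative knex-style connection config built from the POSTGRES_*
// variables above; the fallback values are assumptions for local development.
const dbConfig = {
  client: 'pg',
  connection: {
    host: process.env.POSTGRES_HOST || 'localhost',
    port: Number(process.env.POSTGRES_PORT || 5432),
    database: process.env.POSTGRES_DATABASE,
    user: process.env.POSTGRES_USER,
    password: process.env.POSTGRES_PASSWORD,
  },
  // DB_MAX_POOL_CONNECTION caps the connection pool (default=10).
  pool: { max: Number(process.env.DB_MAX_POOL_CONNECTION || 10) },
};

console.log(dbConfig.client, dbConfig.pool.max);
```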

TOKEN CONFIGURATION

The default configuration uses a self-generated JWT token. If an external token is to be used, the JWT_ variables must be set.

  • JWT_KEY (default=1234)
  • JWT_ALG (default=HS256, if set to RS256, the application will convert the JWT_KEY to a public certificate.)
  • JWT_PASSTHROUGH (boolean, default=true)
  • JWT_EXTRA_KEYS (optional, defines extra properties from the token payload that should be sent to the process' actor_data)
  • JWT_PATH_ACTOR_ID (default=actor_id, specifies the path to the actor id in the token payload)
  • JWT_PATH_CLAIMS (default=claims; the claims value should always be an array)
  • JWT_PATH_SESSION_ID (optional, default=session_id)

MQTT CONFIGURATION

  • MQTT (bool)
  • MQTT_HOST
  • MQTT_PORT
  • MQTT_PATH
  • MQTT_PROTOCOL
  • MQTT_USERNAME (optional, required for wss connections)
  • MQTT_PASSWORD (optional, required for wss connections)
  • MQTT_NAMESPACE (if present, this string will be prepended to any topic published)
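The MQTT_NAMESPACE behavior can be pictured with a small hypothetical helper (buildTopic is not a real function from this codebase):

```javascript
// If MQTT_NAMESPACE is set, it is prepended to every published topic.
function buildTopic(topic, namespace = process.env.MQTT_NAMESPACE) {
  return namespace ? `${namespace}${topic}` : topic;
}

console.log(buildTopic('/logs', 'tenant-a')); // tenant-a/logs
console.log(buildTopic('/logs', ''));         // /logs
```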

AMQP CONFIGURATION

  • AMQP (bool)
  • BROKER_HOST
  • BROKER_QUEUE
  • BROKER_USERNAME
  • BROKER_PASSWORD

KAFKA CONFIGURATION

  • KAFKA (bool)
  • KAFKA_BOOTSTRAP_SERVER
  • KAFKA_SEC_PROTOCOL
  • KAFKA_SASL_MECHANISMS
  • KAFKA_CLUSTER_API_KEY
  • KAFKA_API_SECRET
  • KAFKA_SESSION_TIMEOUT

EVENT NODES CONFIGURATION (also WEM CONFIGURATION)

  • WORKFLOW_EVENTS_BROKER (values: KAFKA|MQTT|AMQP; if not defined, events won't be sent)
  • WORKFLOW_EVENTS_NAMESPACE

CHOOSING BROKERS TO USE (AMQP OR MQTT)

  • ACTIVITY_MANAGER_BROKER
  • PROCESS_STATE_BROKER
  • ENGINE_LOGS_BROKER

ACTIVITY_MANAGER CONFIGURATION

  • ACTIVITY_MANAGER_SEND_ONLY_ON_CREATION (bool)

ENGINE CONFIGURATION

  • ENGINE_LOG_LEVEL (default=error)
  • ENGINE_HEARTBEAT (true/false string, turns the engine heartbeat on or off)
  • HEART_BEAT (integer, default=1000, interval between beats in ms)
  • PUBLISH_STATE_EVENTS (true/false string, enables states to be published to message broker)
  • PUBLISH_ENGINE_LOGS (true/false string, enables engine logs to be published to message broker)
  • PUBLISH_SERVER_LOGS (true/false string, enables api logs to be published to message broker)
  • MAX_STEP_NUMBER (integer, maximum number of steps for a process)
  • MAX_CONTENT_LENGTH (integer, max content length for response on http node calls)
  • MAX_BODY_LENGTH (integer, max body length for response on BasicAuth nodes)
  • HTTP_TIMEOUT (integer, timeout in ms for BasicAuth nodes)
  • TIMER_BATCH (integer, default=40)
  • ORPHAN_BATCH (integer, default=40)

TIMER CONFIGURATION

Timer management can be handled outside the heartbeat by an external Timer Worker that runs timers from a Redis queue using BullMQ. To enable this option, the engine must be configured to publish timers to the queue through the 3 variables that configure BullMQ.

  • TIMER_QUEUE (string)
  • TIMER_HOST (URL)
  • TIMER_PORT
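Under the assumption that these map to BullMQ-style connection options, the wiring might look like this (a sketch, not the engine's actual code):

```javascript
// Hypothetical mapping of the TIMER_* variables to a BullMQ-style
// queue name plus Redis connection options; fallbacks are assumptions.
const timerQueueConfig = {
  name: process.env.TIMER_QUEUE || 'timers',
  connection: {
    host: process.env.TIMER_HOST || 'localhost',
    port: Number(process.env.TIMER_PORT || 6379),
  },
};

console.log(timerQueueConfig.name, timerQueueConfig.connection.port);
```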

HEALTHCHECK

The healthcheck route (GET / or GET /healthcheck) checks the router and db connection by default. The server can be configured to evaluate the number of ready timers (timers expired but not yet triggered) to assess engine health. This setting should not be enabled if the timers are being handled by the timer worker.

  • MAX_READY_TIMERS (optional, integer, defines the number of ready timers allowed before the server is declared unhealthy)
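The health decision described above can be sketched as follows (isHealthy is a hypothetical illustration, not the server's actual implementation):

```javascript
// Unhealthy once the count of expired-but-untriggered timers exceeds
// the configured limit; when the variable is unset, the check is disabled.
function isHealthy(readyTimers, maxReadyTimers) {
  if (maxReadyTimers === undefined) return true; // check disabled
  return readyTimers <= maxReadyTimers;
}

console.log(isHealthy(5, 10));  // true
console.log(isHealthy(20, 10)); // false
```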

MONITORING

  • OTEL_ENABLED (bool, activates Open Telemetry config)
  • OTEL_SERVICE_NAME (string)
  • OTEL_COLLECTOR_URL

NEW RELIC CONFIGURATION

  • NEW_RELIC_ENABLED (bool, activates New Relic config for direct or OTEL Monitoring.)
  • NEW_RELIC_API_KEY (required if New Relic is enabled)
  • NEW_RELIC_NO_CONFIG_FILE (recommended=true)
  • NEW_RELIC_LOG (recommended=stdout)
  • NEW_RELIC_LOG_ENABLED (recommended=true)
  • NEW_RELIC_LOG_LEVEL (recommended=info)
  • NEW_RELIC_DISTRIBUTED_TRACING_ENABLED (recommended=true)
  • NEW_RELIC_APPLICATION_LOGGING_ENABLED (recommended=true)
  • NEW_RELIC_APP_NAME

POSTMAN

For Newman test runs

  • POSTMAN_API_KEY
  • POSTMAN_TEST_COLLECTION
  • POSTMAN_ENVIRONMENT

Run the project on docker:

To run on Docker, just run

docker-compose up

Make sure ports 3000 and 5432 are free to use on your localhost.

To run the tests, you may use the command below:

docker-compose run -T app ./scripts/run_tests.sh

For Windows users, comment out the script command in docker-compose.yml and use the bash one.

Exploring and executing the API

To explore all possible routes, go to http://localhost:3000/swagger

If you change the base URL, change it in the openapi3.yaml file as well.

If you wish to use a third-party program, such as Insomnia or Postman, just import the openapi3.yaml file and all the routes will be shown. If you use Postman, I would recommend changing the Folder organization to Tags after selecting the file to be imported.

Logging

There are 2 sources of logs: the engine and the API itself. Both use the winston library to manage formats and transports.

The events emitted by the engine can be logged by the engine itself or managed by the API app.

You can set the engine log level using the ENGINE_LOG_LEVEL variable. At the moment, you cannot turn engine logs off completely.

The log levels use the scale below:

silly -> debug -> verbose -> http -> info -> warn -> error

The API app uses the same log levels, but they are managed by the KOA_LOG_LEVEL variable, which defaults to info.
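The scale above follows winston's convention: a configured level admits every message of equal or higher severity. A minimal sketch of that filtering rule:

```javascript
// winston orders levels from highest severity (error) to lowest (silly);
// a message is emitted when its level is at least as severe as the
// configured one.
const levels = ['error', 'warn', 'info', 'http', 'verbose', 'debug', 'silly'];

const shouldLog = (msgLevel, configuredLevel) =>
  levels.indexOf(msgLevel) <= levels.indexOf(configuredLevel);

console.log(shouldLog('error', 'info')); // true
console.log(shouldLog('debug', 'info')); // false
```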

Notice that with the default configuration, error events are logged twice.

Engine events will also be sent to the /logs topic on MQTT if both MQTT and PUBLISH_ENGINE_LOGS are set to true.

MQTT

The default compose file will set up a postgres database, a HiveMQ MQTT server and the app itself.

During app initialization, if MQTT is true, flowbuild will try to connect to the MQTT server.

Be sure to provide the following parameters as environment variables:

  • MQTT_HOST (localhost if you are running on docker)
  • MQTT_PORT (8000)
  • MQTT_PATH (/mqtt)

The following topics will be used:

  • /logs for engine logs
  • /process/:processId/state for each process state created
  • /process/:processId/am/create for each activity_manager created
  • /actor/:actorId/am/create for each activity_manager created, if an actor_id property exists in the activity_manager input
  • /session/:sessionId/am/create for each activity_manager created, if a session_id property exists in the activity_manager input
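The topic patterns above can be summarized with hypothetical helpers (for illustration only; the real publisher lives inside the app):

```javascript
// Topic builders mirroring the patterns listed above.
const topics = {
  engineLogs: () => '/logs',
  processState: (processId) => `/process/${processId}/state`,
  processAmCreate: (processId) => `/process/${processId}/am/create`,
  actorAmCreate: (actorId) => `/actor/${actorId}/am/create`,
  sessionAmCreate: (sessionId) => `/session/${sessionId}/am/create`,
};

console.log(topics.processState('abc-123')); // /process/abc-123/state
```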

Tests

You can run unit tests by running npm run tests.

If you would like to test the routes themselves, you can use Newman by running the command below.

newman run postman/newman/tests.postman_collection.json -e postman/newman/local_environment.json

Bibliography

how to prepare for windows

how to install docker on windows
how to install WSL2

how to prepare for linux

how to install docker on linux distros

workflow-api's People

Contributors

bot-flowbuild, dependabot[bot], felipegdm, flowbuild-bot, gharamura, imagure, jorgehugo95, junrongzhu1, kaio-fdte, matheus-fdte, mrgalopes, pcasari, pedropereiraassis


workflow-api's Issues

Lane rule validation debugging

Issue: The message [LANE] "ERROR WHILE EVALUATING LANE RULE!" is not very descriptive in case of an exception, and the failure is silent when the rule evaluates to false.

Known: Flowbuild works with events. An event is composed of [event, message, data]. When the rule function throws an exception, the event is LANE.ERROR, the message is [LANE] "ERROR WHILE EVALUATING LANE RULE!" and data is the error stack.

Suggestion: Implement a Flowbuild event listener, capture this event and save its data. To capture the data of this event, the API needs to be changed to listen to it and act on it. There are 2 alternatives; create a listener in the API that either:

  1. prints the event data to the console;
    
  2. persists it to the database.
    

Add (curr_node_id, bag, error) to process table

There is no community available except the FlowBuild documentation.

What would you like to be added:
Columns (curr_node_id, bag, error) to process table

Why is this needed:
Avoids join process_state table to process. It helps Analytics.

  • [ ] does this issue need a docs update? If yes, where?
    Wherever the database model documents are.

  • [ ] does this issue interact with a plugin or application? If yes, specify.
    No.

Anything else we need to know:
No

Add (prev_node_id, flow_path) to process table

Related to this issue: #41

What would you like to be added:
Previous node_id on the current flow path [String] and the flow path (either a json with template {'$step_number': '$node_id'} or an ordered Array[node_id]; the former is more robust than the latter) to the process table

Why is this needed:

  • Debug: it is easier than joining or filtering with the process_state table;

  • Analytics: it is easier than joining or filtering with the process_state table;

  • [ ] does this issue need a docs update? If yes, where?
    Wherever the Workflow Database Model documentation is.

  • [ ] does this issue interact with a plugin or application?
    Workflow Database Model: create migrations and update the table

The automated release is failing 🚨

🚨 The automated release from the master branch failed. 🚨

I recommend you give this issue a high priority, so other packages depending on you can benefit from your bug fixes and new features again.

You can find below the list of errors reported by semantic-release. Each one of them has to be resolved in order to automatically publish your package. I’m sure you can fix this 💪.

Errors are usually caused by a misconfiguration or an authentication problem. With each error reported below you will find explanation and guidance to help you to resolve it.

Once all the errors are resolved, semantic-release will release your package the next time you push a commit to the master branch. You can also manually restart the failed CI job that runs semantic-release.

If you are not sure how to resolve this, here are some links that can help you:

If those don’t help, or if this issue is reporting something you think isn’t right, you can always ask the humans behind semantic-release.


Cannot push to the Git repository.

semantic-release cannot push the version tag to the branch master on the remote Git repository with URL https://x-access-token:[secure]@github.com/flow-build/workflow.git.

This can be caused by:


Good luck with your project ✨

Your semantic-release bot 📦🚀

Route to abort processes by workflow_name and account_id

What would you like to be added:

A route that receives workflow_name and account_id as input and aborts all processes related to those keys

Why is this needed:

If I have an application with a lot of pending processes that I intend to abort, I have to abort them manually one by one.

  • [ ] does this issue need a docs update? If yes, where?
    Routes documentation

  • [ ] does this issue interact with a plugin or application? If yes, specify.
Anything else we need to know:

Fix node type validation for a Flow Node

What happens:
When writing a node spec for a Flow Node, if the 'type' key is written as 'flow', publishing the workflow fails. If the 'type' key is 'Flow', it is published correctly.

What is expected:
The 'type' key should work correctly whether its value is 'flow' or 'Flow'.

How to reproduce:
{ "name": "TESTE", "description": "teste", "blueprint_spec": { "lanes": [ { "id": "1", "name": "the_only_lane", "rule": [ "fn", [ "&", "args" ], true ] } ], "nodes": [ { "id": "START-PROCESS", "type": "Start", "name": "Start node", "parameters": { "input_schema": {} }, "next": "FLOW-TEST", "lane_id": "1" }, { "id": "FLOW-TEST", "name": "Flow-test", "next": { "sucesso": "END", "default": "END" }, "type": "flow", "lane_id": "1", "parameters": { "input": { "mensagem": { "$ref": "result.data.status" } } } }, { "id": "END", "name": "END", "next": null, "type": "Finish", "lane_id": "1" } ], "prepare": [], "environment": {}, "requirements": [ "core" ] } }

[Documentation] Blueprint publish with timeout

Hi,

I followed the tutorial steps (here) to the letter. My brain compiler stopped. I performed the following actions:

  1. Create a blueprint first_bp.json with the example here in $BP_DIR;
  2. Open a terminal, type cd $BP_DIR and press enter;
  3. Edit the command below with the token generated in the token step and the blueprint from step 1;
  curl --location --request POST '3.82.154.55:3000/workflows' \
--header 'content: application/json' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer {{my token}}' \
--data-raw '{my blueprint}'
  4. It times out;

I suspect it is a silly mistake on my part. Here are some possibilities:

  1. {{my token}} must be between single or double quotes;
  2. The host should be "localhost" instead of "3.82.154.55:3000/".

Any help is welcome.

Node docker container won't find postgres

What happened:
When running docker-compose up or docker-compose run -T app ./scripts/run_tests.sh, the node container can't connect to the postgres container. Because of this, the tests won't run, and database-related operations, such as migrating or seeding, won't work. The containers start normally and postgres is accessible from the host machine.
What you expected to happen:
The node container should be able to connect to postgres.
How to reproduce it (as minimally and precisely as possible):
git clone
cd workflow
docker-compose up or docker-compose run -T app ./scripts/run_tests.sh
Anything else we need to know?:
Adding POSTGRES_HOST=postgres to .env.docker seems to fix this.
Environment:

  • version:
  • OS installed on: Manjaro Linux
  • User OS & Browser or mobile:
  • Plugins:
  • Others:
