
Pipelines

Pipeline UI screenshot

Pipelines is a simple tool with a web UI for managing and running tasks. Tasks can be run manually through the web UI or triggered automatically via webhooks.

Pipelines is composed of three components:

  • Web UI : User interface that lets users run tasks, inspect logs and view task details. It also supports username/password authentication.
  • Pipelines API : The backend for the Web UI and webhooks.
  • Pipelines library : The core logic for running pipelines, such as reading task definitions, handling logging and executing the pipelines.

Pipelines is primarily developed to run on Linux / macOS. Windows support is not available at the moment.

Installation

Requirements:

  • python 3.6 (other versions not tested)
  • pip

Note for Python 2.7:

We dropped support for Python 2.7 as it reached EOL. If you still want to use Pipelines in a legacy environment, use tag 0.0.15 or the legacy-for-python2 branch.

Base install

pip install pipelines

Or get the latest dev version from GitHub and run pip install . from within the cloned repo, or install directly from git: pip install git+https://github.com/Wiredcraft/pipelines@master.

Configuration

Pipelines runs solely on files; no database is currently required. All the pipelines, the logs of each run and various temporary files are stored under the workspace folder.

The workspace is a folder that must be specified when running Pipelines. Create it first:

mkdir ~/pipelines_workspace

Drop your pipeline files (see format below) directly at the root of this folder.
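
As an illustration, a workspace might end up looking like this (file names are hypothetical; per-run logs and temporary files are created by Pipelines itself):

~/pipelines_workspace/
├── deploy-app.yaml       # a pipeline definition (see format below)
├── run-tests.yaml        # another pipeline definition
└── ...                   # per-run logs and temp files created by Pipelines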

Run standalone

Start the API with the following:

pipelines server --workspace ~/pipelines_workspace --username admin --password admin

You may want to specify a different binding IP address (default: 127.0.0.1) or a different port (default: 8888). Refer to pipelines --help for additional parameters.

You can now access pipelines at http://localhost:8888
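
For example, to listen on all interfaces on a different port (a sketch; the --host and --port flag names are assumptions, confirm with pipelines --help):

pipelines server --workspace ~/pipelines_workspace \
    --host 0.0.0.0 --port 9000 \
    --username admin --password admin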

How to run as a daemon

Create a dedicated user to run pipelines

# Create a pipelines user
useradd -m -d /var/lib/pipelines -s /bin/false pipelines

# Create the workspace folder (optional)
mkdir /var/lib/pipelines/workspace
chown -R pipelines: /var/lib/pipelines

# Create a SSH key pair (optional)
sudo -u pipelines ssh-keygen

You may want to rely on supervisord to run the API.

# Ubuntu / Debian
apt-get install supervisor

# CentOS / RedHat (to confirm)
yum install supervisord

Copy and adapt the config file from etc/supervisor/pipelines.conf to /etc/supervisor/.
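
For reference, a minimal sketch of such a supervisord program entry (the binary path and log locations are assumptions; adapt to your setup):

[program:pipelines]
command=/usr/local/bin/pipelines server --workspace /var/lib/pipelines/workspace --username admin --password admin
directory=/var/lib/pipelines
user=pipelines
autostart=true
autorestart=true
stdout_logfile=/var/log/pipelines.out.log
stderr_logfile=/var/log/pipelines.err.log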

# Update and reload supervisord
supervisorctl reread
supervisorctl update
supervisorctl start pipelines

Access the web interface at http://localhost:8888

Additionally, you may want to use nginx as a reverse proxy. See the sample config in etc/nginx.
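
As an illustration, a minimal reverse proxy block could look like the following, assuming Pipelines listens on the default 127.0.0.1:8888 (the server_name is a placeholder):

server {
    listen 80;
    server_name pipelines.example.com;

    location / {
        proxy_pass http://127.0.0.1:8888;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}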

Authentication

Static authentication

You can define a static admin user by specifying the following options when running pipelines:

--username ADMIN_USER
--password ADMIN_PASS

GitHub OAuth

This is an experimental feature.

You can add GitHub OAuth support to allow teams to access Pipelines. You will need to set the OAuth app credentials via environment variables and use the --github-auth option to limit access to specific teams.

To get your OAuth key and secret:

  • Register a new application on GitHub: https://github.com/settings/applications/new
  • The only field on that form that matters is the Authorization callback URL. It should point to your Pipelines instance; for example, if you run it locally it should be http://localhost:8888/ghauth. The last part (/ghauth) always stays the same.
  • Copy the Client ID and Client Secret from that page.

To start the Pipelines server with GitHub OAuth enabled:

GH_OAUTH_KEY=my_oauth_app_key \
GH_OAUTH_SECRET=my_super_secret \
pipelines server [--options] --github-auth=MY_ORG/MY_TEAM[,MY_ORG/ANOTHER_TEAM]

Note: if you use GitHub OAuth, you will not be able to use static authentication.

Pipelines file format

The pipeline definition file uses YAML syntax; a few examples are shown below. Pipeline files are meant to be put at the root of your workspace.

Simple example

This is a very basic pipeline definition. Save it in your workspace as a .yaml file (e.g. WORKSPACE/example-pipeline.yaml). It does... nothing really useful, to be honest.

---
# Pipeline definitions are standard yaml and you can include comments inside

# Variables are exposed to all actions through {{ varname }} syntax.
vars:
    code_branch: dev

# Triggers define the automated ways to run the task. In addition to manual execution
# through the UI, only webhook is supported for now.
triggers:
    - type: webhook

# Actions are the steps that are run for this pipeline. The default action plugin is bash,
# but you can use others by defining the "type" field.
actions:
    - 'echo "Starting task for {{ code_branch }}"'
    - name: 'My custom name step'
      type: bash
      cmd: "echo 'less compact way to define actions'"
    - 'ls -la /tmp'

Vars

The vars section of the pipeline definition defines variables that will then be available in any of the actions.

vars:
    my_var: something

actions:
    - echo {{ my_var }}

You can then use the variables as seen above.

Note:

  • You may have to quote (") your vars to keep the YAML valid.

Prompts

You can prompt users to manually input fields when they run the pipeline through the web UI. To do this, add a prompt section to your pipeline definition. The prompt fields will override the variables from the vars section.

You can alternatively provide a list of acceptable values; the prompt will then appear as a select field and let you choose from the available values.

vars:
    # This is the default value when triggered and no prompt is filled (e.g. via webhook)
    my_var: default_no_prompt

prompt:
    # This is the default value when triggered via the web UI
    my_var: default_with_prompt

    # This will appear as a select field
    my_var_from_select:
        type: select
        options:
            - value1
            - value2

actions:
    # This will display:
    #    "default_no_prompt" when called via webhook
    #    "default_with_prompt" when called via the UI, keeping the default
    #    "other" when called via the UI and "other" is entered by the user
    - echo {{ my_var }}

    # Depending on the selected value, will display value1 or value2
    - echo {{ my_var_from_select }}

Actions

Default actions use the bash plugin and execute each entry as a shell command.

Other actions can be used by specifying another type. Supported types currently are:

  • bash: run a bash command.
  • python: run an inline script or a Python script file, optionally inside a virtualenv.
  • slack: send a message to Slack.

bash

See example above.

python

The python plugin allows you to run Python scripts or inline Python code.

actions:
  - type: python
    script: |
      import json
      a = {'test': 'value', 'array': [1,2,3]}
      print(json.dumps(a, indent=2))
  - type: python
    virtualenv:  /opt/venvs/my_env
    file: '/tmp/some_script.py'

Explanation of the fields:

  • script: inline Python code to run with the Python interpreter.
  • file: path of a Python script to run.
  • virtualenv: run the Python code (inline or file) inside a virtualenv.

Note:

  • The virtualenv folder or file path must exist on the server. It is currently resolved relative to the CWD the Pipelines API / UI is running from.

slack

The slack plugin allows sending messages to Slack (e.g. pipeline execution status).

vars:
    slack_webhook: https://hooks.slack.com/services/SOME/RANDOM/StrIng
    name: some_name

actions:
    - type: slack
      message: 'Deployment finished for project {{ name }}.'
      always_run: true

Explanation of fields:

  • type: tells Pipelines to execute the action through the slack plugin.
  • always_run: ensures the action runs every time, even if a previous action failed.
  • message: the message to send to Slack.

Note:

  • The slack plugin requires a slack_webhook variable defined in the vars section of the pipeline.

Slack webhook URLs are defined via the Incoming WebHooks app (see the Slack API documentation).

Triggers

Webhooks

If you want to run your pipeline by triggering it through a webhook, you can enable it in the triggers section.

triggers:
    - type: webhook

If you open the web UI, you can see the webhook URL that was generated for this pipeline in the "Webhook" tab. You can, for example, configure a GitHub repository to call this URL after every commit.

You can access the content of the webhook payload in the actions via the webhook_content variable, e.g. echo {{ webhook_content.commit_id }}. A full example call is sketched after the notes below.

Note:

  • You need to send the message via POST as application/json Content-Type.
  • Documentation is coming to explain how to use the content of the data sent through the hook.
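
Putting the notes together, a hedged example of triggering such a webhook with curl (the URL path and payload are illustrative; copy the exact webhook URL from the "Webhook" tab):

curl -X POST \
     -H 'Content-Type: application/json' \
     -d '{"commit_id": "abc123"}' \
     http://localhost:8888/api/webhook/<webhook-id>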

Advanced Templates

Pipelines uses Jinja2 for variable replacement. You can use the whole set of built-in features of the Jinja2 engine to perform advanced operations.

prompt:
    stuff:
        type: select
        options:
            - good
            - bad

actions:
    - name: Print something
      type: bash
      cmd: |
          {% if stuff == 'good' %}
            echo "Do good stuff..."
          {% else %}
            echo "Do not so good stuff..."
          {% endif %}

    - name: Use builtin filters
      type: bash
      # Will display 'goose' or 'base'
      cmd: echo {{ stuff | replace('d', 'se') }}
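
Jinja2 loops can be used the same way; a minimal sketch, assuming list-valued vars are passed through to the Jinja2 context as sequences:

vars:
    servers:
        - web1.example.com
        - web2.example.com

actions:
    - name: Ping every server
      type: bash
      cmd: |
          {% for server in servers %}
          ping -c 1 {{ server }}
          {% endfor %}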

Dirty line by line setup

TODO: Make a real setup script / one-liner script ... and not Debian only ...

  • apt-get update
  • apt-get upgrade
  • apt-get install python-pip git
  • pip install virtualenv
  • virtualenv pipelines
  • source pipelines/bin/activate
  • pip install pipelines
  • mkdir ~/pipelines_workspace
  • pipelines server --workspace ~/pipelines_workspace --username admin --password admin

Docker

Note: Not heavily tested.

docker run -d boratbot/pipelines
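
You will likely want to publish the port and mount a workspace; a sketch, assuming the image listens on 8888 and reads its workspace from /workspace (both are assumptions; see the "Dockerize pipelines" issue below):

docker run -d \
    -p 8888:8888 \
    -v ~/pipelines_workspace:/workspace \
    boratbot/pipelines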

Roadmap

No definitive roadmap for the moment; we are mainly focusing on keeping a lean code base (heavy refactoring to come).

Among the possible features:

  • Improved web UI & features
  • Better webhook management
  • Better management of the tasks
  • CLI
  • Toolbar
  • Improved Auth
  • etc.

Contributors

dependabot[bot], ecutdavid, guinan34, hunvreus, juhas, kaleocheng, pallxk, paulvollmer, sp3c73r2038, woodpig07, xavierchow, xuqingfeng, zbal


Issues

Bug: Last log lines of run are not fetched

When the pipeline status changes to finished, the code stops polling for the latest logs. This can lead to a situation where the last log lines are never fetched. After the task status changes to "finished" we should fetch the logs one last time.

Better timezone management

Currently it takes the time of the server (?) and we can't set the timezone to display the correct local time. See what to do.

Dockerize pipelines

Need a nice Dockerfile + ability to configure the workspace + various config via env var.

Issue:

  • gotta keep it light
  • gotta find a way to still allow misc commands (e.g. ansible) that are not available by default in the container
    • either via docs on overloading the image
    • or by allowing this container to spawn... other containers to let them run their own commands...

Upgrade to webpack 2

We have to upgrade the client to use webpack 2 in order to have a smooth npm install and to be able to use modern modules.

Sort tasks

Currently, the returned tasks are in arbitrary order.
I think it would be better to sort them.

Fix error handling on template errors

We changed the templating to use Jinja2, so it will throw Jinja errors now. We should update the error handling code here to catch a Jinja base error class (if possible).

Include time in /api/pipelines reply

Either we include the time in the /api/pipelines response for the various runs, or we don't, but implicitly sort the runs by time (start / end). Either way, we need some way to know which are the latest runs.

see #9

Improve error handling on the UI

Currently the UI doesn't handle HTTP error responses properly. We should make sure we have error handling at least for:

  • Getting pipelines fails
  • Running pipeline fails
  • Getting logs fails
  • Getting triggers fails

GitHub authentication

We are adding GitHub OAuth authentication. You can select which teams are allowed to access the UI.

Command line argument: --github-auth=Acme/Team1,Acme/Team2

If the user does not belong to an allowed team, they get redirected back to the login page.

Improve test code

Currently the test code just prints the responses to the console, which have to be checked manually.
We need to improve it with assertions.

Need more data in the result from the execution of each step

Currently, running a bash command only returns the status (0, 1) and a simple message like "Bash command completed".

We need a better convention for handling executions.

E.g. Ansible requires every module to return a dict with a few fields ("status", "changed", etc.), and each module (in our case, plugin) would be able to extend this object with more data (e.g. stdout and stderr for a bash command).

Currently the response is not very useful (at least for bash) as we can't really use the result of a previous task in the following one(s).

On-the-fly update of the pipeline execution context within tasks

The pipeline_context is only instantiated on start, updated via the prompt / webhook, and on completion of each task's execution.

It is currently not possible to update that context from within tasks.

Similarly, since tasks are rendered via Jinja at the task level, it is not possible to use set xxx = sdds within one task and expect to use that value in the next task... (it is not per se the context of the pipelines, but that would have been a way to work around it).

{{ all_variables.json() }}

Would be a great way to retrieve all variables (prompt/vars combined) as JSON, for example to pass the information to a REST endpoint.

Improve the release process for UI

Currently, to update the UI we need to:

  • in /app run: NODE_ENV=production npm run dist
  • git add ../pipelines/api/app_dist/index.html and git add ../pipelines/api/app_dist/(bundlename)
  • commit

We should aim for:

  • Don't include the hash in the bundle name
  • Add an exception to .gitignore for that file.

Add task-id's to GET pipelines

Add the task IDs to the GET /pipelines response:

{
  pipelines: [
    {
        slug: (string),
        run_ids: ['324234-23423432-234234-234234', '234324-324324-234234-23433']
    }
  ]
}

[BUG] Install of client fails

This is the error I get when trying to install:

Berders-MacBook-2:app hunvreus$ npm install
npm WARN package.json [email protected] Normalized value of bugs field is an empty object. Deleted.
npm WARN deprecated [email protected]: Please use postcss-loader instead of autoprefixer-loader
npm WARN deprecated [email protected]: cross-spawn no longer requires a build toolchain, use it instead!
npm WARN deprecated [email protected]: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated [email protected]: ReDoS vulnerability parsing Set-Cookie https://nodesecurity.io/advisories/130
npm WARN deprecated [email protected]: this package has been reintegrated into npm and is now out of date with respect to npm
npm WARN deprecated [email protected]: cross-spawn no longer requires a build toolchain, use it instead!
npm WARN deprecated [email protected]: lodash@<3.0.0 is no longer maintained. Upgrade to lodash@^4.0.0.
npm WARN deprecated [email protected]: Please update to minimatch 3.0.2 or higher to avoid a RegExp DoS issue
npm WARN deprecated [email protected]: graceful-fs v3.0.0 and before will fail on node releases >= v7.0. Please update to graceful-fs@^4.0.0 as soon as possible. Use 'npm ls graceful-fs' to find it in the tree.

> [email protected] postinstall /Users/hunvreus/Workspace/pipelines/app/node_modules/node-sass/node_modules/cross-spawn/node_modules/spawn-sync
> node postinstall

-
> [email protected] install /Users/hunvreus/Workspace/pipelines/app/node_modules/node-sass
> node scripts/install.js

Binary downloaded and installed at /Users/hunvreus/Workspace/pipelines/app/node_modules/node-sass/vendor/darwin-x64-46/binding.node

> [email protected] postinstall /Users/hunvreus/Workspace/pipelines/app/node_modules/node-sass
> node scripts/build.js

` /Users/hunvreus/Workspace/pipelines/app/node_modules/node-sass/vendor/darwin-x64-46/binding.node ` exists. 
 testing binary.
Binary is fine; exiting.

> [email protected] install /Users/hunvreus/Workspace/pipelines/app/node_modules/webpack/node_modules/watchpack/node_modules/chokidar/node_modules/fsevents
> node-pre-gyp install --fallback-to-build

[fsevents] Success: "/Users/hunvreus/Workspace/pipelines/app/node_modules/webpack/node_modules/watchpack/node_modules/chokidar/node_modules/fsevents/lib/binding/Release/node-v46-darwin-x64/fse.node" already installed
Pass --update-binary to reinstall or --build-from-source to recompile

> [email protected] postinstall /Users/hunvreus/Workspace/pipelines/app
> npm run dist


> [email protected] dist /Users/hunvreus/Workspace/pipelines/app
> cross-env NODE_ENV=production webpack -p --progress

 95% emitfs.js:549          
  return binding.open(pathModule._makeLong(path), stringToFlags(flags), mode);
                 ^

Error: ENOENT: no such file or directory, open '/Users/hunvreus/Workspace/pipelines/app/index.tpl.html'
    at Error (native)
    at Object.fs.openSync (fs.js:549:18)
    at Object.fs.readFileSync (fs.js:397:15)
    at Compiler.<anonymous> (/Users/hunvreus/Workspace/pipelines/app/webpack.config.production.js:50:43)
    at Compiler.applyPlugins (/Users/hunvreus/Workspace/pipelines/app/node_modules/webpack/node_modules/tapable/lib/Tapable.js:26:37)
    at Compiler.<anonymous> (/Users/hunvreus/Workspace/pipelines/app/node_modules/webpack/lib/Compiler.js:193:12)
    at Compiler.emitRecords (/Users/hunvreus/Workspace/pipelines/app/node_modules/webpack/lib/Compiler.js:282:37)
    at Compiler.<anonymous> (/Users/hunvreus/Workspace/pipelines/app/node_modules/webpack/lib/Compiler.js:187:11)
    at /Users/hunvreus/Workspace/pipelines/app/node_modules/webpack/lib/Compiler.js:275:11
    at Compiler.next (/Users/hunvreus/Workspace/pipelines/app/node_modules/webpack/node_modules/tapable/lib/Tapable.js:67:11)
    at /Users/hunvreus/Workspace/pipelines/app/node_modules/assets-webpack-plugin/index.js:78:9
    at /Users/hunvreus/Workspace/pipelines/app/node_modules/assets-webpack-plugin/lib/output/createQueuedWriter.js:15:7
    at /Users/hunvreus/Workspace/pipelines/app/node_modules/assets-webpack-plugin/lib/output/createOutputWriter.js:50:11
    at /Users/hunvreus/Workspace/pipelines/app/node_modules/webpack/node_modules/enhanced-resolve/node_modules/graceful-fs/graceful-fs.js:43:10
    at FSReqWrap.oncomplete (fs.js:82:15)

npm ERR! Darwin 15.4.0
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "run" "dist"
npm ERR! node v4.4.3
npm ERR! npm  v2.15.1
npm ERR! code ELIFECYCLE
npm ERR! [email protected] dist: `cross-env NODE_ENV=production webpack -p --progress`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] dist script 'cross-env NODE_ENV=production webpack -p --progress'.
npm ERR! This is most likely a problem with the getPipelines package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     cross-env NODE_ENV=production webpack -p --progress
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs getPipelines
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! 
npm ERR!     npm owner ls getPipelines
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /Users/hunvreus/Workspace/pipelines/app/npm-debug.log

npm ERR! Darwin 15.4.0
npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "install"
npm ERR! node v4.4.3
npm ERR! npm  v2.15.1
npm ERR! code ELIFECYCLE
npm ERR! [email protected] postinstall: `npm run dist`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] postinstall script 'npm run dist'.
npm ERR! This is most likely a problem with the getPipelines package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     npm run dist
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs getPipelines
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! 
npm ERR!     npm owner ls getPipelines
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /Users/hunvreus/Workspace/pipelines/app/npm-debug.log

If our client is truly 100% separate from the backend, maybe we should move on with an http://electron.atom.io/ app to help DevOps folks move on with their days. This may also mean separating the repos for the client and backend...? @JuhaS

Pipelines crashes when a pipeline file is missing

Typically when using symlinks whose destination does not exist, Pipelines will try to read the file and fail, crashing the app...

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/tornado/web.py", line 1443, in _execute
    result = method(*self.path_args, **self.path_kwargs)
  File "/usr/local/lib/python2.7/site-packages/tornado/web.py", line 2800, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/pipelines/api/server.py", line 257, in get
    with open(os.path.join(workspace, path)) as f:
IOError: [Errno 2] No such file or directory: '/workspace/test.yml'
ERROR:tornado.access:500 GET /api/pipelines/ (172.17.0.1) 3.98ms

Bug: No status or log files when pipeline schema check fails

Expected behaviour

Pipelines should create status.json with status "failed" if the schema check fails.

Other exceptions raised while running tasks also seem to not update status.json. We should have a catch-all around pipeline execution that updates status.json.

Webhook interface to run pipelines

  • An API to generate a webhook URL with a slug ID
  • A processor (route handler) that maps the webhook URL to a pipeline and runs it.

The first step is to use a lightweight approach such as YAML files instead of a DB.

Include time in status.json

Need to include various time details in the status (a sketch follows the list):

  • pipeline's start / end time
  • pipeline's task start / end time
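
A hedged sketch of what the resulting status.json could look like (field names are a proposal, not the current format):

{
  "status": "failure",
  "start_time": "2017-03-23T01:15:12Z",
  "end_time": "2017-03-23T01:17:01Z",
  "tasks": [
    {
      "name": "Update local code base",
      "status": "failure",
      "start_time": "2017-03-23T01:15:12Z",
      "end_time": "2017-03-23T01:17:01Z"
    }
  ]
}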

Loosening pipelines' schema

The current schema is too strict: it requires vars and plugins; plugins can be defaulted to [] but vars requires an actual key + value.

Need to simplify the schema so that the only required field is tasks.

The simplest pipeline should be:

tasks:
  - ls

Need to report crash / exception on failure to run

e.g. using prompt currently raises an exception that is not caught, and the process never completes.


---
prompt:
  var: toto

actions:
  - echo {{ var }}

Current error in pipelines:

    value = future.result()
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/tornado/gen.py", line 1014, in run
    yielded = self.gen.throw(*exc_info)
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/pipelines/api/server.py", line 120, in _run_pipeline
    yield runner.run(pipeline_filepath, folder_path, params)
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/concurrent/futures/_base.py", line 398, in result
    return self.__get_result()
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/concurrent/futures/thread.py", line 55, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/pipelines/api/server.py", line 68, in run
    return pipe.run(params=params)
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/pipelines/pipeline/pipeline.py", line 161, in run
    task.args = substitute_variables(pipeline_context, task.args)
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/pipelines/pipeline/var_processing.py", line 40, in substitute_variables
    return _loop_strings(replace_vars_func, obj)
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/pipelines/pipeline/var_processing.py", line 47, in _loop_strings
    new_obj = dict([(_loop_strings(func, k), _loop_strings(func, v)) for k,v in obj.items()])
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/pipelines/pipeline/var_processing.py", line 45, in _loop_strings
    new_obj = func(obj)
  File "/home/wcl/pipelines_workspace/.pipelines/local/lib/python2.7/site-packages/pipelines/pipeline/var_processing.py", line 28, in replace_vars_func
    raise PipelineError('Missing variable: {}'.format(variable_name))
PipelineError: Missing variable: var

GitHub auth fails with None lower() error

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1445, in _execute
    result = yield result
  File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 1008, in run
    value = future.result()
  File "/usr/local/lib/python2.7/dist-packages/tornado/concurrent.py", line 232, in result
    raise_exc_info(self._exc_info)
  File "/usr/local/lib/python2.7/dist-packages/tornado/gen.py", line 1017, in run
    yielded = self.gen.send(value)
  File "/usr/local/lib/python2.7/dist-packages/pipelines/api/ghauth.py", line 189, in get
    if filter(lambda x: x['slug'].lower() == team.lower() and x.get('organization',{}).get('name', '').lower() == org.lower(), teams):
  File "/usr/local/lib/python2.7/dist-packages/pipelines/api/ghauth.py", line 189, in <lambda>
    if filter(lambda x: x['slug'].lower() == team.lower() and x.get('organization',{}).get('name', '').lower() == org.lower(), teams):
AttributeError: 'NoneType' object has no attribute 'lower'
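
The traceback suggests the organization name (or team slug) can be None. A minimal sketch of a defensive rewrite of that filter (names follow the traceback; the exact fix is an assumption):

# Sketch of a defensive version of the team filter in ghauth.py.
# GitHub may return None for the organization name, so .get('name', '')
# is not enough: the key exists but holds None.
def team_matches(membership, org, team):
    slug = (membership.get('slug') or '').lower()
    org_name = ((membership.get('organization') or {}).get('name') or '').lower()
    return slug == team.lower() and org_name == org.lower()

# org, team and teams are assumed in scope, as in the original handler
allowed = any(team_matches(m, org, team) for m in teams)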

Make the cookie secret dynamic

The cookie secret should be dynamically generated. We could store it in the .pipeline-store.json in the workspace, for example.

Handle when there are no runs

Currently the styling is off and the "runs" dropdown shows "Invalid date" if there are no previous runs. Need to:

  • Show "No runs" in dropdown
  • Fix the styling error related to play button
  • Show "No run selected" on main content-area.

Styling fixes

There are a number of things that need styling help.

1. Styling for the highlighted logs

There is a highlighter function for the logs that detects timestamps and wraps them in an element; the CSS for this should be revised:

[screenshot]

Highlighter: https://github.com/Wiredcraft/pipelines/blob/dev/app/client/components/Task/NewTask.js#L274

2. Var prompt

If a pipeline has "prompt" defined, when the user clicks Run we show a form that asks for the variables (this flow can be changed if needed). You can see this when you click Run on the sample-deploy pipeline. The form needs styling and currently looks like:

[screenshot]

The html comes from here: https://github.com/Wiredcraft/pipelines/blob/dev/app/client/components/Task/NewTask.js#L291

3. Bug hovering the history-points

When hovering the history-points, two tooltips appear:

[screenshot]

HTML is here: https://github.com/Wiredcraft/pipelines/blob/dev/app/client/components/Task/NewTask.js#L206

4. Webhook url for a pipeline

If a task has a webhook trigger, the backend will generate a webhook identifier (uuid) for it. This should be shown in the UI, probably in full URL form. It is unclear where in the UI we should put this.

Temporarily the webhook id is shown here:
[screenshot]

Satellites / remote exec support?

Nice-to-have feature, especially to allow having one central front-facing Pipelines instance and executors in different locations, sometimes internal to a LAN.

Proposed (high-level) implementation could be:

  • have a satellite mode for pipeline, that starts headless
  • expect to connect to a remote master
    • remote master would "see" incoming request and accept them (?)
    • remote master would display a code to include upon start of the satellite to effectively link the satellite to the master
  • the satellite would fetch the tasks from the master (?)
  • or the master would allow creation of pipelines and select which satellite to send it to
  • the satellite would maintain connection with the master via ...
  • webhooks hitting the master would then be routed back to the satellites for execution ...

Not required, and quite heavy work, but keeping it here.

Trigger pipelines using cron

Implement simple cron support within the API to run pipelines periodically. A possible trigger format is sketched after the task list.

Tasks:

  • Define the trigger-format
  • Choose suitable library (hopefully tornado-centric)
  • Implement the scheduler. Should be on the same process as the main API but decoupled in code.
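
A possible trigger format by analogy with the existing webhook trigger (hypothetical; defining the actual format is the first task above):

triggers:
    - type: cron
      schedule: '0 2 * * *'    # hypothetical: run every day at 02:00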

Pipe the bash output to logs

Currently, logs are only shown after a task finishes (for Bash tasks). We need to pipe the stream line by line.

To do this I will:

  • Add on_task_event event
  • Trigger the on_task_event from BashExecutor for every line, and don't put the whole output into "message" at the end (to avoid duplicating it in the logs); a sketch follows.
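
A minimal Python sketch of the second point, assuming an on_task_event callback like the one proposed above (its signature is an assumption):

import subprocess

# Sketch: run the bash command and emit each output line as it appears,
# instead of collecting everything into the final "message".
proc = subprocess.Popen(
    cmd,                      # the task's bash command, assumed in scope
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    universal_newlines=True,
)
for line in proc.stdout:
    # hypothetical event shape
    on_task_event({'type': 'log', 'line': line.rstrip('\n')})
status = proc.wait()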

Add checkbox type for prompt

prompt:
  trigger_next_step:
    type: checkbox
    default: true

...

  - name: Build image
    type: bash
    cmd: |
      ...
      {% if trigger_next_step %}
        echo "Trigger deployment"
        curl -X POST -d '{...}' -H 'Content-Type: application/json' \
          https://pipelines.com/api/webhook/<uuid>
      {% endif %}

Limit size of input parameter logs

Problem
Currently we print the whole webhook content at the start of the pipeline. This leads to very verbose logs in the case of GitHub webhooks (the content is long), making the logs less usable in general.

Log format:

Pipeline started. vars: .....(n+1 pages of content after this)

Solution

I will probably add a limit on the log length of the webhook content to 800 characters.
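
A minimal sketch of that limit (the 800-character cut-off comes from the proposal above; the helper name is hypothetical):

MAX_WEBHOOK_LOG_LEN = 800  # proposed cut-off from this issue

def truncate_for_log(text, limit=MAX_WEBHOOK_LOG_LEN):
    # Cap the webhook content before writing it to the pipeline log.
    if len(text) <= limit:
        return text
    return text[:limit] + '... (truncated)'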

@zbal making this high prio since it affects current UX in current projects.

API spec

GET /api/pipelines

Returns:

{
  pipelines: [
    {
        id: (string),
        name: (string)
        description: (string)
    }
  ]
}

POST /api/pipelines/1/run

Body: { params: {..} } (no need for params at first)

Returns:

{
  task_id: (string)
}

GET /api/pipelines/1/status/{task-id} (returns status)

The UI can poll this endpoint. We can upgrade this to a websocket later.

Returns:

  status: (string) ("success", "error", "pending")

GET /api/pipelines/1/log/{task-id} (returns logs and status)

The UI can poll this endpoint. We can upgrade this to a websocket later.

Returns:

  log: (string) (this can be a very long multiline string)

Highlight the running vars in the hover popup?

Currently, we only know when the pipeline was run and whether it was a success/failure. There is no way, other than checking the logs, to see what the supplied parameters were.

Can we add this information to the hover box? At least that will help when a task is used as a multi-purpose task.

[screenshot]

(e.g. adding the following would be useful:

vars:
  - var1: abcd
  - var2: efgh
  - var3: toto (prompted)

)

Bug: HTML escape log lines

Currently the log lines are not HTML-escaped, which leads to some unexpected formatting in some cases:

[screenshot]

We should properly escape the log lines. The color highlighting may currently rely on the logs not being escaped, so that has to be considered when fixing this.

Optimize the log updating

Currently when you run a pipeline, the UI polls the logs endpoint until it finishes. This means every time it fetches all of the logs, which is far from optimal.

Solutions:

    1. Send a row-start number to the logs endpoint to tell the backend to only return rows after row-start. The UI would then merge the returned rows into the previous rows.
    2. Implement a websocket to stream the logs to the frontend.

For the long term the websocket sounds better, and we can leverage it for other things as well.
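
A hedged sketch of the first option in the style of the API spec above (parameter and field names are a proposal):

GET /api/pipelines/1/log/{task-id}?row_start=120

Returns:

{
  log_rows: [ (string) ],
  next_row: (int)
}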

API freezes when tasks do not complete

The API freezes under some conditions, typically when tasks ran but never completed...

Killing the tasks (via ps aux and kill) effectively "releases" the API, and operation resumes.

See below a sample of the logs; notice that the process was "stuck" for quite a long time, then I killed the processes at the moment the "BashExec failed" messages appear.

2017-03-22 08:35:02,201 - DEBUG - Getting pipeline logs
2017-03-23 01:17:01,650 - DEBUG - BashExec failed
2017-03-23 01:17:01,651 - DEBUG - BashExec failed
2017-03-23 01:17:01,652 - DEBUG - Task finished. Result: {'status': 1, 'message': 'Bash task finished'}
2017-03-23 01:17:01,652 - DEBUG - Skipping task: Update local code base
2017-03-23 01:17:01,652 - DEBUG - Skipping task: Deploy updated container
2017-03-23 01:17:01,653 - DEBUG - Skipping task: Send slack notification
2017-03-23 01:17:01,653 - DEBUG - Pipeline finished. Status: failure

/cc @JuhaS @xuqingfeng
