recast-hep / recast-atlas
CLI for ATLAS RECAST contributors
Home Page: https://recast.docs.cern.ch/
License: Apache License 2.0
There are multiple environment variables that can affect the RECAST config
recast-atlas/src/recastatlas/config.py
Lines 22 to 72 in 63d06a7
though these aren't made explicitly clear to users. It would be helpful to add the ability to check all environment variables that can affect the config by adding something like recast backends config --check, which would act similarly to recast backends ls --check in that it would dump all environment information for backends to stdout.
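A minimal sketch of what such a recast backends config --check could print, assuming it simply enumerates the names read in recastatlas/config.py (the variable list below is illustrative, not the full set):

```python
import os

# Illustrative subset of the environment variables read by the config;
# the real list lives in recastatlas/config.py.
RECAST_ENV_VARS = [
    "RECAST_DEFAULT_RUN_BACKEND",
    "RECAST_DEFAULT_BUILD_BACKEND",
    "PACKTIVITY_CONTAINER_RUNTIME",
]

def dump_config_env(environ=None):
    """Return a {name: value-or-None} map for all config-affecting variables."""
    environ = os.environ if environ is None else environ
    return {name: environ.get(name) for name in RECAST_ENV_VARS}

for name, value in dump_config_env().items():
    print(f"{name}={value if value is not None else '<unset>'}")
```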
python -m pip install --upgrade '.[reana,local]' 'pyyaml>=6.0'
...
ERROR: Cannot install pyyaml>=6.0, reana-commons[snakemake,yadage]==0.8.0, reana-commons[snakemake,yadage]==0.8.1, reana-commons[snakemake,yadage]==0.8.2, reana-commons[snakemake,yadage]==0.8.3, reana-commons[snakemake,yadage]==0.8.4, reana-commons[snakemake,yadage]==0.8.5, recast-atlas, recast-atlas==0.3.0 and recast-atlas[local,reana]==0.3.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested pyyaml>=6.0
recast-atlas[local,reana] 0.3.0 depends on pyyaml>=5.1
yadage 0.21.0 depends on pyyaml
packtivity 0.16.2 depends on pyyaml
yadage-schemas 0.10.7 depends on pyyaml
recast-atlas 0.3.0 depends on pyyaml>=5.1
reana-commons[snakemake,yadage] 0.8.5 depends on PyYAML<6.0 and >=5.1
The user requested pyyaml>=6.0
recast-atlas[local,reana] 0.3.0 depends on pyyaml>=5.1
yadage 0.21.0 depends on pyyaml
packtivity 0.16.2 depends on pyyaml
yadage-schemas 0.10.7 depends on pyyaml
recast-atlas 0.3.0 depends on pyyaml>=5.1
reana-commons[snakemake,yadage] 0.8.4 depends on PyYAML<6.0 and >=5.1
The user requested pyyaml>=6.0
recast-atlas[local,reana] 0.3.0 depends on pyyaml>=5.1
yadage 0.21.0 depends on pyyaml
packtivity 0.16.2 depends on pyyaml
yadage-schemas 0.10.7 depends on pyyaml
recast-atlas 0.3.0 depends on pyyaml>=5.1
reana-commons[snakemake,yadage] 0.8.3 depends on PyYAML<6.0 and >=5.1
The user requested pyyaml>=6.0
recast-atlas[local,reana] 0.3.0 depends on pyyaml>=5.1
yadage 0.21.0 depends on pyyaml
packtivity 0.16.2 depends on pyyaml
yadage-schemas 0.10.7 depends on pyyaml
recast-atlas 0.3.0 depends on pyyaml>=5.1
reana-commons[snakemake,yadage] 0.8.2 depends on PyYAML<6.0 and >=5.1
The user requested pyyaml>=6.0
recast-atlas[local,reana] 0.3.0 depends on pyyaml>=5.1
yadage 0.21.0 depends on pyyaml
packtivity 0.16.2 depends on pyyaml
yadage-schemas 0.10.7 depends on pyyaml
recast-atlas 0.3.0 depends on pyyaml>=5.1
reana-commons[snakemake,yadage] 0.8.1 depends on PyYAML<6.0 and >=5.1
The user requested pyyaml>=6.0
recast-atlas[local,reana] 0.3.0 depends on pyyaml>=5.1
yadage 0.21.0 depends on pyyaml
packtivity 0.16.2 depends on pyyaml
yadage-schemas 0.10.7 depends on pyyaml
recast-atlas 0.3.0 depends on pyyaml>=5.1
reana-commons[snakemake,yadage] 0.8.0 depends on PyYAML<6.0 and >=5.1
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
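The resolver error above boils down to an empty intersection of version ranges: the user pin pyyaml>=6.0 cannot overlap with reana-commons' transitive pin PyYAML<6.0,>=5.1. This can be checked directly with the packaging library (assumed available):

```python
from packaging.specifiers import SpecifierSet

# The user's pin and reana-commons' transitive pin on PyYAML:
requested = SpecifierSet(">=6.0")
reana_pin = SpecifierSet(">=5.1,<6.0")
combined = requested & reana_pin

# No release can satisfy ">=6.0" and "<6.0" at once.
candidates = ["5.1", "5.4.1", "6.0", "6.0.1"]
satisfiable = [v for v in candidates if v in combined]
print(satisfiable)  # -> []
```

So until reana-commons lifts its upper bound, the install only resolves with a compatible pin such as 'pyyaml>=5.1,<6.0'.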
Lines 5 to 12 in b660442
Line 14 in b660442
this seems to be required by reana
do it!
Add functionality to check the default user in the image specified for each step in the steps.yml
, and automatically update the kubernetes_uid: XXX
resource on the fly based on the auto-detected UID.
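A sketch of how the auto-detection could work, assuming the UID is taken from the image's Config.User as reported by docker inspect (the function names are hypothetical, not existing recast-atlas API):

```python
import json
import subprocess

def default_uid_from_config(inspect_record, fallback=0):
    """Extract a numeric UID from a `docker inspect` record's Config.User.

    Config.User may be "", "1000", "1000:1000", or a user *name*; only
    numeric forms can be mapped to kubernetes_uid without consulting the
    image's /etc/passwd, so anything else returns the fallback.
    """
    user = (inspect_record.get("Config", {}).get("User") or "").split(":")[0]
    return int(user) if user.isdigit() else fallback

def default_uid(image, fallback=0):
    """Run `docker inspect` on a pulled image and auto-detect its UID."""
    out = subprocess.check_output(["docker", "inspect", image])
    return default_uid_from_config(json.loads(out)[0], fallback=fallback)
```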
@matthewfeickert correctly points out this repo is still a mess
From a discussion with @matteo-bauce it seems it would be prudent to be smarter about which files are examined when building the recast catalogue. Otherwise e.g. recast catalogue ls can simply crash when it encounters a yml file which is not a catalogue entry.
Perhaps something for @AlexSchuy
I might fix this myself when I have time, but the .yaml extension should be supported, as it is actually the recommended one for YAML files.
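Both points could be addressed together: accept either extension and skip files that fail validation rather than crashing. A rough sketch (the parse_entry callback stands in for whatever validation recast-atlas actually does):

```python
from pathlib import Path

def find_catalogue_files(directory):
    """Collect both .yml and .yaml files, sorted for stable ordering."""
    directory = Path(directory)
    return sorted(p for p in directory.iterdir() if p.suffix in (".yml", ".yaml"))

def load_catalogue(directory, parse_entry):
    """Build the catalogue, skipping files that are not valid entries
    instead of crashing `recast catalogue ls`."""
    entries = {}
    for path in find_catalogue_files(directory):
        try:
            entries[path.stem] = parse_entry(path)
        except Exception:
            # not a catalogue entry -- skip rather than crash
            continue
    return entries
```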
Similar to what pyhf does, it would be helpful to catch breaking changes like yadage/yadage#116 in advance.
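One way this is commonly done is a scheduled CI job that installs against the latest releases of the dependency stack. A hypothetical GitHub Actions workflow (file name and smoke-test step are illustrative):

```yaml
# Hypothetical .github/workflows/dependencies.yml: install with unpinned
# dependencies on a schedule so upstream breakage surfaces before users hit it.
name: HEAD of dependencies

on:
  schedule:
    - cron: '0 3 * * 1'  # weekly, Monday 03:00 UTC
  workflow_dispatch:

jobs:
  latest-deps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.x'
      - name: Install with latest dependencies
        run: python -m pip install --upgrade '.[local]'
      - name: Smoke test
        run: recast catalogue ls
```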
In the current README there is the example for using RECAST on LXPLUS
Lines 37 to 44 in 4264157
However, lxplus-cloud.cern.ch
doesn't seem to be a valid address. I think(?) this might now be lxplus8.cern.ch
given https://clouddocs.web.cern.ch/clients/lxplus.html (@lukasheinrich can you confirm?). However, if one logs onto lxplus8
then the following fails
[feickert@lxplus8s05 ~]$ readlink -f ~recast/public/setup.sh
/afs/cern.ch/user/r/recast/public/setup.sh
[feickert@lxplus8s05 ~]$ cat $(readlink -f ~recast/public/setup.sh)
export RECAST_DEFAULT_RUN_BACKEND=local
export RECAST_DEFAULT_BUILD_BACKEND=kubernetes
export PACKTIVITY_CONTAINER_RUNTIME=singularity
export SINGULARITY_CACHEDIR="/tmp/$(whoami)/singularity"
mkdir -p $SINGULARITY_CACHEDIR
# https://twitter.com/lukasheinrich_/status/1021398718996713475
# http://click.pocoo.org/5/python3/
export LC_ALL=en_US.utf-8
export LANG=en_US.utf-8
scl_source enable rh-python36
source ~recast/public/yadage/venv/bin/activate
$(recast catalogue add /eos/project/r/recast/atlas/catalogue)
export KUBECONFIG=/eos/project/r/recast/atlas/cluster/clusterconfig
export PATH=$PATH:~recast/public/bin
[feickert@lxplus8s05 ~]$ command -v scl_source # no output, so scl_source not found!
[feickert@lxplus8s05 ~]$ . ~recast/public/setup.sh
-bash: scl_source: command not found
/afs/cern.ch/user/r/recast/public/yadage/venv/bin/python3: error while loading shared libraries: libpython3.6m.so.rh-python36-1.0: cannot open shared object file: No such file or directory
(venv) [feickert@lxplus8s05 ~]$
So the public RECAST setup script shouldn't rely on scl_source, but it still needs Python 3 to get its virtual environment set up.
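A sketch of a setup.sh fragment that does not hard-depend on scl_source: prefer the system python3 and only fall back to Software Collections where they exist (names and paths are illustrative):

```shell
# Pick a Python 3 interpreter for the RECAST venv without assuming scl_source.
select_python() {
    if command -v python3 >/dev/null 2>&1; then
        command -v python3
    elif command -v scl_source >/dev/null 2>&1; then
        # legacy CentOS 7 path
        scl_source enable rh-python36 && command -v python3
    else
        echo "ERROR: no python3 available for the RECAST venv" >&2
        return 1
    fi
}

PYTHON="$(select_python)"
echo "using: ${PYTHON}"
```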
The yadage INFO output from recast run is the same regardless of the --loglevel option given to the run command. E.g., using the helloworld example,
recast --loglevel DEBUG run examples/helloworld --backend docker
and
recast --loglevel CRITICAL run examples/helloworld --backend docker
give the same output:
2020-03-24 18:00:53,276 | packtivity.asyncback | INFO | configured pool size to 2
2020-03-24 18:00:53,379 | yadage.creators | INFO | initializing workflow with initdata: {'name': 'hello'} discover: True relative: True
2020-03-24 18:00:53,381 | adage.pollingexec | INFO | preparing adage coroutine.
2020-03-24 18:00:53,381 | adage | INFO | starting state loop.
2020-03-24 18:00:53,450 | yadage.wflowview | INFO | added </init:0|defined|unknown>
2020-03-24 18:00:53,624 | yadage.wflowview | INFO | added </hello_world:0|defined|unknown>
2020-03-24 18:00:53,733 | adage.pollingexec | INFO | submitting nodes [</init:0|defined|known>]
2020-03-24 18:00:53,789 | pack.init.step | INFO | publishing data: <TypedLeafs: {u'name': u'hello'}>
2020-03-24 18:00:53,790 | adage | INFO | unsubmittable: 0 | submitted: 0 | successful: 0 | failed: 0 | total: 2 | open rules: 0 | applied rules: 2
2020-03-24 18:00:53,927 | adage.node | INFO | node ready </init:0|success|known>
2020-03-24 18:00:53,927 | adage.pollingexec | INFO | submitting nodes [</hello_world:0|defined|known>]
2020-03-24 18:00:53,933 | pack.hello_world.ste | INFO | starting file logging for topic: step
2020-03-24 18:00:58,521 | adage.node | INFO | node ready </hello_world:0|success|known>
2020-03-24 18:00:58,544 | adage.controllerutil | INFO | no nodes can be run anymore and no rules are applicable
2020-03-24 18:00:58,545 | adage.controllerutil | INFO | no nodes can be run anymore and no rules are applicable
2020-03-24 18:00:58,547 | adage | INFO | unsubmittable: 0 | submitted: 0 | successful: 2 | failed: 0 | total: 2 | open rules: 0 | applied rules: 2
2020-03-24 18:01:05,406 | adage | INFO | adage state loop done.
2020-03-24 18:01:05,407 | adage | INFO | execution valid. (in terms of execution order)
2020-03-24 18:01:05,407 | adage | INFO | workflow completed successfully.
2020-03-24 18:01:05,408 | yadage.steering_api | INFO | done. dumping workflow to disk.
2020-03-24 18:01:05,418 | yadage.steering_api | INFO | visualizing workflow.
RECAST result examples/helloworld recast-6626ff86:
--------------
- name: My Result
value: Hello my Name is hello
Is it possible to propagate the --loglevel
specification to the yadage backend logging?
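A hypothetical fix would apply the CLI --loglevel to the logger hierarchies the workflow engines actually emit on, rather than only to recastatlas's own logger. The logger names below are taken from the output above; the function is a sketch, not existing recast-atlas code:

```python
import logging

# Top-level logger names seen in the yadage/adage output above.
WORKFLOW_LOGGERS = ("yadage", "adage", "packtivity", "pack")

def set_workflow_loglevel(level_name):
    """Propagate the CLI --loglevel to the workflow engine loggers."""
    level = getattr(logging, level_name.upper())
    for name in WORKFLOW_LOGGERS:
        logging.getLogger(name).setLevel(level)

set_workflow_loglevel("CRITICAL")
# Child loggers such as "adage.pollingexec" inherit the level:
print(logging.getLogger("adage.pollingexec").getEffectiveLevel() == logging.CRITICAL)
```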
On a fresh venv, after installing recast-atlas[reana]:
reana-client secrets-add --help
seems to fail due to
Traceback (most recent call last):
File "/private/tmp/test_reana/_venv/bin/reana-client", line 6, in <module>
from pkg_resources import load_entry_point
File "/private/tmp/test_reana/_venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3126, in <module>
@_call_aside
File "/private/tmp/test_reana/_venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3110, in _call_aside
f(*args, **kwargs)
File "/private/tmp/test_reana/_venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 3139, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/private/tmp/test_reana/_venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 581, in _build_master
ws.require(__requires__)
File "/private/tmp/test_reana/_venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 898, in require
needed = self.resolve(parse_requirements(requirements))
File "/private/tmp/test_reana/_venv/lib/python3.7/site-packages/pkg_resources/__init__.py", line 784, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'isoduration; extra == "format"' distribution was not found and is required by jsonschema
has something recently changed @matthewfeickert ?
This seems to be too deeply nested Docker (note it mounts docker.sock, which might not be available on GHA). Cf. maxheld83/ghactions#307
Originally posted by @lukasheinrich in #46 (comment)
If recast-atlas is installed in a clean venv, recast backends ls --check will fail, as it checks all backends and not just the installed ones.
$ docker run --rm -it python:3.8 /bin/bash
root@232d8cb7cdcf:/# python -m venv venv && . venv/bin/activate
(venv) root@232d8cb7cdcf:/# python -m pip --quiet install --upgrade pip "setuptools<58.0.0" wheel six
(venv) root@232d8cb7cdcf:/# python -m pip install recast-atlas
(venv) root@232d8cb7cdcf:/# pip freeze | grep recast-atlas
recast-atlas==0.1.8
(venv) root@232d8cb7cdcf:/# recast backends ls --check
NAME DESCRIPTION STATUS
Traceback (most recent call last):
File "/venv/bin/recast", line 8, in <module>
sys.exit(recastatlas())
File "/venv/lib/python3.8/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/venv/lib/python3.8/site-packages/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/venv/lib/python3.8/site-packages/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/venv/lib/python3.8/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/venv/lib/python3.8/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/venv/lib/python3.8/site-packages/recastatlas/subcommands/backends.py", line 18, in ls
status = "OK" if check_backend(k) else "NOT OK"
File "/venv/lib/python3.8/site-packages/recastatlas/backends/__init__.py", line 90, in check_backend
return BACKENDS[backend].check_backend()
KeyError: 'local'
This might be the intended behavior, to motivate installing all backends, but it doesn't seem helpful for diagnostics.
Thoughts @lukasheinrich?
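A sketch of the graceful alternative: report "NOT INSTALLED" instead of raising KeyError for backends whose extras aren't present. BACKENDS here is a simplified stand-in for the registry in recastatlas.backends:

```python
# All backend names the CLI knows about:
KNOWN_BACKENDS = ["local", "docker", "kubernetes", "reana"]

# Only backends whose extras are installed end up registered; in the clean
# venv above only the base "docker" backend would be present.
BACKENDS = {"docker": lambda: True}

def backend_status(name):
    """Status string for `recast backends ls --check` that degrades
    gracefully for unregistered backends instead of raising KeyError."""
    check = BACKENDS.get(name)
    if check is None:
        return "NOT INSTALLED"
    return "OK" if check() else "NOT OK"

for name in KNOWN_BACKENDS:
    print(f"{name:<12}{backend_status(name)}")
```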
It seems on lxplus9 that apptainer is able to run a container image from Docker Hub just fine at first, but if the Singularity/Apptainer variables are set to allow access to private CERN GitLab container registries, this causes apptainer to fail when trying to interact with Docker Hub images.
Example:
$ ssh lxplus9.cern.ch
[feickert@lxplus916 ~]$ export APPTAINER_CACHEDIR="/tmp/${USER}/singularity"
[feickert@lxplus916 ~]$ export SINGULARITY_CACHEDIR="${APPTAINER_CACHEDIR}"
[feickert@lxplus916 ~]$ mkdir -p "${APPTAINER_CACHEDIR}"
[feickert@lxplus916 ~]$ apptainer exec -C docker://eschanet/docker_pyhf:v0.2 bash
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
...
INFO: Creating SIF file...
Apptainer>
Apptainer> pyhf --version
pyhf, version 0.5.3
Apptainer> exit
exit
[feickert@lxplus916 ~]$
[feickert@lxplus916 ~]$ export APPTAINER_DOCKER_USERNAME=#secret
[feickert@lxplus916 ~]$ export APPTAINER_DOCKER_PASSWORD=#secret
[feickert@lxplus916 ~]$ apptainer exec -C docker://eschanet/docker_pyhf:v0.2 bash
FATAL: Unable to handle docker://eschanet/docker_pyhf:v0.2 uri: failed to get checksum for docker://eschanet/docker_pyhf:v0.2: unable to retrieve auth token: invalid username/password: unauthorized: incorrect username or password
[feickert@lxplus916 ~]$ unset APPTAINER_DOCKER_USERNAME
[feickert@lxplus916 ~]$ unset APPTAINER_DOCKER_PASSWORD
[feickert@lxplus916 ~]$ apptainer exec -C docker://eschanet/docker_pyhf:v0.2 bash
INFO: Using cached SIF image
Apptainer> exit
exit
[feickert@lxplus916 ~]$
So I'm not sure how to work around this for people who want to test on lxplus but also use images from both private CERN GitLab container registries and public Docker Hub.
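One possible workaround, based on the behavior shown above: scope the GitLab credentials to the single command that needs them, so Docker Hub pulls run unauthenticated. GITLAB_USER and GITLAB_TOKEN are placeholder names:

```shell
# Run a command with the registry credentials set only for that one process.
with_gitlab_creds() {
    APPTAINER_DOCKER_USERNAME="$GITLAB_USER" \
    APPTAINER_DOCKER_PASSWORD="$GITLAB_TOKEN" \
        "$@"
}

# Public image: plain invocation, no credential vars in the environment.
#   apptainer exec -C docker://eschanet/docker_pyhf:v0.2 pyhf --version
# Private CERN GitLab image: credentials only for this one invocation.
#   with_gitlab_creds apptainer exec -C docker://gitlab-registry.cern.ch/... cmd
```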
This will affect https://gitlab.cern.ch/recast-atlas/susy/ana-susy-2019-08.
Developing Recast workflows on ARM chips (e.g. Apple M1/2) is currently limited as most docker images only support x86 architectures.
Docker images are natively tied to processor architectures, so running images built for x86 is not possible on ARM. When using the Recast docker backend, containers are started from within containers, which adds an extra layer of complexity.
I currently see two solutions to the issue:
- Emulation at runtime (either via docker run --platform linux/amd64 ... or the env var export DOCKER_DEFAULT_PLATFORM=linux/amd64). While this can easily be enabled for the docker image of Recast itself (currently recast/recastatlas:v0.3.0), enabling it for the docker images that Recast starts would require changes in the Recast code. My temporary local solution can be found here. Using emulation at runtime is easy but can be slow.
I am happy to discuss further steps to enable the development of Recast workflows on ARM architectures.
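A sketch of what a Recast-side change could look like: thread an optional platform through to the docker run invocation that gets built for each step. The function below is illustrative, not the actual packtivity API:

```python
import os

def docker_run_args(image, platform=None, environ=os.environ):
    """Build a `docker run` argument list, honoring an explicit platform
    or the DOCKER_DEFAULT_PLATFORM environment variable."""
    platform = platform or environ.get("DOCKER_DEFAULT_PLATFORM")
    args = ["docker", "run", "--rm", "-i"]
    if platform:
        args += ["--platform", platform]
    return args + [image]

print(docker_run_args("busybox", platform="linux/amd64"))
```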
The
PACKTIVITY_DOCKER_CMD_MOD
is not needed in my case.
...
If you are interested we could troubleshoot why you need the PACKTIVITY_DOCKER_CMD_MOD and I don't.
But we do not necessarily need to do this and I am also happy to just leave things as they are.
Also if we troubleshoot this we should probably move the discussion to a new issue.
Originally posted by @Nollde in #118 (comment)
Depending on the id and permissions of the user, there can be situations in which the environment setup for using the 'local' backend is not the same. This should ideally be unified and abstracted away from the user so that they only need to worry about using the simple CLI API.
i'm ashamed of this repo @matthewfeickert :-p
A question was brought up on the analysis preservation mattermost channel as to whether it's possible to supply fields specified in the recast.yml file as command-line parameters for quick testing.
As far as I can tell this is not currently possible. If this is accurate, is this something that would be realistic and generally useful enough to consider adding as a feature?
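If it were added, the parsing side is straightforward; a sketch of merging repeated key=value options over the fields loaded from recast.yml (the option shape and function names are hypothetical):

```python
def parse_overrides(pairs):
    """Parse a list of 'key=value' strings into a dict of overrides."""
    overrides = {}
    for pair in pairs:
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"expected key=value, got {pair!r}")
        overrides[key] = value
    return overrides

def merge_initdata(from_yaml, cli_pairs):
    """Overlay CLI overrides on top of the recast.yml fields."""
    merged = dict(from_yaml)
    merged.update(parse_overrides(cli_pairs))
    return merged

print(merge_initdata({"did": 404958, "xsec_in_pb": 0.00122},
                     ["xsec_in_pb=0.5"]))
```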
For testing workflows quickly locally, one could imagine it would be useful to not have to pull a remote Docker image but be able to use a local one (example: trying to test how changes to a Docker image affect a workflow). At the moment, there doesn't seem to be any way to specify this when defining an environment, e.g.
environment:
environment_type: docker-encapsulated
image: atlas/analysisbase
imagetag: 21.2.174
and there doesn't seem to be a way to do this with packtivity v0.14.24's environment handlers.
@lukasheinrich Are there particular reasons that you could think of that make this a bad idea?
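For concreteness, the feature could look something like the following. Note that image_policy is NOT an existing packtivity field; it is purely an illustration of what opting out of the remote pull might look like:

```yaml
environment:
  environment_type: docker-encapsulated
  image: atlas/analysisbase
  imagetag: 21.2.174
  image_policy: local   # hypothetical: use the locally built tag, never pull
```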
Dear experts,
I was wondering if there is a way to manually assign CPU core and memory limits to jobs running via the docker backend, so that I can better configure the resources for my jobs?
Many thanks!
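An untested sketch of one possible route: docker itself supports resource flags (--cpus, --memory), and packtivity's PACKTIVITY_DOCKER_CMD_MOD (mentioned elsewhere in this thread) injects extra arguments into the generated docker run command line. Whether these particular flags work through it would need verification:

```shell
# Hypothetical: pass docker resource limits through packtivity's command modifier.
export PACKTIVITY_DOCKER_CMD_MOD="--cpus 2 --memory 4g"
```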
@lukasheinrich This LGTM, but as a follow up PR what are your thoughts about just going full pathlib
? By design, pathlib
doesn't have trailing slashes.
Originally posted by @matthewfeickert in #53 (review)
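To illustrate the point about trailing slashes, pathlib normalizes them away by construction, which removes a whole class of path-joining bugs:

```python
from pathlib import PurePosixPath

# Trailing slashes are normalized away at construction time:
print(PurePosixPath("workdir/") == PurePosixPath("workdir"))  # -> True
print(str(PurePosixPath("workdir/")))                         # -> workdir

# and joining never doubles separators:
print(str(PurePosixPath("workdir") / "sub"))                  # -> workdir/sub
```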
In a clean venv
$ docker run --rm -it python:3.8 /bin/bash
root@232d8cb7cdcf:/# python -m venv venv && . venv/bin/activate
(venv) root@232d8cb7cdcf:/# python -m pip --quiet install --upgrade pip "setuptools<58.0.0" wheel six
(venv) root@232d8cb7cdcf:/# python -m pip install recast-atlas
(venv) root@232d8cb7cdcf:/# pip freeze | grep recast-atlas
recast-atlas==0.1.8
(venv) root@232d8cb7cdcf:/# recast backends ls --check
Traceback (most recent call last):
File "/venv/bin/recast", line 5, in <module>
from recastatlas.cli import recastatlas
File "/venv/lib/python3.8/site-packages/recastatlas/cli.py", line 5, in <module>
from .subcommands.catalogue import catalogue
File "/venv/lib/python3.8/site-packages/recastatlas/subcommands/catalogue.py", line 12, in <module>
from ..testing import validate_entry
File "/venv/lib/python3.8/site-packages/recastatlas/testing.py", line 1, in <module>
import yadageschemas
File "/venv/lib/python3.8/site-packages/yadageschemas/__init__.py", line 8, in <module>
from .validator import validate_spec
File "/venv/lib/python3.8/site-packages/yadageschemas/validator.py", line 4, in <module>
from .dialects import raw_with_defaults
File "/venv/lib/python3.8/site-packages/yadageschemas/dialects/raw_with_defaults.py", line 118, in <module>
import six.moves.urllib as urllib
ModuleNotFoundError: No module named 'six'
yadageschemas
uses six
in yadageschemas/dialects/raw_with_defaults.py
but doesn't specify this in its requirements
install_requires = [
'jsonref',
'pyyaml',
'requests[security]>=2.9',
'jsonschema',
'click',
],
This is a yadage-schemas problem, but a stopgap would be to temporarily add six to recast-atlas's requires.
In versions 0.1.1
through 0.1.8
(latest), when running recast workflows, the workflow visualization lacks titles for each of the steps and artefacts. In their place, square placeholder glyphs are used. It appears that all workflow steps are in the correct places with the expected layout, but there are no titles. If I downgrade to recast 0.1.0, the titles in the workflow return to normal.
This issue appears in the .gif, .png, and .pdf versions of the visualization.
I am running macOS Big Sur 11.6 (20G165)
This can be minimally reproduced with the following:
pip install recast-atlas==0.1.0
recast run examples/rome
which produces _yadage/yadage_workflow_instance.png
of
compared to
pip install recast-atlas==0.1.1
recast run examples/rome
In the current RECAST docs, one of the first things a new user sees is the rome example:
pip install recast-atlas
recast run examples/rome # using `--backend docker` by default
This command will work and produce the expected output from the docs. However, running this example with the --backend local
option will fail.
$ cd /tmp
feickert@ThinkPad-X1:/tmp$ pyenv virtualenv 3.8.11 example
feickert@ThinkPad-X1:/tmp$ pyenv activate example
(example) feickert@ThinkPad-X1:/tmp$ python -m pip --quiet install --upgrade pip setuptools wheel
(example) feickert@ThinkPad-X1:/tmp$ python -m pip --quiet install recast-atlas
(example) feickert@ThinkPad-X1:/tmp$ pip show recast-atlas
Name: recast-atlas
Version: 0.1.7
Summary: RECAST for ATLAS at the LHC
Home-page: UNKNOWN
Author: Lukas Heinrich
Author-email: [email protected]
License: UNKNOWN
Location: /home/feickert/.pyenv/versions/3.8.11/envs/example/lib/python3.8/site-packages
Requires: click, yadage-schemas, jsonschema, pyyaml
Required-by:
(example) feickert@ThinkPad-X1:/tmp$ recast catalogue ls
NAME DESCRIPTION EXAMPLES TAGS
atlas/atlas-conf-2018-041 ATLAS MBJ default
examples/checkmate1 CheckMate Tutorial Example (Herwig + CM1) default
examples/checkmate2 CheckMate Tutorial Example (Herwig + CM2) default
examples/rome Example from ATLAS Exotics Rome Workshop 2018 default,newsignal
testing/busyboxtest Simple, lightweight Functionality Test default
(example) feickert@ThinkPad-X1:/tmp$ recast run examples/rome # using `--backend docker` by default
Unable to find image 'recast/recastatlas:v0.1.7' locally
v0.1.7: Pulling from recast/recastatlas
ca3cd42a7c95: Already exists
fbd7def92be5: Already exists
071c71d4725b: Already exists
f725c26e6c96: Already exists
3ca6f85a1371: Already exists
f46e76fe2b43: Already exists
ea4bb38bf23d: Already exists
37e1a0b67691: Pull complete
42618669bd89: Pull complete
c56cd0b9b5fe: Pull complete
871520166c21: Pull complete
e7e91df4609c: Pull complete
0290390f308a: Pull complete
Digest: sha256:a5b26672db39fa6fc8b7a620d42271cab6fed47e231b93d5463491f229c11040
Status: Downloaded newer image for recast/recastatlas:v0.1.7
2021-09-20 20:41:06,930 | packtivity.asyncback | INFO | configured pool size to 12
2021-09-20 20:41:07,384 | yadage.creators | INFO | initializing workflow with initdata: {'did': 404958, 'dxaod_file': 'https://recastwww.web.cern.ch/recastwww/data/reana-recast-demo/mc15_13TeV.123456.cap_recast_demo_signal_one.root', 'xsec_in_pb': 0.00122} discover: True relative: True
2021-09-20 20:41:07,384 | adage.pollingexec | INFO | preparing adage coroutine.
2021-09-20 20:41:07,384 | adage | INFO | starting state loop.
2021-09-20 20:41:07,491 | yadage.wflowview | INFO | added </init:0|defined|unknown>
2021-09-20 20:41:08,660 | yadage.wflowview | INFO | added </eventselection:0|defined|unknown>
2021-09-20 20:41:10,262 | yadage.wflowview | INFO | added </statanalysis:0|defined|unknown>
2021-09-20 20:41:12,247 | adage.pollingexec | INFO | submitting nodes [</init:0|defined|known>]
2021-09-20 20:41:12,946 | pack.init.step | INFO | publishing data: <TypedLeafs: {'did': 404958, 'dxaod_file': 'https://recastwww.web.cern.ch/recastwww/data/reana-recast-demo/mc15_13TeV.123456.cap_recast_demo_signal_one.root', 'xsec_in_pb': 0.00122}>
2021-09-20 20:41:12,946 | adage | INFO | unsubmittable: 0 | submitted: 0 | successful: 0 | failed: 0 | total: 3 | open rules: 0 | applied rules: 3
2021-09-20 20:41:14,961 | adage.node | INFO | node ready </init:0|success|known>
2021-09-20 20:41:14,961 | adage.pollingexec | INFO | submitting nodes [</eventselection:0|defined|known>]
2021-09-20 20:41:14,962 | pack.eventselection. | INFO | starting file logging for topic: step
2021-09-20 20:41:28,979 | adage.node | INFO | node ready </eventselection:0|success|known>
2021-09-20 20:41:28,980 | adage.pollingexec | INFO | submitting nodes [</statanalysis:0|defined|known>]
2021-09-20 20:41:28,981 | pack.statanalysis.st | INFO | starting file logging for topic: step
2021-09-20 20:41:35,157 | adage.node | INFO | node ready </statanalysis:0|success|known>
2021-09-20 20:41:35,178 | adage.controllerutil | INFO | no nodes can be run anymore and no rules are applicable
2021-09-20 20:41:35,178 | adage.controllerutil | INFO | no nodes can be run anymore and no rules are applicable
2021-09-20 20:41:35,178 | adage | INFO | unsubmittable: 0 | submitted: 0 | successful: 3 | failed: 0 | total: 3 | open rules: 0 | applied rules: 3
2021-09-20 20:41:38,746 | adage | INFO | adage state loop done.
2021-09-20 20:41:38,746 | adage | INFO | execution valid. (in terms of execution order)
2021-09-20 20:41:38,747 | adage | INFO | workflow completed successfully.
2021-09-20 20:41:38,747 | yadage.steering_api | INFO | done. dumping workflow to disk.
2021-09-20 20:41:38,749 | yadage.steering_api | INFO | visualizing workflow.
/usr/lib/python3.8/subprocess.py:848: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)
/usr/lib/python3.8/subprocess.py:842: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdin = io.open(p2cwrite, 'wb', bufsize)
/usr/lib/python3.8/subprocess.py:848: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)
/usr/lib/python3.8/subprocess.py:848: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)
/usr/lib/python3.8/subprocess.py:842: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdin = io.open(p2cwrite, 'wb', bufsize)
/usr/lib/python3.8/subprocess.py:848: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)
2021-09-20 15:41:39,542 | recastatlas.subcomma | INFO | RECAST run finished.
RECAST result examples/rome recast-03fef822:
--------------
- name: CLs 95% based upper limit on poi
value:
exp: 0.8924846399964371
exp_m1: 0.6377501820447065
exp_m2: 0.4731739008380644
exp_p1: 1.2720762961732819
exp_p2: 1.7545752712294322
obs: 1.3352971254860764
- name: CLs 95% at nominal poi
value:
exp: 0.25999040745937085
exp_m1: 0.10547655600578199
exp_m2: 0.03889040527686523
exp_p1: 0.5345040672498215
exp_p2: 0.8276574946063575
obs: 0.574709475331039
(example) feickert@ThinkPad-X1:/tmp$ python -m pip --quiet install --upgrade "git+https://github.com/recast-hep/recast-atlas.git@4eba02ea6678f253a9bb578cf70385d01b581f32#egg=recast-atlas[local]" # To have local extra not fail on install
(example) feickert@ThinkPad-X1:/tmp$ recast run examples/rome # Rerun of the same command with upgraded release still works
(example) feickert@ThinkPad-X1:/tmp$ recast run examples/rome --backend local
2021-09-20 15:43:21,961 | packtivity.asyncback | INFO | configured pool size to 12
2021-09-20 15:43:22,353 | yadage.creators | INFO | initializing workflow with initdata: {'did': 404958, 'dxaod_file': 'https://recastwww.web.cern.ch/recastwww/data/reana-recast-demo/mc15_13TeV.123456.cap_recast_demo_signal_one.root', 'xsec_in_pb': 0.00122} discover: True relative: True
2021-09-20 15:43:22,353 | adage.pollingexec | INFO | preparing adage coroutine.
2021-09-20 15:43:22,353 | adage | INFO | starting state loop.
2021-09-20 15:43:22,421 | yadage.wflowview | INFO | added </init:0|defined|unknown>
2021-09-20 15:43:23,476 | yadage.wflowview | INFO | added </eventselection:0|defined|unknown>
2021-09-20 15:43:24,965 | yadage.wflowview | INFO | added </statanalysis:0|defined|unknown>
2021-09-20 15:43:26,822 | adage.pollingexec | INFO | submitting nodes [</init:0|defined|known>]
2021-09-20 15:43:27,427 | pack.init.step | INFO | publishing data: <TypedLeafs: {'did': 404958, 'dxaod_file': 'https://recastwww.web.cern.ch/recastwww/data/reana-recast-demo/mc15_13TeV.123456.cap_recast_demo_signal_one.root', 'xsec_in_pb': 0.00122}>
2021-09-20 15:43:27,427 | adage | INFO | unsubmittable: 0 | submitted: 0 | successful: 0 | failed: 0 | total: 3 | open rules: 0 | applied rules: 3
2021-09-20 15:43:29,369 | adage.node | INFO | node ready </init:0|success|known>
2021-09-20 15:43:29,369 | adage.pollingexec | INFO | submitting nodes [</eventselection:0|defined|known>]
2021-09-20 15:43:29,370 | pack.eventselection. | INFO | starting file logging for topic: step
/home/feickert/.pyenv/versions/3.8.11/lib/python3.8/subprocess.py:848: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)
/home/feickert/.pyenv/versions/3.8.11/lib/python3.8/subprocess.py:842: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdin = io.open(p2cwrite, 'wb', bufsize)
/home/feickert/.pyenv/versions/3.8.11/lib/python3.8/subprocess.py:848: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
self.stdout = io.open(c2pread, 'rb', bufsize)
2021-09-20 15:43:31,511 | pack.eventselection. | ERROR | non-zero return code raising exception
2021-09-20 15:43:31,511 | pack.eventselection. | ERROR | subprocess failed. code: 134, command docker run --rm -i --cidfile /tmp/recast-64ee8b66/eventselection/_packtivity/eventselection.cid -v /tmp/recast-64ee8b66/eventselection:/tmp/recast-64ee8b66/eventselection:rw -v /tmp/recast-64ee8b66/init:/tmp/recast-64ee8b66/init:rw reanahub/reana-demo-atlas-recast-eventselection:1.0 sh -c bash
Traceback (most recent call last):
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/packtivity/handlers/execution_handlers.py", line 332, in execute_and_tail_subprocess
raise subprocess.CalledProcessError(
subprocess.CalledProcessError: Command 'docker run --rm -i --cidfile /tmp/recast-64ee8b66/eventselection/_packtivity/eventselection.cid -v /tmp/recast-64ee8b66/eventselection:/tmp/recast-64ee8b66/eventselection:rw -v /tmp/recast-64ee8b66/init:/tmp/recast-64ee8b66/init:rw reanahub/reana-demo-atlas-recast-eventselection:1.0 sh -c bash' returned non-zero exit status 134.
2021-09-20 15:43:31,511 | pack.eventselection. | ERROR | job execution if job {'name': 'eventselection', 'wflow_node_id': '734229ac-e75a-4f21-b412-456a2db08ff7', 'wflow_offset': '', 'wflow_stage': 'eventselection', 'wflow_stage_node_idx': 0, 'wflow_hints': {'is_purepub': False}} raise exception exception
Traceback (most recent call last):
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/packtivity/handlers/execution_handlers.py", line 332, in execute_and_tail_subprocess
raise subprocess.CalledProcessError(
subprocess.CalledProcessError: Command 'docker run --rm -i --cidfile /tmp/recast-64ee8b66/eventselection/_packtivity/eventselection.cid -v /tmp/recast-64ee8b66/eventselection:/tmp/recast-64ee8b66/eventselection:rw -v /tmp/recast-64ee8b66/init:/tmp/recast-64ee8b66/init:rw reanahub/reana-demo-atlas-recast-eventselection:1.0 sh -c bash' returned non-zero exit status 134.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/packtivity/syncbackends.py", line 192, in run_packtivity
run_in_env(job, env, state, metadata, pack_config, exec_config)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/packtivity/syncbackends.py", line 130, in run_in_env
return handler(exec_config, environment, state, job, metadata)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/packtivity/handlers/execution_handlers.py", line 460, in docker_enc_handler
result = run(config, state, log, metadata, rspec)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/packtivity/handlers/execution_handlers.py", line 404, in run_containers_in_docker_runtime
execute_and_tail_subprocess(
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/packtivity/handlers/execution_handlers.py", line 340, in execute_and_tail_subprocess
raise RuntimeError("failed container execution subprocess. %s", command_string)
RuntimeError: ('failed container execution subprocess. %s', 'docker run --rm -i --cidfile /tmp/recast-64ee8b66/eventselection/_packtivity/eventselection.cid -v /tmp/recast-64ee8b66/eventselection:/tmp/recast-64ee8b66/eventselection:rw -v /tmp/recast-64ee8b66/init:/tmp/recast-64ee8b66/init:rw reanahub/reana-demo-atlas-recast-eventselection:1.0 sh -c bash')
2021-09-20 15:43:33,371 | adage.node | INFO | node ready </eventselection:0|failed|known>
2021-09-20 15:43:33,391 | adage.controllerutil | INFO | no nodes can be run anymore and no rules are applicable
2021-09-20 15:43:33,392 | adage.controllerutil | INFO | no nodes can be run anymore and no rules are applicable
2021-09-20 15:43:33,392 | adage | ERROR | some weird exception caught in adage process loop
Traceback (most recent call last):
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/adage/__init__.py", line 51, in run_polling_workflow
for stepnum, controller in enumerate(coroutine):
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/adage/pollingexec.py", line 89, in adage_coroutine
raise RuntimeError('workflow finished but failed')
RuntimeError: workflow finished but failed
2021-09-20 15:43:33,393 | adage | ERROR | node: </eventselection:0|failed|known> failed. reason: unknown
2021-09-20 15:43:33,393 | adage | INFO | unsubmittable: 1 | submitted: 0 | successful: 1 | failed: 1 | total: 3 | open rules: 0 | applied rules: 3
2021-09-20 15:43:35,650 | yadage.steering_api | INFO | done. dumping workflow to disk.
2021-09-20 15:43:35,651 | recastatlas.subcomma | ERROR | caught exception
Traceback (most recent call last):
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/recastatlas/backends/local.py", line 20, in run_workflow
run_workflow(**spec)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/yadage/steering_api.py", line 20, in run_workflow
pass
File "/home/feickert/.pyenv/versions/3.8.11/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/yadage/steering_api.py", line 110, in steering_ctx
execute_steering(
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/yadage/steering_api.py", line 60, in execute_steering
ys.run_adage(backend)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/yadage/steering_object.py", line 100, in run_adage
adage.rundag(controller=self.controller, **self.adage_kwargs)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/adage/__init__.py", line 137, in rundag
run_polling_workflow(controller, coroutine, update_interval, trackerlist, maxsteps)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/adage/__init__.py", line 51, in run_polling_workflow
for stepnum, controller in enumerate(coroutine):
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/adage/pollingexec.py", line 89, in adage_coroutine
raise RuntimeError('workflow finished but failed')
RuntimeError: workflow finished but failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/recastatlas/subcommands/run.py", line 52, in run
run_sync(name, spec, backend=backend)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/recastatlas/backends/__init__.py", line 69, in run_sync
BACKENDS[backend].run_workflow(name,spec)
File "/home/feickert/.pyenv/versions/example/lib/python3.8/site-packages/recastatlas/backends/local.py", line 22, in run_workflow
raise FailedRunException
recastatlas.exceptions.FailedRunException
Error: Workflow failed
Exception ignored in: <function Pool.__del__ at 0x7fc93d79c8b0>
Traceback (most recent call last):
File "/home/feickert/.pyenv/versions/3.8.11/lib/python3.8/multiprocessing/pool.py", line 268, in __del__
self._change_notifier.put(None)
File "/home/feickert/.pyenv/versions/3.8.11/lib/python3.8/multiprocessing/queues.py", line 368, in put
self._writer.send_bytes(obj)
File "/home/feickert/.pyenv/versions/3.8.11/lib/python3.8/multiprocessing/connection.py", line 200, in send_bytes
self._send_bytes(m[offset:offset + size])
File "/home/feickert/.pyenv/versions/3.8.11/lib/python3.8/multiprocessing/connection.py", line 411, in _send_bytes
self._send(header + buf)
File "/home/feickert/.pyenv/versions/3.8.11/lib/python3.8/multiprocessing/connection.py", line 368, in _send
n = write(self._handle, buf)
OSError: [Errno 9] Bad file descriptor
(example) feickert@ThinkPad-X1:/tmp$
The resulting recast directory is attached as a zip file (recast-64ee8b66.zip), but perhaps also relevant is the output of eventselection.run.log:
(example) feickert@ThinkPad-X1:/tmp$ cat recast-64ee8b66/eventselection/_packtivity/eventselection.run.log
2021-09-20 15:43:30,556 | pack.eventselection. | INFO | starting file logging for topic: run
2021-09-20 15:43:31,059 | pack.eventselection. | INFO | b'Configured GCC from: /opt/lcg/gcc/6.2.0binutils/x86_64-slc6'
2021-09-20 15:43:31,060 | pack.eventselection. | INFO | b'Configured AnalysisBase from: /usr/AnalysisBase/21.2.51/InstallArea/x86_64-slc6-gcc62-opt'
2021-09-20 15:43:31,182 | pack.eventselection. | INFO | b'xAOD::Init INFO Environment initialised for data access'
2021-09-20 15:43:31,182 | pack.eventselection. | INFO | b'SampleHandler with 1 files'
2021-09-20 15:43:31,182 | pack.eventselection. | INFO | b'Sample:name=sample,tags=()'
2021-09-20 15:43:31,182 | pack.eventselection. | INFO | b'https://recastwww.web.cern.ch/recastwww/data/reana-recast-demo/mc15_13TeV.123456.cap_recast_demo_signal_one.root'
2021-09-20 15:43:31,182 | pack.eventselection. | INFO | b''
2021-09-20 15:43:31,182 | pack.eventselection. | INFO | b''
2021-09-20 15:43:31,182 | pack.eventselection. | INFO | b'/build2/atnight/localbuilds/nightlies/21.2/AnalysisBase/athena/PhysicsAnalysis/D3PDTools/EventLoop/Root/Driver.cxx:107:exception: could not create output directory /tmp/recast-64ee8b66/eventselection/submitDir'
2021-09-20 15:43:31,182 | pack.eventselection. | INFO | b"terminate called after throwing an instance of 'RCU::ExceptionMsg'"
2021-09-20 15:43:31,182 | pack.eventselection. | INFO | b'what(): /build2/atnight/localbuilds/nightlies/21.2/AnalysisBase/athena/PhysicsAnalysis/D3PDTools/EventLoop/Root/Driver.cxx:107:exception: could not create output directory /tmp/recast-64ee8b66/eventselection/submitDir'
2021-09-20 15:43:31,284 | pack.eventselection. | INFO | b'bash: line 10: 227 Aborted (core dumped) myEventSelection /tmp/recast-64ee8b66/eventselection/submitDir recast_inputs.txt recast_xsecs.txt 30.0'
Dear experts,
I just wanted to try one of the great recast examples with a local workflow.
It appears that this is currently not possible out of the box. The problem seems to be that yadage expects a metadir when it creates a YadageSteering object, but this metadir is currently not set by the local backend. Please find a minimal example below:
(base) ➜ ~ conda create -n recast-tmp python
(base) ➜ ~ conda activate recast-tmp
(recast-tmp) ➜ ~ python -m pip install recast-atlas
(recast-tmp) ➜ ~ pip install yadage
(recast-tmp) ➜ ~ recast run examples/rome --backend local
2023-11-03 11:27:11,370 | packtivity.asyncback | INFO | configured pool size to 10
2023-11-03 11:27:11,421 | recastatlas.subcomma | ERROR | caught exception
Traceback (most recent call last):
File "/Users/dnoll/anaconda3/envs/recast-tmp/lib/python3.12/site-packages/recastatlas/backends/local.py", line 21, in run_workflow
run_workflow(**spec)
File "/Users/dnoll/anaconda3/envs/recast-tmp/lib/python3.12/site-packages/yadage/steering_api.py", line 19, in run_workflow
with steering_ctx(*args, **kwargs):
File "/Users/dnoll/anaconda3/envs/recast-tmp/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/dnoll/anaconda3/envs/recast-tmp/lib/python3.12/site-packages/yadage/steering_api.py", line 87, in steering_ctx
ys = YadageSteering.create(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dnoll/anaconda3/envs/recast-tmp/lib/python3.12/site-packages/yadage/steering_object.py", line 61, in create
prepare_meta(
File "/Users/dnoll/anaconda3/envs/recast-tmp/lib/python3.12/site-packages/yadage/utils.py", line 229, in prepare_meta
if os.path.exists(metadir):
^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen genericpath>", line 19, in exists
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
Is this the expected behavior?
Or am I missing something more fundamental?
Thank you for your comments!
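One conceivable defensive fix, sketched below, would be to fall back to a temporary directory when no metadir is set. Note this is a behavioral sketch only: the real `yadage.utils.prepare_meta` has a different signature, and `prepare_meta` here is a hypothetical stand-in.

```python
import os
import tempfile

def prepare_meta(metadir=None):
    """Hypothetical defensive default for yadage's metadir handling: fall
    back to a fresh temporary directory instead of crashing on None.
    (The real yadage.utils.prepare_meta differs; this is a sketch of the
    desired behavior, not a patch.)"""
    if metadir is None:
        metadir = tempfile.mkdtemp(prefix="yadage-meta-")
    os.makedirs(metadir, exist_ok=True)
    return metadir

print(os.path.isdir(prepare_meta()))  # -> True
```

With a default like this, the `os.path.exists(metadir)` call in the traceback above would never receive `None`.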
distutils is deprecated in Python 3.10+, as noted in the pip v21.3 release notes:

On Python 3.10 or later, the installation scheme backend has been changed to use sysconfig. This is to anticipate the deprecation of distutils in Python 3.10, and its scheduled removal in 3.12. For compatibility considerations, pip installations running on Python 3.9 or lower will continue to use distutils.
To prevent future bugs that would break the tests, we should switch the distutils functionality in
recast-atlas/src/recastatlas/subcommands/catalogue.py
Lines 39 to 43 in f35c780
over to setuptools (I think setuptools is the right replacement).
It is not good that the output folders are root-owned. Some users might not have admin rights on the PCs they use.
Cross-listing with https://gitlab.cern.ch/recast-atlas/susy/ATLAS-CONF-2018-041/-/issues/1.
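A common workaround is to run the container as the invoking user so that files written into mounted volumes are not root-owned. Whether packtivity exposes a hook for injecting extra docker flags is an assumption not confirmed here; this sketch only prints the underlying docker invocation rather than running it.

```shell
# Passing the invoking user's UID:GID via --user makes files created in the
# mounted output directory belong to that user instead of root.
# 'busybox' is just a stand-in image for illustration.
USER_FLAG="--user $(id -u):$(id -g)"
echo docker run --rm ${USER_FLAG} \
  -v "$PWD/recast-output:/work:rw" \
  busybox sh -c 'touch /work/result.txt'
```

With `--user`, the `sudo rm -rf` cleanup seen in scripts below would no longer be necessary.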
As raised by @jghaley in "MJB RECAST not working (atlas/atlas-conf-2018-041)" on ATLAS Talk, the atlas/atlas-conf-2018-041 example workflow fails with an error. A minimal failing example (with recast-atlas v0.1.19) is:
#!/bin/bash
export RECAST_AUTH_USERNAME=xxx
export RECAST_AUTH_PASSWORD=xxx
export RECAST_AUTH_TOKEN=xxx
eval "$(recast auth setup -a ${RECAST_AUTH_USERNAME} -a ${RECAST_AUTH_PASSWORD} -a ${RECAST_AUTH_TOKEN} -a default)"
eval "$(recast auth write --basedir authdir)"
printf '\n# recast catalogue ls\n'
recast catalogue ls
printf '\n# recast catalogue describe atlas/atlas-conf-2018-041\n'
recast catalogue describe atlas/atlas-conf-2018-041
printf '\n# recast catalogue check atlas/atlas-conf-2018-041\n'
recast catalogue check atlas/atlas-conf-2018-041
# run the workflow
TAG_NAME="debug"
if [ -d "recast-${TAG_NAME}" ];then
sudo rm -rf "recast-${TAG_NAME}"
fi
recast run atlas/atlas-conf-2018-041 --backend docker --tag "${TAG_NAME}"
c.f. https://gitlab.cern.ch/recast-atlas/susy/ATLAS-CONF-2018-041/-/issues/1 for log files.
It seems to me that the error lies with the workflow implementation in the repository, or with dependencies that are not properly pinned and so drift and break over time, rather than with recast-atlas itself. That can be determined in the GitLab issue.
If the workflow is determined to be too much effort to fix, then we should remove atlas/atlas-conf-2018-041 from the example RECAST catalogue (atlas_atlas_conf_2018_041.yml).
Trying to run the tests with Python 3.9 in the CI fails with the following
$ python -m pytest tests
============================= test session starts ==============================
platform linux -- Python 3.9.7, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/runner/work/recast-atlas/recast-atlas
collected 3 items
tests/test_cli.py .F. [100%]
=================================== FAILURES ===================================
_____________________________ test_run_hello_world _____________________________
tmpdir = local('/tmp/pytest-of-runner/pytest-0/test_run_hello_world0')
def test_run_hello_world(tmpdir):
with tmpdir.as_cwd():
runner = CliRunner()
test = runner.invoke(
run, ['testing/busyboxtest', '--backend', 'local', '--tag', 'hello']
)
> assert test.exit_code == 0
E assert 1 == 0
E + where 1 = <Result SystemExit(1)>.exit_code
tests/test_cli.py:20: AssertionError
------------------------------ Captured log call -------------------------------
ERROR recastatlas.subcommands.run:run.py:58 caught exception
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.9.7/x64/lib/python3.9/site-packages/recastatlas/subcommands/run.py", line 56, in run
run_sync(name, spec, backend=backend)
File "/opt/hostedtoolcache/Python/3.9.7/x64/lib/python3.9/site-packages/recastatlas/backends/__init__.py", line 76, in run_sync
BACKENDS[backend].run_workflow(name, spec)
KeyError: 'local'
=========================== short test summary info ============================
FAILED tests/test_cli.py::test_run_hello_world - assert 1 == 0
========================= 1 failed, 2 passed in 0.30s ==========================
Error: Process completed with exit code 1.
I'm unclear on why Python 3.8 passes but Python 3.9 fails.
For some reason I cannot run the workflow, probably because I am running it from a different location than the repository.
Add functionality to regularly poll the status of a job running on the REANA cluster, and automatically download the workflow outputs and 'declare the job complete' once it has finished running on REANA.
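The polling part could look roughly like the generic loop below. This is a sketch: `get_status` is a placeholder for whatever status call the REANA client provides (its actual API is not assumed here), and the set of terminal state names is an assumption.

```python
import time

def poll_until_terminal(get_status, interval=30.0, timeout=3600.0, sleep=time.sleep):
    """Poll get_status() until the workflow reaches a terminal state.

    get_status is a zero-argument callable returning a status string; it
    stands in for the real REANA status call. Returns the final status, or
    raises TimeoutError if the deadline passes first.
    """
    terminal = {"finished", "failed", "stopped", "deleted"}
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in terminal:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"workflow still {status!r} after {timeout}s")
        sleep(interval)

# Example with a fake status source that finishes on the third poll:
statuses = iter(["queued", "running", "finished"])
print(poll_until_terminal(lambda: next(statuses), interval=0, sleep=lambda _: None))  # -> finished
```

On a `"finished"` return, the caller would then trigger the output download and mark the job complete.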
We should make sure that the important commands in recast work:
recast backends ls --check
recast submit <analysis id> --backend reana
Some of this seems to fail due to a requirement of 'jsonschema<4.0' that we haven't captured.
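One way to capture the missing constraint would be in the package metadata. The fragment below is a sketch of a setup.cfg extras section following setuptools conventions; the extras name and the `reana-client` entry are assumptions, not taken from the actual recast-atlas configuration.

```ini
[options.extras_require]
reana =
    reana-client
    jsonschema<4.0
```

Pinning it here means `pip install 'recast-atlas[reana]'` would resolve a compatible jsonschema automatically.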
Add a developer.md or something that describes how to do a release.
Originally posted by @matthewfeickert in #57 (comment)
I think we should not allow blank outputs as this hides errors.
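A guard along these lines would turn a silently blank output into a hard error. This is a hypothetical helper for illustration; recast-atlas's actual output handling differs, and the assumption that outputs arrive as a name-to-value mapping is mine.

```python
def require_nonblank_outputs(outputs):
    """Reject blank workflow outputs instead of silently accepting them.

    `outputs` is assumed to be a mapping of output name to value; any value
    that is None or an empty string/list/dict counts as blank.
    """
    blank = sorted(
        name for name, value in outputs.items()
        if value is None or value == "" or value == [] or value == {}
    )
    if blank:
        raise ValueError(f"blank outputs not allowed: {blank}")
    return outputs

print(require_nonblank_outputs({"histogram": "hists.root"}))  # -> {'histogram': 'hists.root'}
```

Failing fast here surfaces upstream errors at the step that produced them instead of at a later consumer.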
Add functionality to:
do this on the fly when the user specifies running the RECAST workflow with the REANA backend (e.g. via a --reana command-line option)