binder-project / binder
reproducible executable environments
Home Page: http://mybinder.readthedocs.io
Hi,
I just came across this great project, but I was disappointed to see that it lacks setuptools support.
I use Jupyter notebooks as code examples for my libraries. Since I assume that the user has installed my library, the imports in my notebooks don't account for the path of the library's source code...
I could modify my notebooks to make them work with binder, but I think it would be very convenient for many people to be able to add a simple "python setup.py install" to the Dockerfile.
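As a rough sketch of what I mean (the copy destination here is an assumption about how binder lays out the repo inside the image):

FROM andrewosh/binder-base
# Copy the repository into the image and install the package,
# so notebook imports resolve without any path hacks.
ADD . /home/main/notebooks
WORKDIR /home/main/notebooks
RUN python setup.py install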
Is there a way to link to a particular notebook hosted on mybinder?
hi,
Currently, whenever I open a link to my project via the mybinder badge, I get 'Proxy target missing'. I need to refresh the page to make it work. Is this intentional?
Using the conda environment.yml mode for this binder: http://mybinder.org/repo/arokem/white-matter-matters, from this file: https://github.com/arokem/white-matter-matters/blob/master/environment.yml
The conda dependencies do get installed (so import vtk works), but the pip dependencies are not there (import dipy throws an error).
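For reference, conda's environment.yml format supports a nested pip: section, so presumably binder just needs to pass those entries through to pip. A minimal example of such a file:

name: binder
dependencies:
  - vtk
  - pip:
    - dipy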
How can one use binder for packages with binary extensions? For example, https://github.com/eendebakpt/oapackage uses binary extensions built with swig (which is not available on binder, I believe). Can binder use one of the pre-built wheel packages? If so, which wheel packages should be made available for binder?
I find binder awesome!
It would be very nice to be able to run a binder not only from a GitHub repo, but also from a GitHub gist.
In the simpler case, it would be enough to consider "single file" gists and run off the base image, but nothing forbids treating a "multi file" gist like a repo and looking for requirements.txt and Dockerfile in it.
Python 3.5.0 is out and Anaconda 2.4.0 has it, so it would be great to have the base image updated to use it.
The following values in binder/settings.py should be moved into a better config file:
ROOT = os.environ["BINDER_HOME"]
DOCKER_HUB_USER = "andrewosh"
REGISTRY_NAME = "gcr.io/generic-notebooks"
LOG_FILE = "/var/log/binder"
LOG_LEVEL = logging.INFO
APP_CRON_PERIOD = 5
Also, there is a similar set of parameters in web/app.py that should be moved into the same config file:
# TODO move settings into a config file
PORT = 8080
NUM_WORKERS = 10
PRELOAD = True
QUEUE_SIZE = 50
ALLOW_ORIGIN = True
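A minimal sketch of what the consolidation could look like (the file name, format, and loader below are hypothetical, not existing binder code):

# binder/config.py -- hypothetical shared settings loader
import json
import logging
import os

DEFAULTS = {
    "root": os.environ.get("BINDER_HOME", "."),
    "docker_hub_user": "andrewosh",
    "registry_name": "gcr.io/generic-notebooks",
    "log_file": "/var/log/binder",
    "log_level": logging.INFO,
    "app_cron_period": 5,
    "port": 8080,
    "num_workers": 10,
    "preload": True,
    "queue_size": 50,
    "allow_origin": True,
}

def load_config(path="/etc/binder/binder.conf"):
    # Merge a JSON config file (if present) over the defaults, so that
    # binder/settings.py and web/app.py can share a single source of truth.
    config = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            config.update(json.load(f))
    return config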
It would be nice to have the freedom to choose between a selection of acceptable Linux base images to use as the starting point for binder.
At the moment the binder-base image, which is based on debian:jessie and Anaconda, is pretty heavy (~3.8 GB).
Using a minimal Docker image like baseimage-docker with miniconda3 or the system Python to run the Jupyter notebook can dramatically reduce the image size to less than 500 MB (including kernels for py2, py3, bash, Julia, and R).
The advantage of using such minimal images is not just the size reduction but also the freedom to run other Linux distributions.
In my use case, I found baseimage-docker, which is based on Ubuntu 14.04 (LTS), optimal because of the more permissive packaging policy in Ubuntu, which makes it much easier to have an extra repository (PPA) with up-to-date packages.
Is it possible to trigger the regeneration of the binder each time a repo gets changes merged into its master branch?
It would be useful if people could build Binders for private repos. This is especially common in educational settings, where repos need to be kept private to prevent students from cheating on each other (or to prevent cheating by students in the following year!)
This should just require letting the web app / build system clone private repos. Let's try to spec out here what that should look like.
cc @rahuldave
I'm pretty sure I managed to bork my container when building with a Dockerfile. It would be helpful for debugging purposes in this case to see the container logs, just like I can see the builder logs while my image is being built.
I don't see a lot of benefit in leaving Jupyter's terminal interface exposed at http://app.mybinder.org/###/terminals/1. I can't think of any immediate harm, but I'm sure someone else might!
I recently wasted some time trying to figure out why I could not import a dependency on binder that I was sure was being installed via requirements.txt.
The problem was that I developed the notebook locally with Python 3, so the notebook on binder loaded the Python 3 kernel, which tried to import the Python 3 dependencies on binder. They were not there, because only the Python 2 versions of the dependencies are installed from the requirements.txt file.
Hi there, great work. I wanted to show some colleagues this possibility (this is a GREAT way to share code with non-Python users), and my test repo works perfectly at home, but not at work. The notebook opens correctly but the connection fails almost immediately. I strongly suspect this is related to the company proxy. Are you aware of this? Is there any workaround? Let me know if I can help, maybe with some testing.
Again, great tool!
I am importing xgboost and have listed it in a requirements.txt file, but I get the following error on the import line when running from the binder:
OSError: /home/main/anaconda/lib/python2.7/site-packages/xgboost/./wrapper/libxgboostwrapper.so: invalid ELF header
I do not get this error on my local machine. This has something to do with the inclusion of compilers, particularly g++. Does binder not have them, or what seems to be the issue here?
In order for thebe-like contexts to use binder, we'll need to be able to set CORS headers and deal with the websocket origin handling within the built containers (ipython notebook configuration).
Then you can start building things like thebe (s/environment/binder, noting that the notebooks don't need to be packaged up in this context).
For the built images with notebook servers, this involves a configuration that includes:
--NotebookApp.allow_origin='*' # Likely some restricted subset
For your tornado application itself, you'd also want to be able to set the origin in configuration (as you're doing now in web/app.py).
How you could do all this in a programmatic way I'm unsure of (configuration works, but it would be nice if this too was API controllable).
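In notebook-config terms, the flag above corresponds to something like this in a jupyter_notebook_config.py baked into the built image (the '*' is just a permissive placeholder; a restricted origin set is what you'd actually want):

c = get_config()
# Allow cross-origin requests (and, with them, websocket connections)
# from the embedding page; restrict this beyond '*' in production.
c.NotebookApp.allow_origin = '*'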
As described by @rgbkrk on the Jupyter mailing list post (https://groups.google.com/forum/#!topic/jupyter/7Q9G5LULc-I) , it's possible to discover the unique app IDs of all running deployments very easily:
main@notebook-server:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
117: eth0: <BROADCAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default
link/ether 02:42:0a:f4:01:3a brd ff:ff:ff:ff:ff:ff
inet 10.244.1.58/24 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:aff:fef4:13a/64 scope link
valid_lft forever preferred_lft forever
main@notebook-server:~$ ip=10.244.1.56; curl -s $ip:8888 | grep tree | sed 's".*/\(\w*\)/tree.*"'$ip':8888/\1"'
10.244.1.56:8888/3321585198
main@notebook-server:~$ ip=10.244.1.57; curl -s $ip:8888 | grep tree | sed 's".*/\(\w*\)/tree.*"'$ip':8888/\1"'
10.244.1.57:8888/2905876898
main@notebook-server:~$ ip=10.244.1.58; curl -s $ip:8888 | grep tree | sed 's".*/\(\w*\)/tree.*"'$ip':8888/\1"'
10.244.1.58:8888/2987902139
Since we don't have any authentication in the notebook, we depend on these IDs being difficult to discover.
Looking into Kubernetes namespace isolation (https://groups.google.com/forum/#!topic/google-containers/bb7HWMz_9iQ), it appears that Kubernetes does not support this directly (though they plan to in the future), but there are a few modifications we could make to add networking access-control policies. Specifically, the Calico Networks team has created a Kubernetes plugin that enables namespace-level network isolation (http://www.projectcalico.org/calico-network-policy-comes-to-kubernetes/), and the openshift-sdn team (https://github.com/openshift/openshift-sdn) has done something similar. Both might be suitable, but alternatives should be investigated.
If a Dockerfile has no extra newline after the FROM line, then when binder parses the FROM line it will end up concatenating the following line, e.g.:
FROM andrewosh/binder-base
MAINTAINER Jessica B. Hamrick <[email protected]>
ends up with the FROM and MAINTAINER lines being concatenated, which then causes the build to fail.
Hi,
congrats on the new design of the "feed" page: it looks great.
Today we encountered an issue building an image from a Dockerfile. The repository is https://github.com/cernphsft/rootbinder .
Two probably undesired things happened:
I would consider 1) low priority, while 2) is probably a little worse.
Cheers,
Danilo
Hi,
currently binder only supports Python 3.3 and Python 2.7. What do you think about switching 3.3 to 3.4?
IMO, 3.4 is more 'mainstream' and newer.
(This is actually from my experience building my Python package with conda. It's likely that Python 3.3 is not as well supported as 3.4 with conda.)
Some graphics libraries require a display/frame buffer to be present. For example, VTK will crash the kernel if run headless. Any chance to get an "xvfb" service?
There should be a service that monitors for new node creation and preloads all existing images onto those nodes.
Currently the binder/log.py file is not used, and so all logging is done through print statements to stdout. Ideally, we should use the appropriate helper functions in binder/log.py to do proper logging (respecting log levels, configurable log files, etc.).
Additionally, the Builder class (in web/builder.py) does very coarse stdout/stderr redirection for the child processes that are launched to do the builds. The output of grandchild processes, which is where most of the meat is, is not recorded in the log files.
This comment also included front-end changes that will make it easier to debug builds that are failing. During a build, the full stdout/stderr should be displayed in a Travis-style output window on the build page, and the failure page can perhaps display a more succinct error message.
Requested by several users. We could update the base image to use Python 3, or maintain two base images (one for Py2 and one for Py3) and add this as an option during binder building.
In the case where someone wants to use a binder to demonstrate the capabilities of a library in that library's own repo, it is unlikely that the example notebooks would be in the root directory of the repository. Besides, the requirements.txt file can have a meaning for the main package itself.
It would be interesting to let the user optionally specify the root directory of the binder.
As described by @rgbkrk in #8, we want to build an API that supports creating pre-launched pools of whitelisted images. This task is going to require maintaining a mapping from deployed binders (with existing app IDs) to some value stating whether that binder is already allocated (do we need to include additional info?).
This mapping will be maintained in the MongoDB database added by #4.
Along with the current support for requirements.txt (simple Python dependencies) and Dockerfile, we should probably add support for the conda environment.yml specification. This has come up in the Jupyter mailing list (https://groups.google.com/forum/#!topic/jupyter/7Q9G5LULc-I) as an appealing way to specify conda dependencies, as well as to enable easier specification of optional kernels.
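For illustration, an environment.yml can pull in both conda packages and an extra kernel in one place (the package names below are just examples):

name: binder
channels:
  - r
dependencies:
  - numpy
  - r-irkernel   # installs an R kernel alongside Python
  - pip:
    - requests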
I'm testing a widget for the jupyter project.
It works fine on a local and a remote installation of the Jupyter notebook, but on binder it hangs when it tries to load the widget.
The notebook is available online and runs on binder; see the repository.
You can replicate the issue by running the CesiumWidget Example.ipynb (it will hang at the second code cell).
These are the logs from the JS console:
https://gist.github.com/epifanio/eb74c4188cfe2be1a9ee
I'd like to install the bash_kernel module but this is failing at the bash_kernel.install step.
See error below.
I already use jupyter for many activities, one of which is creating command-line tutorials (well actually I haven't started, though I have used it for some Docker and OpenStack client stuff from the command-line).
I guess having people run a bash_kernel on binder might set off alarm bells ... how secure/insecure would that be?
No more insecure than using the "!" and "%%bash" magics ...
Anyway, if you don't see the need to block such access please help.
The module installation goes fine:
pip install --no-cache-dir bash_kernel
But the "install" step
python -m bash_kernel.install
produces the following error:
Traceback (most recent call last):
  File "/home/main/anaconda/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/home/main/anaconda/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/main/anaconda/lib/python2.7/site-packages/bash_kernel/install.py", line 5, in <module>
    from jupyter_client.kernelspec import install_kernel_spec
ImportError: No module named jupyter_client.kernelspec
I notice that there isn't a ~/.jupyter directory.
I manually created a ~/.ipython/kernels/bash/kernel.json file from a Python notebook using (A HACK!):
!mkdir -p /home/main/.ipython/kernels/bash/
!echo '{"argv": ["/home/main/anaconda/python", "-m", "bash_kernel", "-f", "{connection_file}"], "codemirror_mode": "shell", "display_name": "Bash", "env": {"PS1": "$"}, "language": "bash"}' | tee /home/main/.ipython/kernels/bash/kernel.json
This enabled bash as a kernel menu option ... but the kernel crashes on startup.
I consider this to be a binder killer app ...
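For what it's worth, the ImportError above suggests the jupyter_client package is simply missing from the image, so installing it before the kernelspec step might get further than the kernel.json hack (an untested guess):

pip install --no-cache-dir jupyter_client bash_kernel
python -m bash_kernel.install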
I'm just disappointed the logo is not Borromean rings :-)
https://en.wikipedia.org/wiki/Borromean_rings
(feel free to close the issue)
In a pull request about adding a requirements.txt to the ipywidgets repo to let me (or anyone) run the examples on binder, @jdfreder asked the following question:
Are either of you familiar enough with the mechanics of Github and binder to know if the binder update could be triggered on merge?
Is there a way to configure binder to respond to GitHub webhooks?
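A minimal sketch of what responding to a GitHub push webhook could look like (the rebuild-by-URL trigger below is an assumption about binder's current behavior, not a documented API):

# Hypothetical webhook receiver: on a push to master, re-request the
# repo's binder URL to trigger a rebuild.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        if event.get("ref") == "refs/heads/master":
            repo = event["repository"]["full_name"]  # e.g. "user/repo"
            urlopen("http://mybinder.org/repo/" + repo)
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8000), HookHandler).serve_forever()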
It might be useful to do a recursive clone here https://github.com/binder-project/binder/blob/master/binder/app.py#L103. This would allow a single git repo to aggregate notebooks from many other repos into a single binder image by including them as git submodules.
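Concretely, the change would amount to something like this (a sketch, assuming the clone is shelled out to git):

import subprocess

# repo_url and clone_path stand in for the values binder already has in app.py.
repo_url = "https://github.com/user/repo"
clone_path = "/tmp/repo"
# --recursive brings the submodule repos' notebooks along with the clone.
subprocess.check_call(["git", "clone", "--recursive", repo_url, clone_path])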
We should add a database (probably using MongoDB) that at a minimum logs binder submissions for each repo, and probably keeps track of a few other things related to builds.
As 👍'd by a number of people in the Gitter (@rahuldave, @mcburton, and @ctb, in yesterday's discussion), there's a variety of use cases for Binder that can't be supported by the public cluster due to resource limitations, and where private cluster support becomes critical. Some examples:
While supporting private clusters on the Google Compute Engine would be very straightforward, as that's what the public cluster is running on, there are a few issues to figure out before this will be possible on a wider variety of cloud providers, or on internal clusters:
- Cluster setup: currently this is binder cluster start, which implicitly calls kube-up, and we assume everything's been configured properly.
- Proxying: currently app.mybinder.org points directly to the proxy lookup service. Another possible way of handling proxying: mybinder.org maintains an index of private clusters, and app.mybinder.org routes requests to the correct proxy lookup service. This is more complicated, but there are definitely benefits to a federated approach.

I'm probably (definitely) overlooking a few things, so any feedback/thoughts would be appreciated!
We've had at least one case where a build failed to finish but also never reported an error. Although this shouldn't happen, we should add a catch-all timeout for builds just in case, as it's frustrating to be stuck with a repo and unable to rebuild.
Hi all, Thanks again for binder - it's great!
Do you guys happen to know of a Dockerfile that is derived from binder-base and launches xvfb? I'm trying to use a graphics package that requires an X server, and can't get xvfb working in Binder (though I do have it working in Travis for the project). Here's my latest attempt at getting this working.
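For reference, this is roughly the shape of Dockerfile I mean (untested; the USER names are an assumption based on the binder-base image layout, and Xvfb itself would still need to be started, e.g. Xvfb :99 &, before the graphics package runs):

FROM andrewosh/binder-base

USER root
# Install the Xvfb virtual framebuffer so headless GUI rendering works
RUN apt-get update && apt-get install -y xvfb && apt-get clean
USER main

# Kernels will render against this virtual display once Xvfb is running
ENV DISPLAY :99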
There should be a loading page displayed as soon as someone hits the mybinder.org/repo/*/* endpoint, while being redirected, until their binder is ready.
This would be perfect for people who use gh-pages or any other weird setup.
Try the repo boltomli/ThinkDSP, which uses pip install -r requirements.txt.
Part of the log below:
2015-11-24 05:09:07,680 INFO: - App: Collecting matplotlib==1.5.0 (from -r requirements.txt (line 14))
2015-11-24 05:09:08,154 INFO: - App: Downloading matplotlib-1.5.0.tar.gz (54.0MB)
2015-11-24 05:09:20,765 INFO: - App: Complete output from command python setup.py egg_info:
2015-11-24 05:09:20,768 INFO: - App: IMPORTANT WARNING:
2015-11-24 05:09:20,771 INFO: - App: pkg-config is not installed.
2015-11-24 05:09:20,776 INFO: - App: matplotlib may not be able to find some of its dependencies
2015-11-24 05:09:20,778 INFO: - App: ============================================================================
2015-11-24 05:09:20,780 INFO: - App: Edit setup.cfg to change the build options
2015-11-24 05:09:20,782 INFO: - App:
2015-11-24 05:09:20,785 INFO: - App: BUILDING MATPLOTLIB
2015-11-24 05:09:20,787 INFO: - App: matplotlib: yes [1.5.0]
2015-11-24 05:09:20,788 INFO: - App: python: yes [2.7.10 |Anaconda 2.3.0 (64-bit)| (default, May
2015-11-24 05:09:20,792 INFO: - App: 28 2015, 17:02:03) [GCC 4.4.7 20120313 (Red Hat
2015-11-24 05:09:20,794 INFO: - App: 4.4.7-1)]]
2015-11-24 05:09:20,798 INFO: - App: platform: yes [linux2]
2015-11-24 05:09:20,800 INFO: - App:
2015-11-24 05:09:20,802 INFO: - App: REQUIRED DEPENDENCIES AND EXTENSIONS
2015-11-24 05:09:20,804 INFO: - App: numpy: yes [version 1.9.2]
2015-11-24 05:09:20,806 INFO: - App: dateutil: yes [using dateutil version 2.4.2]
2015-11-24 05:09:20,808 INFO: - App: pytz: yes [using pytz version 2015.4]
2015-11-24 05:09:20,810 INFO: - App: cycler: yes [cycler was not found. pip will attempt to
2015-11-24 05:09:20,812 INFO: - App: install it after matplotlib.]
2015-11-24 05:09:20,814 INFO: - App: tornado: yes [using tornado version 4.2]
2015-11-24 05:09:20,816 INFO: - App: pyparsing: yes [using pyparsing version 2.0.3]
2015-11-24 05:09:20,819 INFO: - App: libagg: yes [pkg-config information for 'libagg' could not
2015-11-24 05:09:20,825 INFO: - App: be found. Using local copy.]
2015-11-24 05:09:20,833 INFO: - App: freetype: no [The C/C++ header for freetype2 (ft2build.h)
2015-11-24 05:09:20,848 INFO: - App: could not be found. You may need to install the
2015-11-24 05:09:20,851 INFO: - App: development package.]
2015-11-24 05:09:20,853 INFO: - App: png: yes [version 1.6.17]
2015-11-24 05:09:20,871 INFO: - App: qhull: yes [pkg-config information for 'qhull' could not be
2015-11-24 05:09:20,876 INFO: - App: found. Using local copy.]
2015-11-24 05:09:20,888 INFO: - App:
2015-11-24 05:09:20,895 INFO: - App: OPTIONAL SUBPACKAGES
2015-11-24 05:09:20,897 INFO: - App: sample_data: yes [installing]
2015-11-24 05:09:20,908 INFO: - App: toolkits: yes [installing]
2015-11-24 05:09:20,913 INFO: - App: tests: yes [using nose version 1.3.7 / using mock 1.0.1]
2015-11-24 05:09:20,916 INFO: - App: toolkits_tests: yes [using nose version 1.3.7 / using mock 1.0.1]
2015-11-24 05:09:20,918 INFO: - App:
2015-11-24 05:09:20,920 INFO: - App: OPTIONAL BACKEND EXTENSIONS
2015-11-24 05:09:20,928 INFO: - App: macosx: no [Mac OS-X only]
2015-11-24 05:09:20,930 INFO: - App: qt5agg: no [PyQt5 not found]
2015-11-24 05:09:20,937 INFO: - App: qt4agg: yes [installing, Qt: 4.8.6, PyQt: 4.8.6; PySide not
2015-11-24 05:09:20,940 INFO: - App: found]
2015-11-24 05:09:20,942 INFO: - App: gtk3agg: no [Requires pygobject to be installed.]
2015-11-24 05:09:20,944 INFO: - App: gtk3cairo: no [Requires pygobject to be installed.]
2015-11-24 05:09:20,946 INFO: - App: gtkagg: no [Requires pygtk]
2015-11-24 05:09:20,948 INFO: - App: tkagg: yes [installing, version 81008]
2015-11-24 05:09:20,951 INFO: - App: wxagg: no [requires wxPython]
2015-11-24 05:09:20,953 INFO: - App: gtk: no [Requires pygtk]
2015-11-24 05:09:20,955 INFO: - App: agg: yes [installing]
2015-11-24 05:09:20,957 INFO: - App: cairo: yes [installing, pycairo version 1.10.0]
2015-11-24 05:09:20,961 INFO: - App: windowing: no [Microsoft Windows only]
2015-11-24 05:09:20,963 INFO: - App:
2015-11-24 05:09:20,965 INFO: - App: OPTIONAL LATEX DEPENDENCIES
2015-11-24 05:09:20,967 INFO: - App: dvipng: no
2015-11-24 05:09:20,969 INFO: - App: ghostscript: no
2015-11-24 05:09:20,971 INFO: - App: latex: no
2015-11-24 05:09:20,973 INFO: - App: pdftops: no
2015-11-24 05:09:20,975 INFO: - App:
2015-11-24 05:09:20,977 INFO: - App: OPTIONAL PACKAGE DATA
2015-11-24 05:09:20,979 INFO: - App: dlls: no [skipping due to configuration]
2015-11-24 05:09:20,982 INFO: - App:
2015-11-24 05:09:20,984 INFO: - App: ============================================================================
2015-11-24 05:09:20,986 INFO: - App: * The following required packages can not be built:
2015-11-24 05:09:20,989 INFO: - App: * freetype
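The failing piece is the freetype headers, which matplotlib needs to build from source. Presumably (untested) adding the development packages in a Dockerfile on top of the base image would fix it:

FROM andrewosh/binder-base

USER root
# matplotlib 1.5 needs pkg-config and the freetype2 headers to build from source
RUN apt-get update && apt-get install -y pkg-config libfreetype6-dev && apt-get clean
USER main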
Hi,
I submitted a build job yesterday but it has not finished yet. Do you know why? Thanks.
http://mybinder.org/repo/hainm/notebook-pytraj
It stops at 'Preloading app image onto all nodes...'.
At some point during submission, we should use GitHub's API to query the contents of the repo and make sure that the requested assets are present (e.g. there is in fact a requirements.txt file if that was the option selected). We may be able to do this purely client-side within the web app.
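A sketch of the check against GitHub's contents API (the endpoint is real; the surrounding function is hypothetical, and a client-side version would hit the same URL from the browser):

# Return True if the repo contains the given file, using
# GET /repos/:owner/:repo/contents/:path (a 404 means absent).
from urllib.error import HTTPError
from urllib.request import urlopen

def repo_has_file(owner, repo, path):
    url = "https://api.github.com/repos/%s/%s/contents/%s" % (owner, repo, path)
    try:
        urlopen(url)
        return True
    except HTTPError:
        return False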
I see that pip is 7.0.3. Maybe run a pip install --upgrade pip in the container?
Hi,
I have observed that sometimes Binder uses old versions of our container image when spawning a new container. For instance, I just saw the issue in this container:
http://app.mybinder.org/1947057649/notebooks/index.ipynb
for the cernphsft/rootbinder image. That version is quite an old one, and it has been replaced (rebuilt) already several times.
Thank you,
Enric
I have a directory with Python 3 notebooks, and I cannot directly run them even though the dependencies are listed in the corresponding environment.yml. This is not a big problem, since I can just instruct users to install them again, but it's a bit annoying.
This is the repository:
https://github.com/Juanlu001/poliastro/blob/master/environment.yml
And these are the available packages in each environment:
I'm trying to build a relatively simple binder, where the conda env was created with conda create --yes --name seaborn_factorplot_pointplot seaborn pandas ipython, but am running into build failures. At first I thought it was that conda was outdated and not finding all the packages (e.g. appnope), so I wrote a Dockerfile to both update conda and create an environment, but then the Docker build wasn't finding the environment.yml file. It might be as simple as a spelling error, but I double-checked that.
Here's a gist with the three builds I tried and the latest Dockerfile and environment.yml. And here's the repo I'm trying to build.
May be related to this issue about supporting Docker.
As promised during a video chat yesterday, I'm going to post about API design that I've badly wished for after running tmpnb for long enough. Some of this relates to how thebe and other javascript frontends use this type of environment and some of it is about making operations and operational insight easier (as well as automated).
There are three main resources for a REST API here:
- binders: create a new binder, which is a specification/template for a collection of resources
- containers: spawn containers by binder ID|name, and list currently running containers
- pools: pre-allocate and view details about current pools of running binder containers

Some of these operations should have authorization, depending on their usage.
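To make that concrete, the resource layout I have in mind would be roughly (all paths here are hypothetical, just to show the shape):

POST /binders                     -> register a new binder spec/template
GET  /binders                     -> list known binders
POST /binders/{name}/containers   -> spawn a container from a binder
GET  /containers                  -> list currently running containers
POST /pools/{name}                -> pre-allocate a pool for a binder
GET  /pools/{name}                -> view details about a running pool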
As discussed with @rgbkrk in a Jupyter dev meeting, for security reasons we should probably remove the ability to specify a binder with a custom Dockerfile, which we currently support so long as the Dockerfile builds on top of our base image. Although it provides an incredibly flexible deployment model, there are too many potential pitfalls with the freedom it provides. The question is: can we satisfy our various use cases without it?
We currently support requirements.txt and conda environment.yml, which together should cover all Python-related builds. We can also add support for other kernels (e.g. R and Julia), and we can add the appropriate package dependency lists for those languages. For Julia, there appears to be a convention of specifying dependencies in a REQUIRE file. It is less clear what the appropriate convention should be for R. Comments from R or Julia devs would be welcome on this point!
Is this enough, or can folks suggest use cases for Binder where the Dockerfile is a must-have?
Also, to be clear, we will still be using Docker under the hood to build the underlying images! This is just a question of how we expose the configuration options to users.
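For reference, a Julia REQUIRE file is just a list of package names with optional version bounds, one per line, e.g.:

julia 0.4
DataFrames
PyPlot 2.0.0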
hi,
thanks for the nice project. I got the error above when trying to build my package. The notice 'Uh oh! Your Binder failed to build' is not really informative. Where could I find the details of the failure? Thanks.
Hi,
Is it possible to run Julia 0.4? The base image is based on Debian Jessie, and following the example installs Julia 0.3.2.