
binder's Issues

Support setuptools (setup.py)

Hi,
I just found this great project, but I was disappointed to see that it lacks setuptools support.
I use Jupyter notebooks as code examples for my libraries. Since I assume the user has installed my library, the imports in my notebooks don't take the path of that code into account...
I could modify my notebooks to make them work with binder, but I think it would be very convenient for many people to add a simple "python setup.py install" to the Dockerfile.
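A sketch of what that build step might look like on binder's side (the function below is hypothetical, not part of binder's actual builder):

```python
import os
import subprocess

def maybe_install_package(repo_dir):
    """If the cloned repo ships a setup.py, install it into the image's
    Python environment so notebook imports resolve (hypothetical step,
    not existing binder code)."""
    setup_py = os.path.join(repo_dir, "setup.py")
    if not os.path.exists(setup_py):
        return False
    subprocess.check_call(["python", "setup.py", "install"], cwd=repo_dir)
    return True
```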

Proxy target missing

hi,

Currently, whenever I open a link to my project from the mybinder badge, I get 'Proxy target missing'. I need to refresh the page to make it work. Is this intentional?

Feature request: please add the ability to run from a gist

I find binder awesome!

It would be very nice to be able to run a binder not only from a GitHub repo, but also from a GitHub gist.

In the simpler case, it would be enough to consider "single file" gists and run off the base image, but nothing prevents treating a "multi file" gist like a repo and looking for a requirements.txt or Dockerfile in it.

Extract parameters into a config file

The following values in binder/settings.py should be moved out of the code and into a config file:

ROOT = os.environ["BINDER_HOME"]
DOCKER_HUB_USER = "andrewosh"
REGISTRY_NAME = "gcr.io/generic-notebooks"

LOG_FILE = "/var/log/binder"
LOG_LEVEL = logging.INFO

APP_CRON_PERIOD = 5

Also, there is a similar set of parameters in web/app.py that should be moved into the same config file:

# TODO move settings into a config file
PORT = 8080
NUM_WORKERS = 10
PRELOAD = True
QUEUE_SIZE = 50
ALLOW_ORIGIN = True
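One way to consolidate both sets of values is a single JSON config file merged over defaults at startup; a minimal sketch (the filename and key names are assumptions, not binder's current layout):

```python
import json
import os

# Defaults mirroring the hard-coded values quoted above from
# binder/settings.py and web/app.py.
DEFAULTS = {
    "docker_hub_user": "andrewosh",
    "registry_name": "gcr.io/generic-notebooks",
    "log_file": "/var/log/binder",
    "app_cron_period": 5,
    "port": 8080,
    "num_workers": 10,
    "preload": True,
    "queue_size": 50,
    "allow_origin": True,
}

def load_settings(path="binder.conf.json"):
    """Return the defaults, overridden by any values found in the
    JSON config file at `path` (hypothetical filename)."""
    settings = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            settings.update(json.load(f))
    return settings
```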

Add more minimal binder-base docker images

It would be nice to have the freedom to choose among a selection of acceptable Linux base images to use as the starting point for binder.

At the moment the binder-base image, which is based on debian:jessie and Anaconda, is pretty heavy (~3.8 GB).

Using a minimal Docker image like baseimage-docker together with miniconda3 or the system Python to run the Jupyter notebook can dramatically reduce the image size to less than 500 MB (including kernels for Py2, Py3, Bash, Julia, and R).

The advantage of such minimal images would be not just the size reduction but also the freedom to run other Linux distributions.

In my use case, I found baseimage-docker, which is based on Ubuntu 14.04 (LTS), optimal because of Ubuntu's more permissive packaging policy, which makes it much easier to add an extra repository (PPA) with up-to-date packages.

Add support for private repos

It would be useful if people could build Binders for private repos. This is especially common in educational settings, where repos need to be kept private to prevent students from cheating off each other (or to prevent cheating by students in the following year!).

This should just require letting the web app / build system clone private repos. Let's try to spec out here what that should look like.

cc @rahuldave

Debugging feature request: container logs on error

I'm pretty sure I managed to bork my container when building with a Dockerfile. It would be helpful for debugging purposes in this case to see the container logs, just like I can see the builder logs while my image is being built.

Jupyter terminal?

I don't see a lot of benefit in leaving Jupyter's terminal interface exposed at http://app.mybinder.org/###/terminals/1. I can't think of any immediate harm, but I'm sure someone else might!

handling python 2 vs 3

I recently wasted some time trying to figure out why I could not import a dependency on binder which I was sure was being installed in the requirements.txt.

The problem was that I developed the notebook locally with Python 3, so the notebook on binder loaded the Python 3 kernel and tried importing its dependencies there. They were not there, because only the Python 2 versions of the dependencies get installed from the requirements.txt file.
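One mitigation would be for binder to inspect the notebook's kernelspec metadata before deciding which environment the requirements should be installed into. A minimal sketch (not existing binder code):

```python
def kernel_python_major(notebook):
    """Given a parsed .ipynb file (a dict), report which major Python
    version its kernelspec targets, so requirements can be installed
    into the matching environment. Defaults to 2, matching binder's
    base image (sketch; the fallback rule is an assumption)."""
    name = notebook.get("metadata", {}).get("kernelspec", {}).get("name", "")
    return 3 if "3" in name else 2
```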

company proxy

Hi there, great work. I wanted to show some colleagues this possibility (this is a GREAT way to share code with non-Python users). My test repo works perfectly at home, but not at work: the notebook opens correctly, but the connection fails almost immediately. I strongly suspect this is related to the company proxy. Are you aware of this? Is there any workaround? Let me know if I can help, maybe with some testing.
Again, great tool!

Error with xgboost import

I am importing xgboost and have listed it in a requirements.txt file, but I get the following error on the import line when running from the binder:

OSError: /home/main/anaconda/lib/python2.7/site-packages/xgboost/./wrapper/libxgboostwrapper.so: invalid ELF header

I do not get this error on my local machine. It may have something to do with the inclusion of compilers, particularly g++. Does the binder not have them, or what seems to be the issue here?

Usage from JavaScript contexts

In order for thebe-like contexts to use binder, we'll need to be able to set CORS headers and deal with the websocket origin handling within the built containers (ipython notebook configuration).

Then you can start building things like thebe on top of it.

For the built images with notebook servers, this involves a configuration that includes:

--NotebookApp.allow_origin='*' # Likely some restricted subset

For your tornado application itself, you'd also want to be able to set the origin on configuration (as you're doing now in web/app.py).

I'm not sure how you could do all this in a programmatic way (configuration works, but it would be nice if this too were API-controllable).
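On the notebook side, the flag above corresponds to a traitlets setting that could be baked into the built image's jupyter_notebook_config.py; a sketch, assuming the wildcard would later be narrowed to a restricted subset:

```python
# jupyter_notebook_config.py inside the built image (sketch)
c = get_config()
# Mirrors --NotebookApp.allow_origin='*' from above; a real deployment
# would likely restrict this to the embedding page's origin.
c.NotebookApp.allow_origin = '*'
```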

Lock down inter-binder communication

As described by @rgbkrk on the Jupyter mailing list post (https://groups.google.com/forum/#!topic/jupyter/7Q9G5LULc-I) , it's possible to discover the unique app IDs of all running deployments very easily:

main@notebook-server:~$ ip address                                                                                                 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default                                                  
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00                                                                          
    inet 127.0.0.1/8 scope host lo                                                                                                 
       valid_lft forever preferred_lft forever                                                                                     
    inet6 ::1/128 scope host                                                                                                       
       valid_lft forever preferred_lft forever                                                                                     
117: eth0: <BROADCAST,UP,LOWER_UP> mtu 1460 qdisc noqueue state UP group default                                                   
    link/ether 02:42:0a:f4:01:3a brd ff:ff:ff:ff:ff:ff                                                                             
    inet 10.244.1.58/24 scope global eth0                                                                                          
       valid_lft forever preferred_lft forever                                                                                     
    inet6 fe80::42:aff:fef4:13a/64 scope link                                                                                      
       valid_lft forever preferred_lft forever                                                                                     
main@notebook-server:~$ ip=10.244.1.56; curl -s $ip:8888 | grep tree | sed 's".*/\(\w*\)/tree.*"'$ip':8888/\1"'                    
10.244.1.56:8888/3321585198                                                                                                        
main@notebook-server:~$ ip=10.244.1.57; curl -s $ip:8888 | grep tree | sed 's".*/\(\w*\)/tree.*"'$ip':8888/\1"'                    
10.244.1.57:8888/2905876898                                                                                                        
main@notebook-server:~$ ip=10.244.1.58; curl -s $ip:8888 | grep tree | sed 's".*/\(\w*\)/tree.*"'$ip':8888/\1"'                    
10.244.1.58:8888/2987902139 

Since we don't have any authentication in the notebook, we depend on these IDs being difficult to discover.

Looking into Kubernetes namespace isolation (https://groups.google.com/forum/#!topic/google-containers/bb7HWMz_9iQ), it appears that Kubernetes does not support this directly (though they plan to in the future), but there are a few modifications we could make to add network access-control policies. Specifically, the Calico Networks team has created a Kubernetes plugin that enables namespace-level network isolation (http://www.projectcalico.org/calico-network-policy-comes-to-kubernetes/), and the openshift-sdn team (https://github.com/openshift/openshift-sdn) has done something similar. Both might be suitable, but alternatives should be investigated.

Improper parsing of FROM line in docker files

If a Dockerfile has no extra newline after the FROM line, when binder parses the FROM line it will end up concatenating the following line, e.g.:

FROM andrewosh/binder-base
MAINTAINER Jessica B. Hamrick <[email protected]>

ends up with the FROM and MAINTAINER lines being concatenated, which then causes the build to fail.
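A line-based parse avoids the problem entirely, whether or not the file ends in a newline; a sketch (binder's real parser may differ):

```python
def parse_from_line(dockerfile_text):
    """Return the base image named on the first FROM line.

    Splitting on newlines first (rather than scanning for whatever
    follows 'FROM' in the raw text) keeps a following MAINTAINER line
    from being concatenated when the file lacks a trailing newline."""
    for line in dockerfile_text.splitlines():
        line = line.strip()
        if line.upper().startswith("FROM "):
            return line.split(None, 1)[1]
    return None
```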

Failed builds: 0 bytes log and replacement of a working image with a broken one

Hi,

congrats on the new design of the "feed" page: it looks great.
Today we encountered an issue building an image from a Dockerfile. The repository is https://github.com/cernphsft/rootbinder .
Two probably undesired things happened:

  1. When downloading the logs from the build page in order to debug the problem, an empty file was obtained.
  2. The broken image replaced the existing but working one.

I would consider 1) low priority while 2) is probably a little worse.

Cheers,
Danilo

python3.4 vs python3.3

Hi,

currently binder only supports Python 3.3 and 2.7. What do you think about switching from 3.3 to 3.4?

IMO, 3.4 is more 'mainstream' and newer.
(This is actually from my experience building my Python package with conda. It's likely that python3.3 is not as well supported as 3.4 with conda).

Improve logging

Currently the binder/log.py file is not used, and so all logging is done through print statements to stdout. Ideally, we should use the appropriate helper functions in binder/log.py to do proper logging (respecting log levels, configurable log files, etc.).

Additionally, the Builder class (in web/builder.py) does very coarse stdout/stderr redirection for the child processes that are launched to do the builds. The output of grandchild processes, which is where most of the meat is, is not recorded in the log files.

This comment also included front-end changes that will make it easier to debug builds that are failing. During a build, the full stdout/stderr should be displayed in a Travis-style output window on the build page, and the failure page can perhaps display a more succinct error message.
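The kind of helper this would replace print statements with might look like the following sketch (names and format are assumptions, loosely mirroring LOG_FILE/LOG_LEVEL in binder/settings.py):

```python
import logging

def get_logger(name="binder", log_file=None, level=logging.INFO):
    """Sketch of a binder/log.py-style helper: a logger honoring a
    configurable level and optional log file instead of bare prints."""
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if not logger.handlers:
        # Log to the configured file when given, else to stderr.
        handler = logging.FileHandler(log_file) if log_file else logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s: %(message)s"))
        logger.addHandler(handler)
    return logger
```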

Add Python3 support

Requested by several users. We could update the base image to use Python 3, or maintain two base images (one for Py2 and one for Py3) and add this as an option during binder building.

Specify specific subdirectory for index.ipynb or starting folder

In the case where someone wants to use a binder to demonstrate the capabilities of a library in that library's own repo, it is unlikely that the example notebooks would be in the root directory of the repository. Besides, the requirements.txt file may have a meaning for the main package itself.

It would be interesting to let the user optionally specify the root directory of the binder.

Add support for container pools

As described by @rgbkrk in #8, we want to build an API that supports creating pre-launched pools of whitelisted images. This task is going to require maintaining a mapping from deployed binders (with existing app IDs) to some value stating whether that binder is already allocated (do we need to include additional info?).

This mapping will be maintained in the MongoDB database added by #4.
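The mapping itself could be as simple as one document per pre-launched deployment, with allocation flipping a flag; a sketch of the shape and the allocation step (field names are hypothetical, not the actual schema):

```python
# One document per pre-launched deployment (hypothetical fields).
example_entry = {
    "image": "andrewosh/binder-base",
    "app_id": "3321585198",
    "allocated": False,  # flipped to True when handed to a user
}

def allocate(pool):
    """Hand out the first unallocated deployment's app ID, or None
    if the pool is exhausted."""
    for entry in pool:
        if not entry["allocated"]:
            entry["allocated"] = True
            return entry["app_id"]
    return None
```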

Can't install bash_kernel => ImportError: No module named jupyter_client.kernelspec

I'd like to install the bash_kernel module but this is failing at the bash_kernel.install step.
See error below.

I already use jupyter for many activities, one of which is creating command-line tutorials (well actually I haven't started, though I have used it for some Docker and OpenStack client stuff from the command-line).

I guess having people run a bash_kernel on binder might set off alarm bells ... how secure/insecure would that be?
No more insecure than using the "!" and "%%bash" magics ...
Anyway, if you don't see the need to block such access please help.

The module installation goes fine:
pip install --no-cache-dir bash_kernel

But the "install" step
python -m bash_kernel.install
produces the following error:

Traceback (most recent call last):
  File "/home/main/anaconda/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/home/main/anaconda/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/main/anaconda/lib/python2.7/site-packages/bash_kernel/install.py", line 5, in 
    from jupyter_client.kernelspec import install_kernel_spec
ImportError: No module named jupyter_client.kernelspec

I notice that there isn't a ~/.jupyter directory.

I manually created a ~/.ipython/kernels/bash/kernel.json file from a Python notebook using (A HACK!):

    !mkdir -p /home/main/.ipython/kernels/bash/
    !echo '{"argv": ["/home/main/anaconda/python", "-m", "bash_kernel", "-f", "{connection_file}"], "codemirror_mode": "shell", "display_name": "Bash", "env": {"PS1": "$"}, "language": "bash"}' | tee  /home/main/.ipython/kernels/bash/kernel.json

This enabled bash as a kernel menu option ... but the kernel crashes on startup.

I consider this to be a binder killer app ...

Use a database

We should add a database (probably using MongoDB) that at a minimum logs binder submissions for each repo, and probably keeps track of a few other things related to builds.

Private cluster support

As 👍'd by a number of people in the Gitter channel (@rahuldave, @mcburton, and @ctb, in yesterday's discussion), there's a variety of use cases for Binder that can't be supported by the public cluster due to resource limitations, and where private cluster support becomes critical. Some examples:

  1. Supporting a large class, where a large number of slots (approaching the number available on the public cluster!) need to be reliably available.
  2. Giving substantial resources to external services, such as giving each deployment access to 10 Spark workers (the public cluster only creates a single Spark worker/master, which is only useful for demo purposes)
  3. Needing finer control over hardware details (node size, storage...), such as if your binder needs to take full advantage of a multi-core environment (@mrocklin)
  4. Testing. It hasn't been much of an issue yet, but right now there's only a single test cluster to run PRs on....'nuff said.

While supporting private clusters on the Google Compute Engine would be very straightforward, as that's what the public cluster is running on, there are a few issues to figure out before this will be possible on a wider variety of cloud providers, or on internal clusters:

  1. GCE-specific features - We've tried to minimize the number of GCE-specific features we use, but we are using Google's container registry for image hosting, and GCE's load balancers for supporting our external Kubernetes services (the proxy lookup/registration services). The image registry wouldn't be an issue if kubernetes/kubernetes#1319 were completed, but spinning up a private registry isn't much trouble either (http://kubernetes.io/v1.0/docs/user-guide/images.html#using-a-private-registry). The load balancers are currently not being used, but as cluster size increases the configurable-http-proxy might face websocket issues (@rgbkrk), and load balancing between multiple proxies might be necessary.
  2. Startup scripts - The Kubernetes launch scripts vary greatly across cloud providers, and Binder might want to add light modifications to these scripts, like we currently do for GCE. This would be very cumbersome to manage as the number of supported cloud providers grows, so one alternative is to pass Kubernetes setup to the user, like:
    1. Download Kubernetes, modify the startup scripts to your liking (setting node size, etc.)
    2. Run the Binder docker container, which on launch will ask for a Kubernetes directory and cloud provider credentials (if necessary)
    3. Call binder cluster start, which implicitly calls kube-up, and we assume everything's been configured properly.
  3. Proxying - Right now, all GitHub badges redirect to app.mybinder.org, which points directly to the proxy lookup service. Other possible ways of handling proxying:
    1. mybinder.org maintains an index of private clusters, and app.mybinder.org routes requests to the correct proxy lookup service. This is more complicated, but there are definite benefits to a federated approach.
    2. All private clusters are completely independent, and when the API server image is launched, it asks for an IP or a URL for the proxy lookup service.

I'm probably (definitely) overlooking a few things, so any feedback/thoughts would be appreciated!

Add timeout to build process

We've had at least one case where a build failed to finish but also never reported an error. Although this shouldn't happen, we should add a catch-all timeout for builds just in case, as it's frustrating to be stuck with a repo and unable to rebuild.
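Since builds run as child processes, a catch-all timeout can be enforced at the subprocess boundary; a Python 3 sketch (the 30-minute default is an assumption, not an existing binder setting):

```python
import subprocess

def run_build(cmd, timeout=1800):
    """Run a build command under a catch-all timeout.

    Returns (returncode, output); returncode is None on timeout so the
    caller can mark the build as failed and allow a rebuild."""
    try:
        done = subprocess.run(cmd, capture_output=True, timeout=timeout)
        return done.returncode, done.stdout
    except subprocess.TimeoutExpired:
        return None, b"build timed out"
```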

example binder-base-derived Dockerfile that launches xvfb?

Hi all, Thanks again for binder - it's great!

Do you guys happen to know of a Dockerfile that is derived from binder-base which launches xvfb? I'm trying to use a graphics package that requires an xserver, and can't get xvfb working in Binder (though I do have it working in travis for the project). Here's my latest attempt at getting this working.

Add a loading page

There should be a loading page displayed as soon as someone hits the mybinder.org/repo/*/* endpoint, shown while they are being redirected, until their binder is ready.

matplotlib pip dependency freetype is missing

Try the repo boltomli/ThinkDSP, use pip install -r requirements.txt

Part of the log below:

2015-11-24 05:09:07,680 INFO: - App: Collecting matplotlib==1.5.0 (from -r requirements.txt (line 14))
2015-11-24 05:09:08,154 INFO: - App: Downloading matplotlib-1.5.0.tar.gz (54.0MB)
2015-11-24 05:09:20,765 INFO: - App: Complete output from command python setup.py egg_info:
2015-11-24 05:09:20,768 INFO: - App: IMPORTANT WARNING:
2015-11-24 05:09:20,771 INFO: - App: pkg-config is not installed.
2015-11-24 05:09:20,776 INFO: - App: matplotlib may not be able to find some of its dependencies
2015-11-24 05:09:20,778 INFO: - App: ============================================================================
2015-11-24 05:09:20,780 INFO: - App: Edit setup.cfg to change the build options
2015-11-24 05:09:20,782 INFO: - App:
2015-11-24 05:09:20,785 INFO: - App: BUILDING MATPLOTLIB
2015-11-24 05:09:20,787 INFO: - App: matplotlib: yes [1.5.0]
2015-11-24 05:09:20,788 INFO: - App: python: yes [2.7.10 |Anaconda 2.3.0 (64-bit)| (default, May
2015-11-24 05:09:20,792 INFO: - App: 28 2015, 17:02:03) [GCC 4.4.7 20120313 (Red Hat
2015-11-24 05:09:20,794 INFO: - App: 4.4.7-1)]]
2015-11-24 05:09:20,798 INFO: - App: platform: yes [linux2]
2015-11-24 05:09:20,800 INFO: - App:
2015-11-24 05:09:20,802 INFO: - App: REQUIRED DEPENDENCIES AND EXTENSIONS
2015-11-24 05:09:20,804 INFO: - App: numpy: yes [version 1.9.2]
2015-11-24 05:09:20,806 INFO: - App: dateutil: yes [using dateutil version 2.4.2]
2015-11-24 05:09:20,808 INFO: - App: pytz: yes [using pytz version 2015.4]
2015-11-24 05:09:20,810 INFO: - App: cycler: yes [cycler was not found. pip will attempt to
2015-11-24 05:09:20,812 INFO: - App: install it after matplotlib.]
2015-11-24 05:09:20,814 INFO: - App: tornado: yes [using tornado version 4.2]
2015-11-24 05:09:20,816 INFO: - App: pyparsing: yes [using pyparsing version 2.0.3]
2015-11-24 05:09:20,819 INFO: - App: libagg: yes [pkg-config information for 'libagg' could not
2015-11-24 05:09:20,825 INFO: - App: be found. Using local copy.]
2015-11-24 05:09:20,833 INFO: - App: freetype: no [The C/C++ header for freetype2 (ft2build.h)
2015-11-24 05:09:20,848 INFO: - App: could not be found. You may need to install the
2015-11-24 05:09:20,851 INFO: - App: development package.]
2015-11-24 05:09:20,853 INFO: - App: png: yes [version 1.6.17]
2015-11-24 05:09:20,871 INFO: - App: qhull: yes [pkg-config information for 'qhull' could not be
2015-11-24 05:09:20,876 INFO: - App: found. Using local copy.]
2015-11-24 05:09:20,888 INFO: - App:
2015-11-24 05:09:20,895 INFO: - App: OPTIONAL SUBPACKAGES
2015-11-24 05:09:20,897 INFO: - App: sample_data: yes [installing]
2015-11-24 05:09:20,908 INFO: - App: toolkits: yes [installing]
2015-11-24 05:09:20,913 INFO: - App: tests: yes [using nose version 1.3.7 / using mock 1.0.1]
2015-11-24 05:09:20,916 INFO: - App: toolkits_tests: yes [using nose version 1.3.7 / using mock 1.0.1]
2015-11-24 05:09:20,918 INFO: - App:
2015-11-24 05:09:20,920 INFO: - App: OPTIONAL BACKEND EXTENSIONS
2015-11-24 05:09:20,928 INFO: - App: macosx: no [Mac OS-X only]
2015-11-24 05:09:20,930 INFO: - App: qt5agg: no [PyQt5 not found]
2015-11-24 05:09:20,937 INFO: - App: qt4agg: yes [installing, Qt: 4.8.6, PyQt: 4.8.6; PySide not
2015-11-24 05:09:20,940 INFO: - App: found]
2015-11-24 05:09:20,942 INFO: - App: gtk3agg: no [Requires pygobject to be installed.]
2015-11-24 05:09:20,944 INFO: - App: gtk3cairo: no [Requires pygobject to be installed.]
2015-11-24 05:09:20,946 INFO: - App: gtkagg: no [Requires pygtk]
2015-11-24 05:09:20,948 INFO: - App: tkagg: yes [installing, version 81008]
2015-11-24 05:09:20,951 INFO: - App: wxagg: no [requires wxPython]
2015-11-24 05:09:20,953 INFO: - App: gtk: no [Requires pygtk]
2015-11-24 05:09:20,955 INFO: - App: agg: yes [installing]
2015-11-24 05:09:20,957 INFO: - App: cairo: yes [installing, pycairo version 1.10.0]
2015-11-24 05:09:20,961 INFO: - App: windowing: no [Microsoft Windows only]
2015-11-24 05:09:20,963 INFO: - App:
2015-11-24 05:09:20,965 INFO: - App: OPTIONAL LATEX DEPENDENCIES
2015-11-24 05:09:20,967 INFO: - App: dvipng: no
2015-11-24 05:09:20,969 INFO: - App: ghostscript: no
2015-11-24 05:09:20,971 INFO: - App: latex: no
2015-11-24 05:09:20,973 INFO: - App: pdftops: no
2015-11-24 05:09:20,975 INFO: - App:
2015-11-24 05:09:20,977 INFO: - App: OPTIONAL PACKAGE DATA
2015-11-24 05:09:20,979 INFO: - App: dlls: no [skipping due to configuration]
2015-11-24 05:09:20,982 INFO: - App:
2015-11-24 05:09:20,984 INFO: - App: ============================================================================
2015-11-24 05:09:20,986 INFO: - App: * The following required packages can not be built:
2015-11-24 05:09:20,989 INFO: - App: * freetype

Check repo contents

At some point during submission, we should use GitHub's API to query the contents of the repo and make sure that the requested assets are present (e.g. there is in fact a requirements.txt file if that was the option selected). We may be able to do this purely client-side within the web app.
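GitHub's contents API (GET /repos/{owner}/{repo}/contents/) returns a JSON array of the root entries, so the check itself reduces to a pure function that could run either server-side or client-side; a sketch:

```python
def missing_assets(contents_listing, required=("requirements.txt",)):
    """Given the JSON array returned by GitHub's contents API for the
    repo root, return which of the requested assets are absent
    (sketch; the `required` default is illustrative)."""
    names = {entry["name"] for entry in contents_listing}
    return [asset for asset in required if asset not in names]
```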

Packages from environment.yml not installed into python3 env

I have a directory with Python 3 notebooks, and I cannot directly run them even though the dependencies are listed in the corresponding environment.yml. This is not a big problem, since I can just instruct users to install them again, but it's a bit annoying.

This is the repository:

https://github.com/Juanlu001/poliastro/blob/master/environment.yml

And these are the available packages in each environment:

https://gist.github.com/Juanlu001/0abb2f5a09d8b549da08

Building from environment.yml fails - possibly outdated `conda`

I'm trying to build a relatively simple binder, where the conda env was created with conda create --yes --name seaborn_factorplot_pointplot seaborn pandas ipython, but I am running into build failures. At first I thought conda was outdated and not finding all the packages (e.g. appnope), so I wrote a Dockerfile to both update conda and create an environment, but then the Docker build wasn't finding the environment.yml file. It might be as simple as a spelling error, but I double-checked that.

Here's a gist with the three builds I tried and the latest Dockerfile and environment.yml. And here's the repo I'm trying to build.

May be related to this issue about supporting Docker.

API Design for binder resources

As promised during a video chat yesterday, I'm going to post about API design that I've badly wished for after running tmpnb for long enough. Some of this relates to how thebe and other javascript frontends use this type of environment and some of it is about making operations and operational insight easier (as well as automated).

There are three main resources for a REST API here:

  • binders - create a new binder which is a specification/template for a collection of resources
  • containers - spawn containers by binderID|Name, list currently running containers
  • pools - pre-allocate and view details about current pools of running binder containers

Some of these operations should have authorization, depending on their usage.
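A sketch of how those three resources might map onto routes, purely illustrative (paths, verbs, and the auth rule are assumptions, not a committed API):

```python
# Hypothetical REST surface for the three resources described above.
ROUTES = {
    ("POST", "/binders"): "register a binder spec/template",
    ("GET", "/binders"): "list known binders",
    ("POST", "/binders/{name}/containers"): "spawn a container from a binder",
    ("GET", "/containers"): "list currently running containers",
    ("POST", "/pools"): "pre-allocate a pool for a binder",
    ("GET", "/pools/{name}"): "view details of a running pool",
}

def authorized_methods(routes):
    """Mutating operations are the ones that would need authorization."""
    return sorted({method for (method, _path) in routes if method != "GET"})
```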

Remove Dockerfile option

As discussed with @rgbkrk in a Jupyter dev meeting, for security reasons we should probably remove the ability to specify a binder with a custom Dockerfile, which we currently support so long as the Dockerfile builds on top of our base image. Although it provides an incredibly flexible deployment model, there are too many potential pitfalls with the freedom it provides. The question is, can we satisfy our various use cases without it?

We currently support requirements.txt and conda environment.yml, which together should cover all Python-related builds. We can also add support for other kernels (e.g. R and Julia), along with the appropriate package dependency lists for those languages. For Julia, there appears to be a convention of specifying dependencies in a REQUIRE file. It's less clear what the appropriate convention should be for R. Comments from R or Julia devs would be welcome on this point!

Is this enough, or can folks suggest use cases for Binder where the Dockerfile is a must have?

Also, to be clear, we will still be using Docker under the hood to build the underlying images! This is just a question of how we expose the configuration options to users.

cc @andrewosh @arokem

Uh oh! Your Binder failed to build.

hi,

thanks for the nice project. I got the error above when trying to build my package. The notice 'Uh oh! Your Binder failed to build' is not really informative. Where could I find the details of the failure? Thanks.

Julia 0.4

Is it possible to run Julia 0.4? The base image is based on Debian Jessie, and following the example installs Julia 0.3.2.
