This Jupyter Notebook Docker instance builds and deploys a dockerized Jupyter Notebook server containing Python 3, numpy/scipy, pandas/seaborn, TensorFlow and Keras, and other essential packages for creating, analyzing, and modeling a wide variety of data sets.
- Original documentation for Jupyter's `docker-compose` example
- Original documentation for Jupyter's `tensorflow-notebook`
Build and run a `jupyter/tensorflow-notebook` container on a VirtualBox VM on your local desktop.
```
# create a Docker Machine-controlled VirtualBox VM
bin/vbox.sh mymachine

# activate the docker machine
eval "$(docker-machine env mymachine)"

# build the notebook image on the machine
notebook/build.sh

# bring up the notebook container
notebook/up.sh
```
To stop and remove the container:
```
notebook/down.sh
```
The basic install maps a `work/` volume for persistent storage in the Docker container. This will be visible in the Jupyter Notebook as `work/`.
I've also added another mount point, `PROJECT_DIR`. It is set in the `notebook/env.sh` file, and its default value maps to a `projects/` folder in the root directory of the Jupyter Notebook server file system. It can be changed to any folder you like by setting it as an environment variable when running `up.sh`:

```
PROJECT_DIR="/home/jovyan/data" notebook/up.sh
```
There is an optional mechanism for creating a Docker machine. Use it at your own risk.
Yes. Set environment variables to specify unique names and ports when running the `up.sh` command.

```
NAME=my-notebook PORT=9000 notebook/up.sh
NAME=your-notebook PORT=9001 notebook/up.sh
```
To stop and remove the containers:
```
NAME=my-notebook notebook/down.sh
NAME=your-notebook notebook/down.sh
```
The `up.sh` script creates a Docker volume named after the notebook container with a `-work` suffix, e.g., `my-notebook-work`.
Yes. Set the `WORK_VOLUME` environment variable to the same value for each notebook.

```
NAME=my-notebook PORT=9000 WORK_VOLUME=our-work notebook/up.sh
NAME=your-notebook PORT=9001 WORK_VOLUME=our-work notebook/up.sh
```
To run the notebook server with a self-signed certificate, pass the `--secure` option to the `up.sh` script. You must also provide a password, which will be used to secure the notebook server. You can specify the password by setting the `PASSWORD` environment variable, or by passing it to the `up.sh` script.

```
PASSWORD=a_secret notebook/up.sh --secure
# or
notebook/up.sh --secure --password a_secret
```
Sure. If you want to secure access to publicly addressable notebook containers, you can generate a free certificate using the Let's Encrypt service.
This example includes the `bin/letsencrypt.sh` script, which runs the `letsencrypt` client to create a full-chain certificate and private key, and stores them in a Docker volume. Note: The script hard-codes several `letsencrypt` options, one of which automatically agrees to the Let's Encrypt Terms of Service.

The following command will create a certificate chain and store it in a Docker volume named `mydomain-secrets`.

```
FQDN=host.mydomain.com [email protected] \
  SECRETS_VOLUME=mydomain-secrets \
  bin/letsencrypt.sh
```
Now run `up.sh` with the `--letsencrypt` option. You must also provide the name of the secrets volume and a password.

```
PASSWORD=a_secret SECRETS_VOLUME=mydomain-secrets notebook/up.sh --letsencrypt
# or
notebook/up.sh --letsencrypt --password a_secret --secrets mydomain-secrets
```
Be aware that Let's Encrypt enforces a fairly low rate limit per domain. You can avoid exhausting your limit by testing against the Let's Encrypt staging servers: set the environment variable `CERT_SERVER=--staging`.

```
FQDN=host.mydomain.com [email protected] \
  CERT_SERVER=--staging \
  bin/letsencrypt.sh
```
Also, be aware that Let's Encrypt certificates are short-lived (90 days). If you need them for a longer period of time, you'll need to set up a cron job to run the renewal steps. (You can reuse the command above.)
Yes, you should be able to deploy to any Docker Machine-controlled host. To make it easier to get up and running, this example includes scripts to provision Docker Machines to VirtualBox and IBM SoftLayer, but more scripts are welcome!
To create a Docker machine using a VirtualBox VM on your local desktop:

```
bin/vbox.sh mymachine
```
To create a Docker machine using a virtual device on IBM SoftLayer:

```
export SOFTLAYER_USER=my_softlayer_username
export SOFTLAYER_API_KEY=my_softlayer_api_key
export SOFTLAYER_DOMAIN=my.domain

# Create virtual device
bin/softlayer.sh myhost

# Add DNS entry (SoftLayer DNS zone must exist for SOFTLAYER_DOMAIN)
bin/sl-dns.sh myhost
```
- Everything in the Scipy Notebook
- TensorFlow and Keras for Python 3.x (without GPU support)
The following command starts a container with the Notebook server listening for HTTP connections on port 8888 with a randomly generated authentication token configured.
```
docker run -it --rm -p 8888:8888 jupyter/tensorflow-notebook
```
Take note of the authentication token included in the notebook startup log messages. Include it in the URL you visit to access the Notebook server or enter it in the Notebook login form.
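If you need the token in a script, it can be scraped from that startup log text. Here's a minimal sketch using a made-up log excerpt; the exact log line format varies by Jupyter version, and the token here is a placeholder, not a real one:

```python
import re

# Illustrative startup log excerpt; the token value is a placeholder,
# not a real Jupyter authentication token.
log = """[I 12:00:00.000 NotebookApp] The Jupyter Notebook is running at:
[I 12:00:00.000 NotebookApp] http://localhost:8888/?token=0123456789abcdef
"""

# Pull the hex token out of the access URL
match = re.search(r"\?token=([0-9a-f]+)", log)
token = match.group(1) if match else None
print(token)  # the placeholder token from the sample log
```

In practice you would feed this the output of `docker logs <container>` rather than a hard-coded string.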
As distributed TensorFlow is still immature, we currently only provide single-machine mode.

```python
import tensorflow as tf

hello = tf.Variable('Hello World!')

sess = tf.Session()
init = tf.global_variables_initializer()

sess.run(init)
sess.run(hello)
```
The Docker container executes a `start-notebook.sh` script by default. The script handles the `NB_UID`, `NB_GID`, and `GRANT_SUDO` features documented in the next section, and then executes `jupyter notebook`.
You can pass Jupyter command line options through the `start-notebook.sh` script when launching the container. For example, to secure the Notebook server with a custom password hashed using `IPython.lib.passwd()` instead of the default token, run the following:

```
docker run -d -p 8888:8888 jupyter/tensorflow-notebook start-notebook.sh --NotebookApp.password='sha1:74ba40f8a388:c913541b7ee99d15d5ed31d4226bf7838f83a50e'
```
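For reference, the hash format above is `sha1:salt:digest`. The following self-contained sketch mirrors the scheme used by `IPython.lib.passwd()` at the time of writing (a twelve-character hex salt, then SHA-1 over passphrase + salt); treat it as an illustration, and prefer calling `IPython.lib.passwd()` itself to generate a real hash:

```python
import hashlib
import secrets

def sha1_passwd(passphrase):
    """Build a 'sha1:salt:digest' string like IPython.lib.passwd() produces."""
    salt = secrets.token_hex(6)  # 12 hex characters
    digest = hashlib.sha1((passphrase + salt).encode("utf-8")).hexdigest()
    return ":".join(("sha1", salt, digest))

print(sha1_passwd("a_secret"))
```

The random salt means each run produces a different hash for the same passphrase; the server re-hashes the submitted password with the stored salt to verify it.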
For example, to set the base URL of the notebook server, run the following:
```
docker run -d -p 8888:8888 jupyter/tensorflow-notebook start-notebook.sh --NotebookApp.base_url=/some/path
```
For example, to disable all authentication mechanisms (not a recommended practice):
```
docker run -d -p 8888:8888 jupyter/tensorflow-notebook start-notebook.sh --NotebookApp.token=''
```
You can sidestep the `start-notebook.sh` script and run your own commands in the container. See the Alternative Commands section later in this document for more information.
You may customize the execution of the Docker container and the command it is running with the following optional arguments.
- `-e GEN_CERT=yes` - Generates a self-signed SSL certificate and configures Jupyter Notebook to use it to accept encrypted HTTPS connections.
- `-e NB_UID=1000` - Specify the uid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the user id.)
- `-e NB_GID=100` - Specify the gid of the `jovyan` user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adjusting the group id.)
- `-e GRANT_SUDO=yes` - Gives the `jovyan` user passwordless `sudo` capability. Useful for installing OS packages. For this option to take effect, you must run the container with `--user root`. (The `start-notebook.sh` script will `su jovyan` after adding `jovyan` to sudoers.) You should only enable `sudo` if you trust the user or if the container is running on an isolated host.
- `-v /some/host/folder/for/work:/home/jovyan/work` - Mounts a host machine directory as a folder in the container. Useful when you want to preserve notebooks and other work even after the container is destroyed. You must grant the within-container notebook user or group (`NB_UID` or `NB_GID`) write access to the host directory (e.g., `sudo chown 1000 /some/host/folder/for/work`).
You may mount SSL key and certificate files into a container and configure Jupyter Notebook to use them to accept HTTPS connections. For example, to mount a host folder containing a `notebook.key` and `notebook.crt`:

```
docker run -d -p 8888:8888 \
  -v /some/host/folder:/etc/ssl/notebook \
  jupyter/tensorflow-notebook start-notebook.sh \
  --NotebookApp.keyfile=/etc/ssl/notebook/notebook.key \
  --NotebookApp.certfile=/etc/ssl/notebook/notebook.crt
```
Alternatively, you may mount a single PEM file containing both the key and certificate. For example:
```
docker run -d -p 8888:8888 \
  -v /some/host/folder/notebook.pem:/etc/ssl/notebook.pem \
  jupyter/tensorflow-notebook start-notebook.sh \
  --NotebookApp.certfile=/etc/ssl/notebook.pem
```
In either case, Jupyter Notebook expects the key and certificate to be a base64 encoded text file. The certificate file or PEM may contain one or more certificates (e.g., server, intermediate, and root).
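As an illustration of that layout, a combined PEM file is just the key and certificate base64 text blocks concatenated. This sketch (with placeholder contents, not real key material) shows the section markers such a file is expected to contain:

```python
import re

# Placeholder PEM contents illustrating a combined key + certificate file.
# Real files hold full base64-encoded data between the markers.
pem = """-----BEGIN PRIVATE KEY-----
MIIB...base64-encoded-key...
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIIC...base64-encoded-cert...
-----END CERTIFICATE-----
"""

# List the section types found in the file
sections = re.findall(r"-----BEGIN ([A-Z ]+)-----", pem)
print(sections)  # → ['PRIVATE KEY', 'CERTIFICATE']
```

A full chain would simply add further `CERTIFICATE` sections (intermediate and root) after the server certificate.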
For additional information about using SSL, see the following:
- The docker-stacks/examples for information about how to use Let's Encrypt certificates when you run these stacks on a publicly visible domain.
- The jupyter_notebook_config.py file for how this Docker image generates a self-signed certificate.
- The Jupyter Notebook documentation for best practices about running a public notebook server in general, most of which are encoded in this image.
The default Python 3.x Conda environment resides in `/opt/conda`.
The commands `jupyter`, `ipython`, `python`, `pip`, and `conda` (among others) are available in this environment. For convenience, you can install additional packages with commands like the following:

```
# install a package into the default (python 3.x) environment
pip install some-package
conda install some-package
```
The `start.sh` script supports the same features as the default `start-notebook.sh` script (e.g., `GRANT_SUDO`), but allows you to specify an arbitrary command to execute. For example, to run the text-based `ipython` console in a container, do the following:

```
docker run -it --rm jupyter/tensorflow-notebook start.sh ipython
```
Or, to run JupyterLab instead of the classic notebook, run the following:
```
docker run -it --rm -p 8888:8888 jupyter/tensorflow-notebook start.sh jupyter lab
```
This script is particularly useful when you derive a new Dockerfile from this image and install additional Jupyter applications with subcommands like `jupyter console`, `jupyter kernelgateway`, etc.
You can bypass the provided scripts and specify an arbitrary start command. If you do, keep in mind that certain features documented above will not function (e.g., `GRANT_SUDO`).