
future.BatchJobs's Introduction

future.BatchJobs: A Future API for Parallel and Distributed Processing using BatchJobs

Life cycle: superseded

NOTE: The BatchJobs package is deprecated in favor of the batchtools package. Because of this, it is recommended to use the future.batchtools package instead of this package. This future.BatchJobs package is formally deprecated and is archived on CRAN as of 2021-01-08.

Introduction

The future package provides a generic API for using futures in R. A future is a simple yet powerful mechanism to evaluate an R expression and retrieve its value at some point in time. Futures can be resolved in many different ways depending on which strategy is used. There are various types of synchronous and asynchronous futures to choose from in the future package.
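
To make this concrete, here is a minimal sketch of the explicit future API of the future package: future() creates a future, and value() blocks until the future is resolved and then returns its value.

library("future")
f <- future({ Sys.sleep(1); 42 })  # start evaluating the expression
v <- value(f)                      # block until resolved, then fetch
print(v)                          # [1] 42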

This package, future.BatchJobs, provides a type of futures that utilizes the BatchJobs package. This means that any type of backend that the BatchJobs package supports can be used as a future. More specifically, future.BatchJobs will allow you or users of your package to leverage the compute power of high-performance computing (HPC) clusters via a simple switch in settings - without having to change any code at all.

For instance, if BatchJobs is properly configured, the following two expressions, for futures x and y, will be processed on two different compute nodes:

> library("future.BatchJobs")
> plan(batchjobs_custom)
>
> x %<-% { Sys.sleep(5); 3.14 }
> y %<-% { Sys.sleep(5); 2.71 }
> x + y
[1] 5.85

This is obviously a toy example to illustrate what futures look like and how to work with them.

A more realistic example comes from the field of cancer research, where very large FASTQ data files, holding large numbers of short DNA sequence reads, are produced. The first step toward a biological interpretation of these data is to align the reads in each sample (one FASTQ file) to the human genome. To speed this up, we can have each file processed by a separate compute node, and on each node we can use 24 parallel processes such that each process aligns a separate chromosome. Here is an outline of how this nested parallelism could be implemented using futures.

library("future")
library("listenv")
## The first level of futures should be submitted to the
## cluster using BatchJobs.  The second level of futures
## should use multiprocessing, where the number of
## parallel processes is automatically decided based on
## what the cluster grants to each compute node.
plan(list(batchjobs_custom, multisession))

## Find all samples (one FASTQ file per sample)
fqs <- dir(pattern="[.]fastq$")

## The aligned results are stored in BAM files
bams <- listenv()

## For all samples (FASTQ files) ...
for (ss in seq_along(fqs)) {
  fq <- fqs[ss]

  ## ... use futures to align them ...
  ## ... use futures to align them ...
  bams[[ss]] %<-% {
    bams_ss <- listenv()
    ## ... and for each FASTQ file use a second layer
    ## of futures to align the individual chromosomes
    for (cc in 1:24) {
      bams_ss[[cc]] %<-% htseq::align(fq, chr=cc)
    }
    ## Resolve the "chromosome" futures and return as a list
    as.list(bams_ss)
  }
}
## Resolve the "sample" futures and return as a list
bams <- as.list(bams)

Note that a user who does not have access to a cluster could use the same script to process samples sequentially and chromosomes in parallel on a single machine using:

plan(list(sequential, multisession))

or samples in parallel and chromosomes sequentially using:

plan(list(multisession, sequential))

For an introduction as well as full details on how to use futures, please consult the package vignettes of the future package.

Choosing BatchJobs backend

The future.BatchJobs package implements a generic future wrapper for all BatchJobs backends. Below are the most common types of BatchJobs backends.

| Backend | Description | Alternative in future package |
|---------|-------------|-------------------------------|
| generic: | | |
| batchjobs_custom | Uses custom BatchJobs configuration script files, e.g. .BatchJobs.R | N/A |
| predefined: | | |
| batchjobs_torque | Futures are evaluated via a TORQUE / PBS job scheduler | N/A |
| batchjobs_slurm | Futures are evaluated via a Slurm job scheduler | N/A |
| batchjobs_sge | Futures are evaluated via a Sun/Oracle Grid Engine (SGE) job scheduler | N/A |
| batchjobs_lsf | Futures are evaluated via a Load Sharing Facility (LSF) job scheduler | N/A |
| batchjobs_openlava | Futures are evaluated via an OpenLava job scheduler | N/A |
| batchjobs_interactive | Synchronous evaluation in the calling R environment | plan(transparent) |
| batchjobs_local | Synchronous evaluation in a separate R process (on the current machine) | plan(cluster, workers="localhost") |

In addition to the above, there is also batchjobs_multicore (which on Windows and Solaris falls back to batchjobs_local), which runs BatchJobs tasks asynchronously in background R sessions (sic!) on the current machine. We advise against using it; use multisession of the future package instead. For details, see help("batchjobs_multicore").
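
For example, a minimal sketch of the recommended alternative (the worker count is arbitrary, for illustration only):

library("future")
plan(multisession, workers=4)  # background R sessions on the current machine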

Examples

Below are two examples illustrating how to use batchjobs_custom and batchjobs_torque to configure the BatchJobs backend. For further details and examples on how to configure BatchJobs, see the BatchJobs configuration wiki page.

Example: A .BatchJobs.R file using local BatchJobs

The most general way of configuring BatchJobs is via a .BatchJobs.R file. This file should be located in the current directory or in the user's home directory. For example, as an alternative to batchjobs_local, we can manually configure local BatchJobs futures using a .BatchJobs.R file that contains

cluster.functions <- makeClusterFunctionsLocal()

This will then be found and used when specifying

> plan(batchjobs_custom)

To specify this BatchJobs configuration file explicitly, one can use

> plan(batchjobs_custom, pathname="./.BatchJobs.R")

This follows the naming convention set up by the BatchJobs package.

Example: A .BatchJobs.*.tmpl template file for TORQUE / PBS

To configure BatchJobs for job schedulers, we need to set up a template file that is used to generate the job script submitted to the scheduler. Here is what a template file for TORQUE / PBS may look like:

## Job name:
#PBS -N <%= job.name %>

## Merge standard error and output:
#PBS -j oe

## Resource parameters:
<% for (name in names(resources)) { %>
#PBS -l <%= name %>=<%= resources[[name]] %>
<% } %>

## Run R:
R CMD BATCH --no-save --no-restore "<%= rscript %>" /dev/stdout

If this template is saved to file .BatchJobs.torque.tmpl in the working directory or the user's home directory, then it will be automatically located and loaded when doing:

> plan(batchjobs_torque)

Resource parameters can be specified via argument resources, which should be a named list and is passed as is to the template file. For example, to request that each job is allotted 12 cores (on a single machine) and up to 5 GiB of memory, use:

> plan(batchjobs_torque, resources=list(nodes="1:ppn=12", vmem="5gb"))

To specify the resources argument at the same time as using nested future strategies, one can use tweak() to tweak the default arguments. For instance,

plan(list(
  tweak(batchjobs_torque, resources=list(nodes="1:ppn=12", vmem="5gb")),
  multisession
))

causes the first level of futures to be submitted via the TORQUE job scheduler, requesting 12 cores and 5 GiB of memory per job. The second level of futures will be evaluated via multiprocessing, using the 12 cores given to each job by the scheduler.
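
As a side note, multisession decides its number of workers via availableCores(), which (in recent versions of the future package) recognizes scheduler environment variables such as PBS_NUM_PPN, so each job should automatically use the cores it was granted. A quick sanity check on a compute node:

future::availableCores()  # e.g. 12 when the scheduler grants 12 cores per job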

A similar filename format is used for the other types of job schedulers supported. For instance, for Slurm the template file should be named .BatchJobs.slurm.tmpl in order for

> plan(batchjobs_slurm)

to locate the file automatically. To specify this template file explicitly, use argument pathname, e.g.

> plan(batchjobs_slurm, pathname="./.BatchJobs.slurm.tmpl")

Note that it is still possible to use a .BatchJobs.R and load the template file using a standard BatchJobs approach for maximum control. For further details and examples on how to configure BatchJobs per se, see the BatchJobs configuration wiki page.
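
For instance, a minimal sketch of such a .BatchJobs.R file for TORQUE (the template pathname here is an assumption; adjust it to your setup):

cluster.functions <- makeClusterFunctionsTorque("~/.BatchJobs.torque.tmpl")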

Demos

The future package provides a demo using futures for calculating a set of Mandelbrot planes. The demo does not assume anything about what type of futures are used. The user has full control of how futures are evaluated. For instance, to use local BatchJobs futures, run the demo as:

library("future.BatchJobs")
plan(batchjobs_local)
demo("mandelbrot", package="future", ask=FALSE)

Installation

R package future.BatchJobs is only available via GitHub and can be installed in R as:

remotes::install_github("HenrikBengtsson/future.BatchJobs", ref="master")

Pre-release version

To install the pre-release version that is available in Git branch develop on GitHub, use:

remotes::install_github("HenrikBengtsson/future.BatchJobs", ref="develop")

This will install the package from source.

Contributions

This Git repository uses the Git Flow branching model (the git flow extension is useful for this). The develop branch contains the latest contributions and other code that will appear in the next release, and the master branch contains the code of the latest release.

Contributing to this package is easy. Just send a pull request. When you send your PR, make sure develop is the destination branch on the future.BatchJobs repository. Your PR should pass R CMD check --as-cran, which will also be checked by Travis CI and AppVeyor CI when the PR is submitted.

We abide by the Contributor Covenant Code of Conduct.

Software status

R CMD check is run on GitHub Actions (multiple platforms), Travis CI (Linux & macOS), and AppVeyor CI (Windows); test coverage is tracked as well.


future.BatchJobs's Issues

backend(<pathname>)

Make it possible to specify a BatchJobs backend by specifying the pathname to its configuration script, e.g.

backend('~/.BatchJobs.R')
backend('~/.BatchJobs,cluster.R')
backend('configs/.BatchJobs,cluster.R')

and so on. Aliases should work automagically, e.g.

backend(cluster='configs/.BatchJobs,cluster.R')
backend('cluster')

Add LazyFuture using delayedAssign() - just for demonstration

Add LazyFuture using delayedAssign() - just for demonstration. This is basically already in place in the old code, but let's write it up such that there could be a very bare-bones Future API package, e.g. future with LazyFuture as the built-in "future", which can be used in practice but also serve as a reference for other developers.
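
A minimal sketch of the underlying mechanism, using only base R's delayedAssign():

a <- 1
delayedAssign("f", { 42 * a })  # 'f' is a promise; nothing evaluated yet
a <- 2
f                               # forces evaluation now, using a == 2: [1] 84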

`%<=%` (or delayedAsyncAssign()) assigns AsyncTask to the wrong environment

%<=% (or delayedAsyncAssign()) assigns AsyncTask to the wrong environment, causing them to be overwritten. For example:

> x <- listenv() ## same with new.env()
> y <- listenv() ## same with new.env()
> x$a %<=% 1
> y$a %<=% 2  ## Same task name!

> stopifnot(!identical(inspect(x$a), inspect(y$a)))
Error: !identical(inspect(x$a), inspect(y$a)) is not TRUE

> inspect(x$a)
AsyncTask:
Expression:
  [1] 2
Status: 'done', 'started', 'submitted'
Backend:
Job registry:  async13548092
  Number of jobs:  1
  Files dir: x:/async/.async/async13548092-files
  Work dir: x:/async
  Multiple result files: FALSE
  Seed: 250509008
  Required packages: BatchJobs

> inspect(y$a)
AsyncTask:
Expression:
  [1] 2
Status: 'done', 'started', 'submitted'
Backend:
Job registry:  async13548092
  Number of jobs:  1
  Files dir: x:/async/.async/async13548092-files
  Work dir: x:/async
  Multiple result files: FALSE
  Seed: 250509008
  Required packages: BatchJobs

but the asynchronous values are correct in the end:

> x$a
[1] 1
> y$a
[1] 2

GLOBALS: findGlobals() incorrectly detects 'x' as a global in subset(df, x < 3)

findGlobals() incorrectly detects the data-frame column 'x' as a global in subset(df, x < 3), e.g.

> library(async)
> df <- data.frame(x=1:5, y=(1:5)^2)
> small %<=% { subset(df, x < 3) }
Error in FUN("x"[[1L]], ...) : object 'x' not found

Workaround
The workaround is not to use `subset()`, e.g.
```r
> library(async)
> df <- data.frame(x=1:5, y=(1:5)^2)
> small %<=% { keep <- (df$x < 3); df[keep,] }
> small
  x y
1 1 1
2 2 4
```

Add argument 'backend' to tempRegistry()

Add argument 'backend' to tempRegistry() to specify what type of BatchJobs backend to use. Also add it to BatchJobsAsyncTask() and pass it down to tempRegistry(). Finally, add '...' to async() and pass it down.

This should allow for

plan(async, backend="multicore")
...

Create a separate 'globals' package?

The problem of finding global / unknown variables is independent of the 'async' package, and a solution would be useful elsewhere / for other packages.

Debugging: Record BatchJobs cluster functions in task object

To help debugging/troubleshooting, record the BatchJobs cluster functions in the task object. It looks like these are not recorded in the Registry, but only come into play when submitJobs() is called.

So, these should be recorded by run(), e.g.

conf <- BatchJobs:::getBatchJobsConf()
cf <- BatchJobs:::getClusterFunctions(conf)

GLOBALS: Add protection against exporting too large objects by mistake

Add protection against exporting too large objects by mistake. Make it possible to specify warn and error limits via options, e.g.

maxSizeOfGlobals <- getOption("async::maxSizeOfGlobals", 100e6)  # 100 MB
if (totalExportSize > maxSizeOfGlobals) {
  throw("The total size of all objects needed to be exported by the asynchronous
         expression exceeds the maximum allowed size (option 'maxSizeOfGlobals'):
         %g bytes > %g bytes", totalExportSize, maxSizeOfGlobals)
}
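
A sketch of how totalExportSize might be computed, assuming the globals to export are held in a named list:

globals <- list(a = rnorm(1e6), b = letters)
totalExportSize <- sum(vapply(globals,
                              function(g) as.numeric(object.size(g)),
                              numeric(1)))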

Add multicore future based on the 'parallel' package

Add multicore future based on the 'parallel' package, e.g.

multicore <- function (expr, envir=parent.frame(), substitute=TRUE, ...) {
    if (substitute) expr <- substitute(expr)
    future <- MulticoreFuture(expr=expr, envir=envir, substitute=FALSE)
    future <- run(future)
    future
}

where run() for MulticoreFuture uses p <- parallel::mcparallel() and value() uses parallel::mccollect(p, wait=TRUE).

From example(mcparallel, package="parallel"):

p <- mcparallel(1:10)
q <- mcparallel(1:20)
# wait for both jobs to finish and collect all results
res <- mccollect(list(p, q))

Maybe this will be so simple that it could be part of the future package itself?

AsyncListEnv extending listenv

Extend the existing listenv class to an AsyncListEnv that holds AsyncTask elements. Implement methods status(), finished(), value(), error(), etc. that return a named vector of the same length as the list.

For example:

x <- AsyncListEnv()
for (ii in 1:3) {
  name <- letters[ii]
  x[[name]] %<=% { ii^2 }
}

such that for instance:

> length(x)
[1] 3

> names(x)
[1] "a" "b" "c"

> finished(x)
    a     b     c
FALSE  TRUE FALSE

> error(x)
    a     b     c
FALSE FALSE FALSE

> value(x[-1])
b c
4 9

LIMITATION: It is not possible to export operators, replacement functions etc.

Due to BatchJobs limitations, it is currently not possible to export operators, replacement functions etc. For example:

batchExport(reg, li=list("names<-"=NA))
Error in batchExport(reg, li=list("names<-"=NA))
  Assertion on 'li' failed: Vector must be named according to R's variable naming rules

and

batchExport(reg, li=list("%>%"=NA))
Error in batchExport(reg, li = list(`%>%` = NA)) :
  Assertion on 'li' failed: Vector must be named according to R's variable naming rules

This will be solved as soon as BatchJobs Issue #93 is resolved.

Separate .async/ subdirectory for each R session

Currently the async package puts all BatchJobs directories directly under ./async/. If one runs several R sessions, each utilizing async, it is very hard to know which BatchJobs directory belongs to which R session. Also, if one of the R sessions is aborted, it will fail to garbage collect, meaning all of its BatchJobs directories will remain.

Suggestion: Have each R session use a unique ./async/async_<unique_id>/ directory.
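
A sketch of how such a unique per-session directory could be created (the PID-plus-timestamp naming scheme is an assumption, for illustration only):

session_id <- sprintf("async_%d_%s", Sys.getpid(),
                      format(Sys.time(), "%Y%m%d%H%M%S"))
workdir <- file.path("async", session_id)
dir.create(workdir, recursive=TRUE, showWarnings=FALSE)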

CLEANUP: Move BatchJobs work directories to ./async/<id>/

Currently the async BatchJobs work directories are placed in the current directory, e.g. ./async983970872-files. This tends to clutter up the working directory quite a bit. Instead, move them to a subdirectory, e.g. ./.async/983970872-files/.

backend() needs to be called explicitly to override the default setup when the package is loaded

Problem

The backend() function needs to be called explicitly to override the default setup when the package is loaded. It is not possible to override it using the plan(batchjobs, backend=...) function.

Example

> library("async")

## Explicitly set backend
> backend("multicore=4")
[1] "multicore=4"
> plan(batchjobs)

> Sys.getpid()
[1] 406326

> pid %<=% { Sys.getpid() }
> pid
[1] 407866
> pid %<=% { Sys.getpid() }
> pid
[1] 408592

## Try to change backend via plan()
> plan(batchjobs, backend="interactive")
> pid %<=% { Sys.getpid() }
> pid
[1] 410531
## Hmm... a different one than the main R process
> stopifnot(pid == Sys.getpid())
Error: pid == Sys.getpid() is not TRUE

Troubleshooting

This is because internally run() for BatchJobsAsyncTask calls BatchJobs::submitJobs(), which in turn uses conf <- BatchJobs:::getBatchJobsConf(). The latter is set by our backend(), which currently is:

> backend()
[1] "multicore=4"

We are currently temporarily setting backend(backend) when a BatchJobsAsyncTask future is created, or more precisely when the BatchJobs registry is created using async:::tempRegistry(). We need to do this in run() instead.
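
A sketch of what doing this in run() could look like (the fields task$backend and task$reg are assumptions, for illustration only):

run.BatchJobsAsyncTask <- function(task, ...) {
  obackend <- backend()        # remember the current backend
  on.exit(backend(obackend))   # restore it when run() exits
  backend(task$backend)        # temporarily switch backend
  BatchJobs::submitJobs(task$reg)
  invisible(task)
}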

FACT: backend("multicore=2"): Two jobs at a time can run asynchronously before (a)wait

Example:

library("R.utils")
library("async")
backend("multicore=2")
## [1] "multicore=2"

env <- listenv()
t0 <- Sys.time()
for (ii in 1:10) {
  t1 <- Sys.time()
  printf("Task %d @ %.2fs - ", ii, t1-t0)
  env[[ii]] %<=% { Sys.sleep(5); 1000+ii }
  printf("%.2fs\n", Sys.time()-t1)
}
printf("%.2fs: complete!\n", Sys.time() - t0)

## Task 1 @ 0.00s - 0.95s
## Task 2 @ 0.95s - 0.74s
## Task 3 @ 1.70s - 1 temporary submit errors logged to file
## '/home/henrik/.async/async1610630674-files/submit.log'.
## First message: Multicore busy: R
## 11.02s
## Task 4 @ 12.71s - 0.69s
## Task 5 @ 13.40s - 1 temporary submit errors logged to file
## '/home/henrik/.async/async1330253475-files/submit.log'.
## First message: Multicore busy: R
## 11.13s
## Task 6 @ 24.54s - 0.75s
## Task 7 @ 25.29s - 1 temporary submit errors logged to file
## '/home/henrik/.async/async1662451945-files/submit.log'.
## First message: Multicore busy: R
## 10.95s
## Task 8 @ 36.24s - 0.75s
## Task 9 @ 37.00s - 0.83s
## Task 10 @ 37.82s - 1 temporary submit errors logged to file
## '/home/henrik/.async/async1856376469-files/submit.log'.
## First message: Multicore busy: R
## 11.02s
## 48.84s: complete!

str(as.list(env))
## List of 10
##  $ : num 1001
##  $ : num 1002
##  $ : num 1003
##  $ : num 1004
##  $ : num 1005
##  $ : num 1006
##  $ : num 1007
##  $ : num 1008
##  $ : num 1009
##  $ : num 1010

QUESTION: Instead of stalling the main/interactive R process, can we have another cheap background process launch each of them and wait?

Rename this package to BatchJobsFuture?

This package is solely based on the BatchJobs package/framework. We can now use it with the 'future' package (by specifying future::plan()). Other future mechanisms will be implemented elsewhere/in other packages. Thus, is 'async' too generic a name? It's certainly catchy.

Lazy futures: Add argument to specify how/when globals are resolved?

Background / current status

## A global variable
a <- 0
f <- lazy({
  42 * a
})

## Since 'a' is a global variable in the _lazy_ future 'f',
## which hasn't been resolved yet, any changes to 'a'
## before 'f' is resolved will affect its value.
a <- 1
v <- value(f)
print(v)
stopifnot(v == 42) 

Proposal

Should there be an option to specify when globals in lazy futures should be resolved, e.g.

a <- 0
f <- lazy({
  42 * a
}, globals="eager")
a <- 1
v <- value(f)
print(v)
stopifnot(v == 0) 

versus (current):

a <- 0
f <- lazy({
  42 * a
}, globals="lazy")
a <- 1
v <- value(f)
print(v)
stopifnot(v == 42) 

LIMITATIONS: It is not possible to export globals starting with a period

It is not possible to export globals starting with a period, e.g.

> library(async)
> plan(batchjobs, backend="local")
> a <- 1
> x %<=% { 2 * a }
> x
[1] 2
> .b <- 2
> x %<=% { 2 * a * .b }
Error in asyncBatchEvalQ(reg, exprs = list(task$expr), envir = task$envir,  :
  BatchJobs does not support exported variables that start with a period
  (see https://github.com/tudo-r/BatchJobs/issues/103): '.b'

See tudo-r/BatchJobs#103 for troubleshooting and details.

GLOBALS: findGlobals() on locally defined S3 methods (fails)

Background

findGlobals() on locally defined S3 methods identifies the generic functions but not the corresponding S3 methods. This is not surprising.

As long as the S3 methods are implemented in packages and those packages are loaded/attached, calling the generic function - as found by findGlobals() - will work. The S3 method to be dispatched on will be found at run time (since the class of the object needs to be known).

Problem

However, if an S3 method is defined "locally" (e.g. in the global environment as part of prototyping/development), that S3 method will not be part of the set of globals that are exported to the job/task.

Workaround

The workaround is to (a rough sketch of the first two steps follows the list):

  1. Check whether an identified global is a generic function or not.
  2. If it is a generic function, use methods(<generic>) to search for all matching S3 methods.
  3. For each S3 method, check if it is in a package name space or in another environment (e.g. global environment).
  4. If in a package namespace, add the package to set of required packages to be loaded (conservative approach).
  5. If in a non-package environment, then add the S3 method mth to the list of globals and run findGlobals(mth) on it to make sure it also gets everything it needs.
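
A base-R sketch of steps 1-2 (the helper name is_generic is hypothetical):

is_generic <- function(name, envir=parent.frame()) {
  f <- get(name, envir=envir, mode="function")
  ## An S3 generic (closure) dispatches via UseMethod()
  "UseMethod" %in% all.names(body(f))
}
if (is_generic("print")) {
  mtds <- methods("print")  # all S3 methods registered for the generic
}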

There should also be a way to export options(), par() etc.

Problem

There should also be a way to export options(), e.g.

options(width=60)
a %<=% { getOption("width") }

will return the default 80, not the 60 that was set, because options are currently not exported to the asynchronous process. The same applies to par() settings etc.

Workaround

Currently, one needs to explicitly set the options inside the expression, as in:

options(width=60)
a %<=% {
  options(width=60)
  getOption("width") 
}

Action

This is a tricky one, because it could be that we do not want to export all possible options, e.g. BatchJobs options(?!?).

status() on an AsyncTask launched via %<=% stalls

This works:

backend("multicore")
> a <- async({ Sys.sleep(60); 1 })
> status(a)
[1] "running"   "started"   "submitted"

but this doesn't:

> b %<=% { Sys.sleep(60); 2 }
> status(b) ## This stalls!
^C
> status(b)
^C
Warning message:
In status(b) : restarting interrupted promise evaluation
> inspect(b)
AsyncTask:
Expression:
  {
      Sys.sleep(60)
      2
  }
Status: 'running', 'started', 'submitted'
Backend:
Job registry:  async1840699759
  Number of jobs:  1
  Files dir: /home/henrik/projects/exome-copy-number/.async/async1840699759-files
  Work dir: /home/henrik/projects/exome-copy-number
  Multiple result files: FALSE
  Seed: 953806506
  Required packages: BatchJobs

MEMOIZATION: Use deterministic job names

For each async expression/job, generate a checksum based on the expression and its global objects. Use this checksum as the async job id. This way it should be possible to avoid relaunching the same asynchronous job if it is already running/done/available.
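
A sketch of the idea, assuming the digest package is used for the checksum:

library("digest")
expr <- quote({ x + y })
globals <- list(x = 1, y = 2)
job.id <- sprintf("async_%s", digest(list(expr, globals)))  # deterministic id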

Move asAssignTarget() to listenv

Move asAssignTarget() to listenv and make it a public function. Some more restructuring of that function is possible as well.

%plan% batchjobs(backend=...) does not undo the backend - should it?

Observation

> library(future)
> plan(eager)

> backend("local")
[1] "local"

> x %<=% { 1 } %plan% batchjobs(backend="interactive")
> x
[1] 1

> backend()
[1] "interactive"

Should it?

Troubleshooting

This is because batchjobs() does not undo the backend change, e.g.

> backend()
[1] "local"
> f <- batchjobs({ 1 }, backend="interactive")
> backend()
[1] "interactive"
> value(f)
[1] 1
> backend()
[1] "interactive"

This is in turn because BatchJobsAsyncTask({ 1 }, backend="interactive") does not undo it. Continuing, the problem occurs because of tempRegistry():

> backend("local")
[1] "local"
> reg <- async:::tempRegistry(backend="interactive")
Creating dir: /cbc/henrik/repositories/GitHub/async/.async/async28374250-files
Loading required package: BatchJobs
Loading required package: BBmisc
Saving registry: /cbc/henrik/repositories/GitHub/async/.async/async28374250-files/registry.RData
> backend()
[1] "interactive"

Add AsyncError class for richer exception handling and error messages

Add an AsyncError class (extending the error class) such that it is possible to also attach the AsyncTask objects, BatchJobs registry objects, and so on.

Currently we just report simple errors, e.g.

msg <- sprintf("BatchJobError: %s", error(obj))
stop(msg, call.=FALSE)

msg <- sprintf("BatchJobExpiration: Job of registry '%s' expired: %s", reg$id, reg$file.dir)
stop(msg, call.=FALSE)

msg <- sprintf("AsyncNotReadyError: Polled for results %d times every %g seconds, but asynchroneous evaluation is still running: BatchJobs registry '%s' (%s)", tries-1L, interval, reg$id, reg$file.dir)
stop(msg, call.=FALSE)
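
A sketch of what such a condition subclass could look like in base R (the extra fields task and reg are assumptions):

asyncError <- function(msg, task=NULL, reg=NULL) {
  structure(
    class = c("AsyncError", "error", "condition"),
    list(message=msg, call=sys.call(-1), task=task, reg=reg)
  )
}
## e.g. stop(asyncError(sprintf("Job of registry '%s' expired", reg$id), reg=reg))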
