
axopy's Introduction




Documentation: https://axopy.readthedocs.io

Axo-Pythonic synapses are those in which an axon synapses upon a Python program. AxoPy aims to facilitate such connections between electrophysiological signals and machines by making it easy for researchers to develop human-computer interface experiments. If you've ever found yourself spending more time thinking about how to implement your experiments than about what the experiment should be, AxoPy may be able to help.

AxoPy consists of:

Graphical interface
Central to AxoPy is the graphical user interface providing visual feedback to the subject and controlling the flow of the experiment. The GUI is backed by PyQt5, and you're free to implement customized graphical elements if those built in to AxoPy don't suit your needs.
Data acquisition
AxoPy establishes a fairly simple API for communicating with input hardware, so all that's usually needed is a bit of middleware to get going. Check out pytrigno or pymcc to see what this is like. A couple input devices are built in (keyboard, noise generator), so examples run without needing special hardware.
Data storage
Data is stored in a file structure using common file formats (CSV and HDF5), so you can (a) start working with data as soon as an experiment session is over and (b) do so with nothing but standard tools (pandas, h5py). A high-level interface to the storage structure is also provided to make traversing a dataset simple.
Pipeline processing
Estimating user intent from raw electrophysiological signals often involves a large number of processing operations. AxoPy facilitates flexible construction of pipelines that can be reused across different parts of an experiment and in offline post-processing (see the sketch below).
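
For example, here's a minimal sketch of a reusable pipeline. Windower and Callable are the built-in blocks also referenced in the issues below; the window length and feature function here are arbitrary choices for illustration:

import numpy as np

from axopy import pipeline

# window the incoming samples, then compute mean absolute value per channel
p = pipeline.Pipeline([
    pipeline.Windower(200),
    pipeline.Callable(lambda x: np.mean(np.abs(x), axis=1)),
])

# process one "read" worth of data (4 channels x 100 samples)
features = p.process(np.random.randn(4, 100))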

Quickstart

Installation

pip

AxoPy is available on PyPI, so the following should get it installed if you're using a standard Python installation with pip:

$ pip install axopy

Note: if you have Python < 3.5, pip will not be able to install the pyqt5 package for you because wheels for pyqt5 are only provided for Python >= 3.5. If you are stuck on an older version of Python, consider using conda (described below, works for any Python version) or installing Qt5 and PyQt5 yourself before running the command above.

See the development documentation for information on setting up a development environment to work on AxoPy itself.

conda

AxoPy is also available on conda-forge, so if you're using (Ana)conda with any Python version, you can install it with:

$ conda install -c conda-forge axopy

Hello, AxoPy

Here's a minimal example to display some randomly generated signals in an "oscilloscope":

import axopy

daq = axopy.daq.NoiseGenerator(rate=1000, num_channels=4, read_size=100)
exp = axopy.experiment.Experiment(daq=daq)
exp.run(axopy.task.Oscilloscope())

Next Steps

Check out the documentation for more information on creating experiments. Some examples are also available.

Citing

If you use AxoPy in your research and want to acknowledge us, see our instructions for citing AxoPy.

Contributing

Please feel free to share any thoughts or opinions about the design and implementation of this software by opening an issue on GitHub. Constructive feedback is welcomed and appreciated.

GitHub issues also serve as the support channel, at least for now. Questions about how to do something are usually great opportunities to improve documentation, so you may be asked about your thoughts on where the answers should go.

If you want to contribute code, open a pull request. Bug fix pull requests are always welcome. For feature additions, breaking changes, etc. check if there is an open issue discussing the change and reference it in the pull request. If there isn't one, it is recommended to open one with your rationale for the change before spending significant time preparing the pull request.

Ideally, new/changed functionality should come with tests and documentation. If you are new to contributing, it is perfectly fine to open a work-in-progress pull request and have it iteratively reviewed. See the development documentation for instructions on setting up a development environment, running tests, and building the documentation.

axopy's People

Contributors: agamemnonc, ixjlyons, sixpearls


axopy's Issues

Issue with QApplication

I am using some DAQ-specific software which communicates with the hardware using Qt.

This results in an issue the first time that get_qtapp() is called in the following line

app = get_qtapp()

because a QApplication already exists and therefore a new one is not created.

The program crashes with the following error:

QWidget: Cannot create a QWidget without QApplication

(Apologies I can't provide code to reproduce the error without using the hardware I am using).

The trivial way to address this problem is to call get_qtapp() before the device instance is constructed, so that it is axopy that creates the QApplication. But I was wondering if there might be a more elegant way of handling this case?

@ixjlyons Please feel free to close if you think this is not worth the effort.

predict_proba option for Estimator

Similar to #61, it might be useful to have a return_proba kwarg in Estimator for cases where the user needs the prediction probabilities rather than predicted class.

@ixjlyons do you think this would be useful at all?

If so, I am happy to submit a PR.
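
For concreteness, a hypothetical sketch of what the kwarg could look like, assuming Estimator wraps a scikit-learn-style estimator exposing predict() and predict_proba():

from axopy.pipeline import Block

class Estimator(Block):
    """Hypothetical sketch -- not the current implementation."""

    def __init__(self, estimator, return_proba=False):
        super().__init__()
        self.estimator = estimator
        self.return_proba = return_proba

    def process(self, data):
        if self.return_proba:
            return self.estimator.predict_proba(data)
        return self.estimator.predict(data)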

Think more carefully about keyboard key identifiers

There needs to be a straightforward way to specify keys (for the keyboard input device, custom tasks with key handling, etc.). Right now strings work ('w' for the w key) but this is limited and probably doesn't work well with other keyboard layouts. Obviously using Qt key IDs would work but I'd really rather not have to import Qt anywhere in user code.
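
One possible direction, sketched here with hypothetical names: keep the friendly string identifiers in user code and translate them to Qt key IDs internally, so Qt stays an implementation detail:

from PyQt5.QtCore import Qt

# hypothetical internal mapping from friendly key names to Qt key IDs
_KEY_MAP = {
    'w': Qt.Key_W,
    'space': Qt.Key_Space,
    'esc': Qt.Key_Escape,
}

def key_to_qt(key):
    """Translate a friendly key name (e.g. 'w') to a Qt key ID."""
    try:
        return _KEY_MAP[key]
    except KeyError:
        raise ValueError("unknown key name: {}".format(key))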

Idea: axopy-quickstart

A command like axopy-quickstart would be cool for generating an experiment project skeleton. Could do an interview to get options like sphinx-quickstart or pelican-quickstart or just take a few command line args like cargo new. The goal would be to end up with something you can run with no modifications at all.

  • generate a setup.py with an entry point for running the experiment
  • package directory with __init__.py, __main__.py, version, etc.
  • README
  • requirements.txt or environment.yaml

possibly:

  • git init
  • a config/settings system (if I come up with one in practice)

Backend system

I'm currently thinking there are 2 types of backends that will need to be specified for an experiment:

  • Hardware device I/O (e.g. thread, QThread, IPC with separate process) and storage (e.g. HDF5, MySQL, file system hierarchy) -- these are both persistent streams
  • GUI (e.g. PyQt, pyglet)

Hardware and storage can be explicitly specified by instantiating the actual classes to use for these persistent streams. The UI backend can be specified at the experiment level. Tasks themselves can be built agnostic to any backend, and they'd ideally work on any of them.

An experiment might look something like:

class CustomTask(BaseTask, InputTaskMixin):
    """Custom task implementation, not caring about backend."""
    ...

if __name__ == '__main__':
    experiment = Experiment(input_device=QThreadDevice(...), storage=HDF5Storage(...),
                            tasks=[CustomTask()], backend='qt')
    experiment.run()

The Experiment would then set up each task with the correct backend.

In a Qt-backed experiment, the way tasks are started could be by pressing a "start" button. In a pyglet-backed version, a key-press could start the tasks. The key could be specified through a backend_kwargs dict in Experiment, for example.

High-level design

Just outlining some of the concepts for thinking about what AxoPy provides.

The concept so far is that there are a couple nested pipelines. In the inner layer, there are tasks which implement the workflow of a set of experimental trials. The task is set up as an event-driven network of signals and slots (in Qt speak) so that the API can essentially consist of connecting up basic elements.

Surrounding these task pipelines is a single experiment pipeline, which contains multiple tasks and ensures they have access to global data like the subject information, data storage, and I/O devices.

Some remaining questions:

  • Does it make sense to make input/output devices persistent, or should these be created and destroyed at the task level? I can imagine an experiment with a task requiring a device which other tasks don't use, but tasks can just selectively activate the devices required. It could be wasteful to disconnect/reconnect to some kinds of devices.
  • Should the task system be completely event-driven? I think in some cases it might be cumbersome to wire up every possible event in a task as a signal/slot. For example, a task could have a cleanup method that the user can fill in with code instead of forcing every operation to conform to the event-driven style.
  • Can tasks be installed into the experiment in a way that allows them to be run more than once? It's pretty common to run a task several times, perhaps with a slightly different configuration. In some cases, it might make sense to create them as separate tasks, but in most cases I can think of, it's easier to allow for restarting a task. It may work to simply specify that a task repeats some block of trials and waits for a button press to start the next block. I'd rather not force that approach, though.
  • It would be cool to keep the task/experiment API decoupled from a specific GUI toolkit. Does it make sense to support this as transparently as, say, matplotlib? That is, the user can specify that they want to use a specific backend but their code doesn't change. Another somewhat less complicated example I'm familiar with is pyusb.

Some tasks need to know too much about their processing pipeline

For example, a pipeline contains a scikit-learn StandardScaler. The task making use of the pipeline needs to use data from a previous task to fit the StandardScaler. In order to do this, it needs to know which task has the feature data, the name of the block that needs to be fit, etc.

Basically any time a pipeline has blocks that need special attention (any time a task needs to look at pipeline.named_blocks), there is a bad coupling that will make it messy to implement generic task classes/mixins.

Something like this is a workaround that could probably be made much simpler with some support from AxoPy:

pipeline = copper.Pipeline(...)
task = SomeTask(pipeline)
exp = Experiment([task])

def on_finish():
    reader = exp.storage.require_task('SomeTask')
    pipeline.named_blocks['Transformer'].fit(reader.array('data'))

task.finish.connect(on_finish)

exp.run()

Notice the ordering here is strict and ugly. Maybe having hooks in Experiment on task start/finish/whatever could work.
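
For illustration, one hypothetical shape such hooks could take, building on the snippet above (the on_task_finish registration API is invented here):

def fit_transformer(storage):
    reader = storage.require_task('SomeTask')
    pipeline.named_blocks['Transformer'].fit(reader.array('data'))

exp = Experiment([SomeTask(pipeline)])
exp.on_task_finish('SomeTask', fit_transformer)  # hypothetical hook registration
exp.run()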

Simpler specification of array hstack/vstack

Currently have to do something like:

writer.arrays['some_array'].orientation = 'vertical'
writer.arrays['some_array'].stack(...)

Would probably be better to just wrap hstack and vstack like:

writer.arrays['some_array'].vstack(...)
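
A minimal sketch of what the wrappers might look like on ArrayWriter, reusing the orientation/stack machinery shown above:

class ArrayWriter:
    ...

    def hstack(self, data):
        self.orientation = 'horizontal'
        self.stack(data)

    def vstack(self, data):
        self.orientation = 'vertical'
        self.stack(data)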

Add a changelog

Should look into scripts people have written to generate changelog entries from git history, but most importantly API changes need to be kept track of.

Travis is broken again

Looks similar to #28. I've seen some traffic on the PyQt mailing list talking about 5.12, so maybe that version needs to be guarded against at least for CI.

Application doesn't exit nicely when running in an interactive shell

Running the following example line-by-line from the README in an interactive shell results in the experiment window remaining open despite control going back to the shell:

import axopy

daq = axopy.daq.NoiseGenerator(rate=1000, num_channels=4, read_size=100)
exp = axopy.experiment.Experiment(daq=daq)
exp.run(axopy.task.Oscilloscope())

The window stays open and can't be closed (at least on my machine) without killing the python process that called exp.run(...).

I believe this is fixable and it's probably worthwhile to look into it because users of Spyder or new users running the example in an interactive session would have this problem.

Packaging

Need to decide on a few things. Right now the dependencies are:

  • h5py
  • numpy
  • pandas
  • pyqt5
  • pyqtgraph

All of these except pyqtgraph appear to have wheels for all platforms, and pyqtgraph has an sdist package that should work on all platforms. The main issue is PyQt5 doesn't have wheels for Python 2. I think it's fine if AxoPy is Python 3 only, but it's sort of a shame that it's only because of a dependency.

Alternatively (or also), it might be nice to have a package on conda-forge. I need to have a look at what's involved there. Ideally the process of making a release (to both PyPI and conda-forge) isn't too bad.

Second of two duplicate tasks doesn't run properly

I have two tasks that are identical (except for some parameters). Running the second task after the first fails, yet running the second task alone is fine. I suspect there's something strange going on with messaging.

class TaskImpl(Task):

    def __init__(self, arg):
        self.arg = arg

    def prepare_input_stream(self, input_stream):
        self.input_stream = input_stream
        self.input_stream.updated.connect(self.update)

    def run_trial(self):
        self.input_stream.start()

    def update(self, data):
        # task1 runs fine, but task2 doesn't get here on second trial
        pass

task1 = TaskImpl(1)
task2 = TaskImpl(2)

Experiment([task1, task2], ...).run()

More flexible TaskWriter interface

So the top-level Storage class gives a task implementation easy access to creating a data storage hierarchy through TaskWriter, but classes in the layer beneath TaskWriter (i.e. ArrayWriter and TrialWriter) are constructed through TaskWriter, which is in turn constructed through Storage. Instead of using a chain of *args, **kwargs, maybe the bottom layer classes can be a bit more "buildable." For example, with ArrayWriter, you can set the stacking dimension, set some attributes (like channel names), etc. With TrialWriter, you can specify columns after construction.

storage = Storage('/path/to/data')
task_storage = storage.create_task('task1')
task_storage.trials.columns = ['block', 'trial', 'attr']
task_storage.create_array('array1', orientation='horizontal',
                          attrs={'channels': ['ch1', 'ch2']})

inverse option for block Transformer

There are cases where a scikit-learn Transformer needs to be used for inverse transforms (e.g., a classifier returns predictions in label-encoded format, but we want the original encoding, such as categorical values).

I suggest including an inverse input argument in block Transformer to support this feature.
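
A hypothetical sketch of the proposed argument, assuming the Transformer block wraps a scikit-learn-style transformer exposing transform() and inverse_transform():

from axopy.pipeline import Block

class Transformer(Block):
    """Hypothetical sketch -- not the current implementation."""

    def __init__(self, transformer, inverse=False):
        super().__init__()
        self.transformer = transformer
        self.inverse = inverse

    def process(self, data):
        if self.inverse:
            return self.transformer.inverse_transform(data)
        return self.transformer.transform(data)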

Happy to submit a PR.

Improve Canvas API

Current annoyances:

  • there's no text object, so making Qt calls is needed
  • the coordinate system isn't ideal -- at minimum it needs to be possible to do everything in normalized coordinates and not have to multiply by 100 everywhere
  • add simple interface for flipping the x coordinate just for drawing (for left-handed subjects) -- maybe just a class attribute flipped that can be set globally
  • checking for intersection between objects (e.g. cursor and target) requires usage of Qt API

Second pass at documentation

Here's a pass through the current state of the documentation to identify what needs to be replaced/removed/added.

Index Page

I just recently moved most of this stuff to the README. This page should probably just be links to where to find things (e.g. a welcome message and a TOC of the docs)

Installation

Split into user installation and dev installation. Move current contents to dev installation and remove tox instructions since I'm not using it any more.

Tutorials

My original plan was to have a few tutorials walking through creation of entire experiments, but I might leave this to examples instead. Tutorials should probably be specific topics that might be too verbose to cover in the user guide. Some candidate tutorial/example ideas:

  • implementing custom Qt UI elements
  • implementing a custom DAQ?

User Guide

I think I'm still ok with the idea of having a page for each major component of AxoPy (UI, data storage, input streaming, pipeline), but there needs to be a page that describes things at a high level.

  • user interface page is totally outdated and needs to be pretty much rewritten; it should actually be a page on tasks
  • data acquisition page is decent though stuff specific to MCC DAQ and Trigno DAQ need to be removed
  • storage is in pretty good shape
  • messaging page content could be moved to task page
  • pipeline is ok but needs to be cleaned up a little bit

API Docs

Things seem decent here.


Checklist:

  • clean up index.rst
  • add user installation instructions to installation docs
  • remove tutorials section or replace it with example scripts
  • add an introductory page to user guide
  • make an example with custom Qt UI
  • rewrite GUI page in user guide and re-focus on tasks instead
  • clean up data acquisition page in user guide
  • add messaging docs to task page
  • clean up pipeline page a little

Idea for experiment design

Need

  • different subjects do tasks with slightly different conditions
  • different subjects may do different tasks altogether (different order maybe)
  • ability to have a single subject advance to a new set of tasks (return visit)

Idea

  • tasks take every configurable parameter in __init__
  • experiment author doesn't instantiate tasks directly but specifies the class and the set of parameters instead
  • participant selection also includes setting a "condition" that comes from the experiment design specification
  • use the concept of an experiment "design" that specifies a set of experiment conditions, and for each condition, a list of (task_name, task_class, task_params) tuples
  • subject pools (group a, group b) and return visits should both be possible under this scheme
class SomeTask(Task):
    def __init__(self, param1=default, param2=default...):
        ...

class OtherTask(Task):
    def __init__(self, param1=default1, param2=default2...):
        ...

# one mechanism for the user to set some parameters (like colors) "globally"
task_defaults = {'bg_color': '#444444'}
other_task_params = {'param1': nondefault, **task_defaults}

experiment_design = {
    'condition_a': [
        ('task1', SomeTask, {'param1': val_a, **task_defaults}),
        ('task2', SomeTask, {'param2': val, **task_defaults}),
        ('task3', OtherTask, other_task_params)
    ],
    'condition_b': [
        ('task1', SomeTask, {'param1': val_b, **task_defaults}),
        ('task3', OtherTask, other_task_params)
    ]
}

Notes

  • could probably be fully JSON with strings for class names
  • in most cases, this is pretty verbose -- perhaps allow for a simpler alternative for experiment designs with only one condition
  • need to think more about how data storage will handle/utilize this information -- specifically data dependencies between tasks

Better examples

I have a couple ideas for better examples.

  • Make more examples -- obvious
  • Heavily comment examples: AxoPy isn't really designed in a way where complex examples will necessarily flow logically top to bottom, but this could be a nice way to explain some of the tips and tricks. It'd be cool to render them sort of like a jupyter notebook with prose and code blocks interwoven. zetcode (http://zetcode.com/gui/pyqt5/) is an example that I think is pretty effective. I'd rather not manually include blocks of the example code with line numbers.
  • Run the examples like tests. I've recently gotten involved in pyqtgraph and I didn't realize they run all the examples with pytest. It's a little complex to set up, and there will probably need to be a few improvements to how AxoPy uses the Qt event loop, but it'd be fantastic to run the examples automatically to at least ensure there aren't silly syntax errors and such.

Dataset streamer

Just an idea :)

I think it would be nice to give the user the opportunity to use the Pipeline infrastructure of axopy to process publicly available datasets (e.g. EMG/EEG etc.) for offline analyses, or perhaps 'replay' datasets recorded with axopy.

To implement that, I think the only requirement would be a Dataset DAQ implementation whose read() method would work much like an iterator. The user could then sub-class this to create custom classes appropriate for the dataset at hand. The actual data should be fed into the dataset object either upon construction or at a later stage. Optionally, there could be a real_time simulation parameter controlling whether a Sleeper.sleep() is called after each read() operation to emulate real-time data recording.
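
A minimal sketch of the idea (the class name is hypothetical, and time.sleep stands in for the Sleeper mentioned above):

import time

import numpy as np

class DatasetStreamer:
    """Hypothetical sketch of the proposed Dataset DAQ."""

    def __init__(self, data, rate=1000, read_size=100, real_time=False):
        self.data = np.asarray(data)  # shape (n_channels, n_samples)
        self.rate = rate
        self.read_size = read_size
        self.real_time = real_time
        self._i = 0

    def read(self):
        if self._i >= self.data.shape[1]:
            raise StopIteration  # dataset exhausted
        chunk = self.data[:, self._i:self._i + self.read_size]
        self._i += self.read_size
        if self.real_time:
            time.sleep(self.read_size / self.rate)  # emulate real-time recording
        return chunk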

If this would be of interest, I am happy to submit a PR.

Include how to cite

Now that the JOSS paper is done, we can add some info to the README and docs about usage and citing and whatnot.

  • badge in README
  • how-to-cite with BibTeX entry (copied between README and docs/index)
  • look into __bibtex__ and add that if it seems to do something useful

Add `itersubjects` to `Storage` class

It's pretty common to iterate over subjects in post-experiment analysis and that's kind of clunky at the moment:

storage = Storage('data')
for subj in storage.subject_ids:
    storage.subject_id = subj
    reader = storage.require_task('sometask')
    ...

Better:

storage = Storage('data')
for subj in storage.itersubjects():
    reader = storage.require_task('sometask')
    ...

It'd also be pretty rad if you could optionally pass a task to get the reader object instead:

storage = Storage('data')
for reader in storage.itersubjects(task='sometask'):
   ...

Should be pretty straightforward to implement.
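
A hypothetical sketch of the method, built on the existing subject_ids/subject_id/require_task interface:

# to be added to the Storage class
def itersubjects(self, task=None):
    for subj in self.subject_ids:
        self.subject_id = subj
        yield self if task is None else self.require_task(task)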

Idea: auto print a row of trials.csv at a time

I've found it useful to print out a bit of status information with each trial/block of an experiment (information that would probably be distracting if displayed to the subject). A neat integration of tasks and storage would make it easy to set a task up to log (and/or output to stdout) each row of the trials.csv file as it's written. This could probably be configurable experiment-wide, becoming the job of the Experiment class (which makes sense, since it handles both storage and tasks).
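
A small sketch of the logging itself (the path and the hookup to Experiment are left hypothetical):

import pandas as pd

def print_last_trial(trials_path):
    """Echo the most recently written row of a task's trials.csv."""
    row = pd.read_csv(trials_path).iloc[-1]
    print(' | '.join('{}={}'.format(k, v) for k, v in row.items()))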

pipeline.clear() throwing error

I have a custom pipeline instance that is probably not put together correctly, would you mind helping?

\axopy\pipeline\core.py", line 130, in _call_block
    out = f(data)
TypeError: clear() takes 1 positional argument but 2 were given

My pipeline:

main_pipeline = pipeline.Pipeline([
    pipeline.Windower(1000),
    pipeline.Passthrough(
        [(lowfilter, highfilter), FFT(), pipeline.Callable(integrated_emg)])])

My block:

import numpy as np
from axopy import pipeline

class FFT(pipeline.Block):
    """Performs NumPy's fast Fourier transform on the windowed signal and
    returns the rectified powers at the positive frequencies.

    Returns
    -------
    pospower : ndarray, shape (k, n/2)
        Rectified powers at the positive frequencies, where k is the number
        of input channels and n is the number of FFT points.

    TODO: not sure how many FFT points is appropriate
    """

    def process(self, data):
        n = 1000  # number of FFT points
        chan = np.shape(data)[0]
        F = np.fft.fft(data, n=n)
        F = F.reshape((chan, n))
        freq = np.fft.fftfreq(n, d=1 / 2000)  # assumes 2000 Hz sampling rate
        power = np.square(F.real) / n
        posfreq = freq[:int(n / 2)]  # computed but not currently returned
        pospower = power[:, :int(n / 2)]
        return pospower

Function generator/streamer

A daq implementation that generates data according to some function/callable specified by the user (e.g. sine wave). The streamer doesn't necessarily need to output data (e.g. it could be used only to determine main program update rate).

Generator name to be agreed.
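
A hypothetical sketch, mirroring the NoiseGenerator constructor from the README example (name and signature to be agreed, per the above):

import time

import numpy as np

class FunctionGenerator:
    """Hypothetical sketch of the proposed function streamer."""

    def __init__(self, func, rate=1000, num_channels=1, read_size=100):
        self.func = func
        self.rate = rate
        self.num_channels = num_channels
        self.read_size = read_size
        self._t = 0.0

    def read(self):
        t = self._t + np.arange(self.read_size) / self.rate
        self._t = t[-1] + 1.0 / self.rate
        time.sleep(self.read_size / self.rate)  # pace the main update rate
        return np.tile(self.func(t), (self.num_channels, 1))

# e.g. a 10 Hz sine wave on one channel
gen = FunctionGenerator(lambda t: np.sin(2 * np.pi * 10 * t))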

Support for multiple daqstreams

In my understanding, it is not currently possible to support multiple daqs running simultaneously (e.g. EEG with EMG, or EMG with data glove etc.).

I have thought of a trivial solution but I think it would only work if the different sampling rates are perfect multiples of one another, so that the samples_per_read parameters are such that it takes exactly the same time to perform a read() operation from the devices.

@ixjlyons Please let me know if this is something worth considering and, if so, I could open a PR and take it from there.

Get GUI tests running on Travis

I don't care all that much about GUI tests themselves, but they do serve some purpose: they verify that the installation process for PyQt5 works end to end (PyQt5 being one of the few dependencies that's not necessarily trivial to get installed and running). Currently, running the tests in tests/test_gui.py on Travis results in a segfault. I've been playing around with docker containers and have currently narrowed it down to the following image to get the tests to run without errors:

FROM ubuntu:16.04

RUN apt-get update && \
    apt-get install -y python3 python3-pip python3-setuptools xvfb xorg libglib2.0

ADD . /root/src/axopy

RUN cd /root/src/axopy && \
    pip3 install -r requirements.txt && \
    pip3 install -r requirements-dev.txt && \
    pip3 install -e .

Note that I actually installed xorg interactively in the container and I couldn't figure out how to install it without prompting for a language and keyboard layout (probably just have to set that up manually before installing).

So starting with a minimal Ubuntu install, you just need Python itself installed (could instead be done with conda) and a few packages that you are very likely to already have if you're using it as a desktop system (except maybe xvfb). Unfortunately they're pretty huge packages (pull in quite a few dependencies) so I'd prefer to figure out what's required more minimally.

Qt signals only work with classes deriving from QObject

Simple example:

from PyQt5.QtCore import pyqtSignal, QObject

# replace `object` with `QObject` to fix
class Signaller(object):
    signal = pyqtSignal(str)

    def trigger(self, msg):
        print("emitting: {}".format(msg))
        self.signal.emit(msg)


def receiver(msg):
    print("received: {}".format(msg))


signaller = Signaller()
signaller.signal.connect(receiver)
signaller.trigger('hello')

Traceback:

Traceback (most recent call last):
  File "try.py", line 17, in <module>
    signaller.signal.connect(receiver)
TypeError: Signaller cannot be converted to PyQt5.QtCore.QObject in this context

In this case Signaller would be an implementation of a block, for example, which would derive from a Block base class. Behind the scenes, we'd need this base class to inherit from QObject depending on the backend messaging system. A proof-of-concept solution can be seen here.

Although the solution seems to work, I'm a little concerned with playing around this deep in the mud when I don't fully grok PyQt's internals.

Warn if trial design array is not used

h5py raises an exception if an Array is created, never stacked on or initialized, and the Trial is written. Should probably catch and reraise with a more direct message. Also add a note to the writer docs and user guide.

Unify timing interfaces

IncrementalTimer and Timer have different interfaces (one takes milliseconds and the other takes counts). Maybe IncrementalTimer should take milliseconds and an update rate.
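
A hypothetical sketch of the unified interface (method names invented here):

class IncrementalTimer:
    """Hypothetical sketch: specified in milliseconds plus update rate."""

    def __init__(self, duration_ms, update_rate):
        self.max_count = int(round(duration_ms / 1000 * update_rate))
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count >= self.max_count  # True once the duration elapses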

Improve storage interface

In implementing complex experiments, storage needs for a task can be tricky and currently the names of arrays and trial attributes are spread out around the task implementation. It can even be worse when a subsequent task needs to read in this data -- now you have several different classes needing to agree on a storage layout.

Keyboard input device

The EmulatedDaq for easily demonstrating the oscilloscope task is cool, but as soon as you want to start debugging your experiment, it's pretty much useless. A keyboard device is really needed to be able to perform experiment tasks to test them while developing (without having to set up the ephys hardware).

I suspect that keyboard input handling will have to be done with PyQt, so I'm hoping it can be done in a somewhat clean way. I doubt it's feasible to make a KeyboardDaq and use it through InputStream. It may have to be inherently coupled to Experiment.

Built-in processor task

It would be good to have the ability to just write a function that takes in configured storage, processes some input data, and writes out some new data. In implementing experiments, I've been writing these as single-trial "processor tasks." It would be cool to instead support plain functions as tasks in an experiment or at least a wrapper so you can do something like this:

def process_func(storage):
    reader = storage.require_task('some_task')
    writer = storage.create_task('other_task')
    for trial in reader.trials:
        writer.write(do_something(trial))

exp = Experiment().run(SomeTask(), Processor(process_func))

This would make testing more convenient and would remove a bit of boilerplate. It would be good if the ProcessorTask wrapper would just give some kind of visual feedback (i.e. text on the screen that says "processing..."). To do this, it would need to run the processor function on a separate thread. Would also need to be sure keyboard events and such are ignored while running.

One other thing to think about: if you want to run the same processing code on tasks with different names, it would be good to have a simple way of setting the "input task" name.

Hidden functions, classes, and methods

Some things probably don't need to be exposed right now. Need to make a pass through everything and hide what's most liable to change and/or what's not really necessary to use directly.
