
autoencoder's Introduction

Alexander the Great cuts the Gordian Knot

Problems well-defined are problems solved.

The standard that our analytic work aspires to achieve can best be illustrated through our namesake: Diogenes. A famous (or infamous) Greek philosopher, Diogenes the Cynic is the paragon of the two virtues that best represent our standard: minimalism and logical rigor.

autoencoder's People

Contributors

diogenesanalytics


autoencoder's Issues

Sub Module: Hyperparameter Tuning

Feature: Implement Basic Minimal Autoencoder

Problem

Need to implement the most basic autoencoder (one with only a single hidden layer).

Code

This code is directly from the previous issue #4:

"""A simple autoencoder to get you started."""
from dataclasses import dataclass
from typing import Any
from typing import ClassVar
from typing import Dict
from typing import Optional

from keras import layers

from .base import BaseAutoencoder
from .base import BaseLayerParams
from .base import DefaultParams


@dataclass
class MinimalLayerParams(BaseLayerParams):
    """Layer parameters class for minimal autoencoder."""

    # setup default values
    default_parameters: ClassVar[DefaultParams] = {
        "l0": (layers.InputLayer, {"input_shape": (784,)}),
        "l1": (layers.Dense, {"units": 32, "activation": "relu"}),
        "l2": (layers.Dense, {"units": 784, "activation": "sigmoid"}),
    }

    # setup instance layer params
    l0: Optional[Dict[str, Any]] = None
    l1: Optional[Dict[str, Any]] = None
    l2: Optional[Dict[str, Any]] = None


class MinimalAutoencoder(BaseAutoencoder):
    """A simple autoencoder to get you started."""

    _default_config = MinimalLayerParams()

    def __init__(self, model_config: Optional[MinimalLayerParams] = None) -> None:
        """Overrided base constructor to set the layer params class used."""
        # call super
        super().__init__(model_config=model_config)

References

Feature: Implement Default Parameters Requirements

Problem

Currently the only "requirement" of the BaseLayerParams.default_parameters property is that it is of type DefaultParams:

# custom types
DefaultParams: TypeAlias = Dict[str, Tuple[Layer, Dict[str, Any]]]


class BaseLayerParams(ABC):
    """Autoencoder layers hyperparameters configuration base class."""

    def __post_init__(self) -> None:
        """Store updated params and get sequence index slices."""
        # get updated parameters for instance
        self._instance_parameters = self._build_instance_params()

    def _filter_layer_attrs(self) -> Generator[Tuple[str, Dict[str, Any]], None, None]:
        """Filter out layer attributes from class instance."""
        # get all attributes and values in class instance namespace
        for attr, value in self.__dict__.items():
            # make sure attribute name is in default parameters
            if attr in self.default_parameters.keys():
                # generate tuple pairs
                yield attr, value

    def _update_layer_params(
        self,
    ) -> Generator[Tuple[Layer, Dict[str, Any]], None, None]:
        """Update default layer parameters values."""
        # get layer instance attrs and their values
        for attr, value in self._filter_layer_attrs():
            # unpack default parameters
            layer, params = self.default_parameters[attr]

            # check if none
            if value is not None:
                # merge instance onto default
                params |= value

            # generate
            yield layer, params

    def _build_instance_params(self) -> Tuple[Tuple[Layer, Dict[str, Any]], ...]:
        """Create immutable sequence of layer params for instance."""
        return tuple(self._update_layer_params())

    @property
    @abstractmethod
    def default_parameters(self) -> DefaultParams:
        """Defines the required default layer parameters attribute."""
        pass

But there will likely be a need for stricter requirements on this data structure (e.g. in order to determine which layers are input, encoding, or decoding; see issue #17 for that specific feature).
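
As a hedged illustration only, a stricter check might look something like the following sketch. The helper name and the "l0"/"l1" key-ordering convention are assumptions drawn from the minimal model above, not an agreed design:

from typing import Any
from typing import Dict
from typing import Tuple

from keras import layers


def validate_default_parameters(defaults: Dict[str, Tuple[Any, Dict[str, Any]]]) -> None:
    """Hypothetical stricter validation of a default_parameters dict."""
    keys = list(defaults.keys())

    # assumed convention: keys are l0..lN and follow layer build order
    assert keys == [f"l{i}" for i in range(len(keys))], (
        "layer keys must be l0..lN in build order"
    )

    # assumed convention: the first entry must be the input layer
    first_layer, _ = defaults[keys[0]]
    assert first_layer is layers.InputLayer, "first layer must be an InputLayer"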

References

References: Autoencoders

References

Design: Initial Class Strategy

Problem

Need to devise the initial approach to designing the class hierarchy for the autoencoder.

Solution

Tentative pseudo code for what the class hierarchy might look like (including any methods/attributes):

"""Autoencoder base class."""
from abc import ABC
from abc import abstractmethod
from dataclasses import dataclass
from math import ceil
from math import floor
from typing import Any
from typing import Dict
from typing import Generator
from typing import Optional
from typing import Tuple
from typing import TypeAlias

import keras
from keras.layers import Layer


# custom types
DefaultParams: TypeAlias = Dict[str, Tuple[Layer, Dict[str, Any]]]


class BaseLayerParams(ABC):
    """Autoencoder layers hyperparameters configuration base class."""

    def __post_init__(self) -> None:
        """Store updated params and get sequence index slices."""
        # get updated parameters for instance
        self._instance_parameters = self._build_instance_params()

    def _get_layer_masks(self) -> Tuple[slice, slice]:
        """Calculate slice masks for selecting encode/decode subrange."""
        # measure halfway point
        halfway = len(self._instance_parameters) / 2

        # calculate upper and lower
        upper, lower = ceil(halfway), floor(halfway)

        # get encode/decode masks
        encode, decode = slice(0, upper), slice(lower, None)

        # return masks
        return encode, decode

    def _get_encode_layers(self) -> Tuple[Tuple[Layer, Dict[str, Any]], ...]:
        """Get encoding layers subsequence."""
        # get encode mask
        encode, _ = self._get_layer_masks()

        # return subsequence
        return self._instance_parameters[encode]

    def _get_decode_layers(self) -> Tuple[Tuple[Layer, Dict[str, Any]], ...]:
        """Get decoding layer subsequence."""
        # get decode mask
        _, decode = self._get_layer_masks()

        # return subsequence
        return self._instance_parameters[decode]

    def _filter_layer_attrs(self) -> Generator[Tuple[str, Dict[str, Any]], None, None]:
        """Filter out layer attributes from class instance."""
        # get all attributes and values in class instance namespace
        for attr, value in self.__dict__.items():
            # make sure attribute name is in default parameters
            if attr in self.default_parameters.keys():
                # generate tuple pairs
                yield attr, value

    def _update_layer_params(
        self,
    ) -> Generator[Tuple[Layer, Dict[str, Any]], None, None]:
        """Update default layer parameters values."""
        # get layer instance attrs and their values
        for attr, value in self._filter_layer_attrs():
            # unpack default parameters
            layer, params = self.default_parameters[attr]

            # check if none
            if value is not None:
                # merge instance onto default
                params |= value

            # generate
            yield layer, params

    def _build_instance_params(self) -> Tuple[Tuple[Layer, Dict[str, Any]], ...]:
        """Create mutable sequence of layer params for instance."""
        return tuple(self._update_layer_params())

    @property
    @abstractmethod
    def default_parameters(self) -> DefaultParams:
        """Defines the required default layer parameters attribute."""
        pass


@dataclass
class BaseAutoencoder(ABC):
    """Autoencoder base class."""

    model_config: Optional[BaseLayerParams] = None

    def __post_init__(self) -> None:
        """Setup autoencoder model."""
        # check if default config used
        if self.model_config is None:
            # get default
            self.model_config = self._default_config

        # build model ...
        self.model = self._build_model()

    @property
    @abstractmethod
    def _default_config(self) -> BaseLayerParams:
        """Defines the default layer parameters attribute."""
        pass

    def _build_encoding_layer(self) -> keras.Model:
        """Assemble encoder from subsequence of encoding layers."""
        # get instance parameters for encoding layers
        assert self.model_config is not None
        inst_params = self.model_config._get_encode_layers()

        # generate layers from parameters
        encoding_layers = [layer(**params) for layer, params in inst_params]

        # create encoding model
        return keras.Sequential(encoding_layers)

    def _build_decoding_layer(self) -> keras.Model:
        """Assemble decoder from subsequence of decoding layers."""
        # get instance parameters for decoding layers
        assert self.model_config is not None
        inst_params = self.model_config._get_decode_layers()

        # generate layers from parameters
        decoding_layers = [layer(**params) for layer, params in inst_params]

        # create decoding model
        return keras.Sequential(decoding_layers)

    def _build_model(self) -> keras.Model:
        """Assemple autoencoder from encoder/decoder submodels."""
        # create encoding layer
        self._encode_layer = self._build_encoding_layer()

        # create decoding layer
        self._decode_layer = self._build_decoding_layer()

        # build model ...
        return keras.Sequential([self._encode_layer, self._decode_layer])

    def compile(self, **kwargs: Any) -> None:
        """Wrapper for Keras model.compile method."""
        self.model.compile(**kwargs)

    def fit(self, **kwargs: Any) -> None:
        """Wrapper for the Keras model.fit method."""
        self.model.fit(**kwargs)

    def predict(self, **kwargs: Any) -> Any:
        """Wrapper for the Keras model.predict method."""
        return self.model.predict(**kwargs)

    #
    # def encode(self):
    #     # encode data using trained encoding layer
    #     pass
    #
    # def decode(self):
    #     # decode data using trained decoding layer
    #     pass
    #
    # def visualize(self):
    #     # run custom visualization code
    #     pass
    #
    # def save(self, model_path: Union[str, Path]):
    #     pass
    #
    # def load(self, model_path: Union[str, Path]):
    #     pass

References

Feature: Load/Save Reconstruction Error/Threshold Information

Problem

Sometimes you may want to save/load autoencoder evaluation data from previous instances. For example, maybe you want to use the threshold calculated from the train/val dataset as the threshold on a completely new test dataset. This would allow you to see where the new test dataset's reconstruction error distribution falls relative to the train/val dataset threshold (i.e. how "anomalous" the new test dataset actually is).

Solution

Possible API for this:

  • load_threshold
  • save_threshold
  • load_errors
  • save_errors
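
A minimal sketch of how those four methods might behave, assuming plain JSON for the threshold and NumPy .npy files for the errors (both file formats, and the free-function form, are assumptions):

import json
from pathlib import Path
from typing import Union

import numpy as np


def save_threshold(threshold: float, path: Union[str, Path]) -> None:
    """Persist a previously calculated threshold as JSON."""
    Path(path).write_text(json.dumps({"threshold": float(threshold)}))


def load_threshold(path: Union[str, Path]) -> float:
    """Load a threshold saved by save_threshold."""
    return float(json.loads(Path(path).read_text())["threshold"])


def save_errors(errors: np.ndarray, path: Union[str, Path]) -> None:
    """Persist per-sample reconstruction errors."""
    np.save(str(path), np.asarray(errors))


def load_errors(path: Union[str, Path]) -> np.ndarray:
    """Load reconstruction errors saved by save_errors."""
    return np.load(str(path))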

Feature: Scale Default Parameters

Problem

Currently there is no way to "scale" the default dimensions of the autoencoder layers.

Solution

Either a constructor argument like input_scale or just a static method that can be used to create default params (with scaled input).
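
A hedged sketch of the static-method variant, assuming scaling simply multiplies each layer's units/input_shape by the given factor (the function name and behaviour are assumptions):

from typing import Any
from typing import Dict


def scaled_default_params(
    defaults: Dict[str, tuple], input_scale: float
) -> Dict[str, tuple]:
    """Return a copy of default_parameters with layer dimensions scaled."""
    scaled: Dict[str, tuple] = {}

    for key, (layer, params) in defaults.items():
        # copy params so the class-level defaults stay untouched
        new_params = dict(params)

        # scale whichever size-like entry the layer uses
        if "units" in new_params:
            new_params["units"] = max(1, int(new_params["units"] * input_scale))
        if "input_shape" in new_params:
            new_params["input_shape"] = tuple(
                max(1, int(dim * input_scale)) for dim in new_params["input_shape"]
            )

        scaled[key] = (layer, new_params)

    return scaled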

Feature: VGG16 Pre-Trained ImageNet Weights Autoencoder

Problem

Building autoencoders for images involves constructing a neural network architecture that effectively (hopefully) extracts the latent features of the image into a "compressed" smaller dimension (the encoded layer). This also necessitates training the model to "extract" features ... which is a problem that has been solved in other models (e.g. VGG16). Thus one is potentially "reinventing the wheel" so to speak.

Code

Example using VGG16:

# imports
import math
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, Flatten, Reshape
from keras.layers import Conv2D, UpSampling2D

# calculate dims (RESIZE_IMG_DIMS is assumed to be defined elsewhere as the resized image shape)
RESHAPE_DIM = math.prod(RESIZE_IMG_DIMS)
ENCODE_DIM = int(RESHAPE_DIM / 24.5)

# Load VGG16 model without top layers
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

# Freeze the layers of the VGG16 model
for layer in base_model.layers:
    layer.trainable = False

# Encoder layers
encoder_output = Flatten()(base_model.output)
encoder_output = Dense(512, activation='relu')(encoder_output)
encoder_output = Dense(256, activation='relu')(encoder_output)

# Decoder layers
decoder_output = Dense(512, activation='relu')(encoder_output)
decoder_output = Dense(25088, activation='relu')(decoder_output)
decoder_output = Reshape((7, 7, 512))(decoder_output)

# Reverse VGG16 layers
reverse_vgg16 = Conv2D(512, (3, 3), activation='relu', padding='same')(decoder_output)
reverse_vgg16 = Conv2D(512, (3, 3), activation='relu', padding='same')(reverse_vgg16)
reverse_vgg16 = UpSampling2D((2, 2))(reverse_vgg16)

reverse_vgg16 = Conv2D(512, (3, 3), activation='relu', padding='same')(reverse_vgg16)
reverse_vgg16 = Conv2D(512, (3, 3), activation='relu', padding='same')(reverse_vgg16)
reverse_vgg16 = UpSampling2D((2, 2))(reverse_vgg16)

reverse_vgg16 = Conv2D(256, (3, 3), activation='relu', padding='same')(reverse_vgg16)
reverse_vgg16 = Conv2D(256, (3, 3), activation='relu', padding='same')(reverse_vgg16)
reverse_vgg16 = UpSampling2D((2, 2))(reverse_vgg16)

reverse_vgg16 = Conv2D(128, (3, 3), activation='relu', padding='same')(reverse_vgg16)
reverse_vgg16 = Conv2D(128, (3, 3), activation='relu', padding='same')(reverse_vgg16)
reverse_vgg16 = UpSampling2D((2, 2))(reverse_vgg16)

# Additional Upsampling layers
reverse_vgg16 = UpSampling2D((2, 2))(reverse_vgg16)

# Output layer
output_layer = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(reverse_vgg16)  

# Create the reverse VGG16 model
autoencoder = Model(inputs=base_model.input, outputs=output_layer)

# check network topology
autoencoder.summary()

Solution

Can actually build an autoencoder where the initial input is passed through a pre-trained model (like VGG16), and the output of that model (the extracted features) is passed through a "bottleneck" (i.e. the autoencoder proper). It is at this point that the actual "compression" occurs. The final step is to "decompress" and reverse the VGG16 transformation (i.e. apply an approximate inverse of the feature extraction) until the original input image is reconstructed.

References

Refactor: Custom Loss Function Decorators

Problem

Some patterns are appearing with the custom loss function decorators/closures that could be refactored into an abstract base class.

Code

from dataclasses import dataclass
from typing import Callable
from typing import Optional
from typing import Tuple
from typing import Union

import keras
import tensorflow as tf

# NOTE: import path assumed; BaseAutoencoder lives in the model base module
from autoencoder.model.base import BaseAutoencoder


def build_anomaly_loss_function(
    anomalous_data: tf.Tensor,
    model: Union[keras.Model, BaseAutoencoder],
    axis: Tuple[int, ...] = (1, 2, 3),
) -> Callable[[tf.Tensor, tf.Tensor], tf.Tensor]:
    """Closure that sets up the custom anomaly detection loss function."""
    # check model type
    if isinstance(model, BaseAutoencoder):
        # get keras.Model object
        model = model.model

    # create function
    def anomaly_diff(y_true: tf.Tensor, y_pred: tf.Tensor) -> tf.Tensor:
        """Calculates mean training/anomalous data reconstruction error difference."""
        # calculate the dynamic mean reconstruction error on training data
        train_reconstruction_errors = tf.reduce_mean(
            tf.square(y_true - y_pred), axis=axis
        )
        dynamic_threshold = tf.reduce_mean(train_reconstruction_errors)

        # calculate the mean reconstruction error on anomalous data
        anomalous_data_pred = model(anomalous_data, training=False)
        anomalous_data_errors = tf.reduce_mean(
            tf.square(anomalous_data - anomalous_data_pred), axis=axis
        )
        anomalous_data_mean = tf.reduce_mean(anomalous_data_errors)

        # calculate the difference
        return dynamic_threshold - anomalous_data_mean

    # optimize with tf.function
    optimized_func: Callable[[tf.Tensor, tf.Tensor], tf.Tensor] = tf.function(
        anomaly_diff
    )

    # get wrapped function
    return optimized_func


@dataclass
class build_encode_dim_loss_function:  # noqa
    """Decorator factory class to build a penalized loss function."""

    encode_dim: int
    regularization_factor: float = 0.001
    axis: Tuple[int, ...] = (1, 2, 3)

    def __call__(
        self,
        penalized_loss: Optional[Callable[[tf.Tensor, int, float], tf.Tensor]] = None,
    ) -> Callable[[tf.Tensor, tf.Tensor], tf.Tensor]:
        """Call decorator to build custom penalized loss function."""
        return self.decorate(penalized_loss)

    @staticmethod
    def default_penalty(loss: tf.Tensor, encode: int, reg: float) -> tf.Tensor:
        """Calculate the default penalty for the encoding dimension."""
        return loss + (loss * encode * reg)

    def decorate(
        self,
        penalized_loss: Optional[Callable[[tf.Tensor, int, float], tf.Tensor]] = None,
    ) -> Callable[[tf.Tensor, tf.Tensor], tf.Tensor]:
        """Decorator that builds the complete penalized loss function."""
        # check for none
        if penalized_loss is None:
            # get default
            penalized_loss = self.default_penalty

        # create function
        def custom_loss(y_true: tf.Tensor, y_pred: tf.Tensor) -> tf.Tensor:
            """Calculate reconstruction error and apply penalty."""
            # calculate the dynamic mean reconstruction error on training data
            reconstruction_loss = tf.reduce_mean(
                tf.square(y_true - y_pred), axis=self.axis
            )

            # calculate penalized loss
            return penalized_loss(
                reconstruction_loss, self.encode_dim, self.regularization_factor
            )

        # optimize with tf.function
        optimized_func: Callable[[tf.Tensor, tf.Tensor], tf.Tensor] = tf.function(
            custom_loss
        )

        # get wrapped function
        return optimized_func

Solution

Create an ABC for custom loss functions:

from abc import ABC
from abc import abstractmethod


class BaseCustomLossFunction(ABC):
    pass
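
One hedged direction for fleshing this out, factoring the shared reconstruction-error and tf.function plumbing from the closures above into the base class (the method names below are assumptions, not a settled design):

from abc import ABC
from abc import abstractmethod
from typing import Callable
from typing import Tuple

import tensorflow as tf


class BaseCustomLossFunction(ABC):
    """Shared plumbing for reconstruction-error based custom losses."""

    def __init__(self, axis: Tuple[int, ...] = (1, 2, 3)) -> None:
        """Store the axes over which per-sample errors are reduced."""
        self.axis = axis

    def reconstruction_error(self, y_true: tf.Tensor, y_pred: tf.Tensor) -> tf.Tensor:
        """Mean squared reconstruction error per sample."""
        return tf.reduce_mean(tf.square(y_true - y_pred), axis=self.axis)

    @abstractmethod
    def loss(self, y_true: tf.Tensor, y_pred: tf.Tensor) -> tf.Tensor:
        """Subclasses combine the reconstruction error into a final loss."""

    def build(self) -> Callable[[tf.Tensor, tf.Tensor], tf.Tensor]:
        """Return the tf.function-wrapped loss, mirroring the current closures."""
        return tf.function(self.loss)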

Docker: Add GPU Support for Tensorflow

Problem

Currently the Docker image being built does not have the correct drivers to work with the version of tensorflow pinned in the pyproject.toml file:

tensorflow = ">=2.15.0"

Solution

What seems to work is installing tensorflow[and-cuda] via pip according to the Tensorflow install documentation. Now the question becomes: how to handle this optional install with the autoencoder library (which uses poetry as its build backend)?

References

Feature: Merging Reconstruction Error Histograms into Single Plot

Problem

Sometimes, in the course of investigating the reconstruction error of an autoencoder on various datasets, it can be very useful to plot multiple histograms on the same plot (i.e. making it easier to see where the different datasets overlap, etc.).

Solution

One tentative solution is to implement the __add__ magic method to effectively "merge" plots into one ...
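
A hedged sketch of what that could look like, assuming a small wrapper object that keeps the labelled error arrays around and re-plots them on merge (the class and attribute names are assumptions):

from dataclasses import dataclass, field
from typing import Dict

import matplotlib.pyplot as plt
import numpy as np


@dataclass
class ErrorHistogram:
    """Holds labelled reconstruction-error arrays and plots them together."""

    errors: Dict[str, np.ndarray] = field(default_factory=dict)

    def __add__(self, other: "ErrorHistogram") -> "ErrorHistogram":
        """Merge two histograms by pooling their labelled error arrays."""
        return ErrorHistogram(errors={**self.errors, **other.errors})

    def plot(self, bins: int = 50) -> None:
        """Overlay one histogram per dataset on a single axis."""
        _, ax = plt.subplots()
        for label, values in self.errors.items():
            ax.hist(values, bins=bins, alpha=0.5, label=label)
        ax.set_xlabel("reconstruction error")
        ax.legend()
        plt.show()

With something along these lines, (train_hist + test_hist).plot() would overlay both reconstruction error distributions on one axis.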

References

Refactor: Rename Min2DAE to MinNDAE

Problem

Currently the Min2DAE model is presented as accepting "2D input" ... but it is actually capable of accepting "ND input".

Solution

Rename the class: Min2DAE -> MinNDAE.

Feature: Threshold Calculation

Problem

One of the applications of autoencoders is anomaly detection. Currently there is no code to help with the process of measuring the threshold of the anomaly error (where the anomaly begins and where the normal error ends).

Solution

Write some utility functions to help calculate the reconstruction error as well as the threshold.
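
A hedged sketch of such utilities, using per-sample mean squared error and a simple mean-plus-k-standard-deviations threshold (the choice of statistic is an assumption, not a settled design):

import numpy as np


def reconstruction_error(x: np.ndarray, x_pred: np.ndarray) -> np.ndarray:
    """Per-sample mean squared reconstruction error."""
    # flatten everything except the sample axis before averaging
    diff = (x - x_pred).reshape(len(x), -1)
    return np.mean(np.square(diff), axis=1)


def calculate_threshold(errors: np.ndarray, k: float = 3.0) -> float:
    """Threshold above which a sample is flagged as anomalous."""
    return float(np.mean(errors) + k * np.std(errors))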

References

Notebook: Feed Forward Models Demo

Problem

The Feed Forward architectures (i.e. MinAE and DeepAE) are best suited to text and tabular data but no demo notebooks currently use them for this task.

Solution

Implement notebooks to train FFNN/FCNN on tabular or text data.

Docker: Add GPU Version of Docker Image

Problem

Currently the Docker development environment (i.e. the ghcr.io/diogenesanalytics/autoencoder:master Docker image) does not have any GPU drivers for GPU accelerated applications (i.e. training neural networks with keras).

Solution

Can create multi-stage Docker image build to create a GPU version:

# jupyter base image
FROM jupyter/scipy-notebook:lab-4.0.0 as cpu-only

# first turn off git safe.directory
RUN git config --global safe.directory '*'

# turn off poetry venv
ENV POETRY_VIRTUALENVS_CREATE=false

# set src target dir
WORKDIR /usr/local/src/autoencoder

# get src
COPY . .

# get poetry in order to install development dependencies
RUN pip install poetry

# config max workers
RUN poetry config installer.max-workers 10

# now install development dependencies
RUN poetry install --with dev -C .

# additional GPU-enabled steps
FROM cpu-only as gpu-enabled

# install CUDA tools
RUN mamba install -c conda-forge cudatoolkit=11.8.0 && \
    mamba clean --all -f -y && \
    fix-permissions "${CONDA_DIR}" && \
    fix-permissions "/home/${NB_USER}"

# install separate pip libraries
RUN pip install nvidia-cudnn-cu11==8.6.0.163

# setting up CUDA library link
RUN export CUDNN_PATH=$(dirname \
    $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)")) && \
    ln -s ${CUDNN_PATH} ${CONDA_DIR}/lib/cudnn.ln

# setting dynamic link lib paths
ENV LD_LIBRARY_PATH=${CONDA_DIR}/lib/:${CONDA_DIR}/lib/cudnn.ln/lib

# host NVIDIA driver minimum version metadata
LABEL nvidia.driver.minimum_version="450.80.02"

Testing: Initial ABC Tests

Problem

Need to write simple unit tests for the autoencoder.model.base module and the two classes:

  • BaseLayerParams:

    class BaseLayerParams(ABC):
        """Autoencoder layers hyperparameters configuration base class."""

        @abstractmethod
        def __init__(self, **kwargs: Dict[str, Any]) -> None:
            """Defines the argument type that the constructor should accept."""
            pass

        @property
        @abstractmethod
        def default_parameters(self) -> DefaultParams:
            """Defines the required default layer parameters attribute."""
            # NOTE: this dictionary sets layer order used to build the keras.Model
            pass

        def __post_init__(self) -> None:
            """Store updated params and get sequence index slices."""
            # get updated parameters for instance
            self._instance_parameters = self._build_instance_params()

        def _filter_layer_attrs(self) -> Generator[Tuple[str, Dict[str, Any]], None, None]:
            """Filter out layer attributes from class instance."""
            # get constructor signature
            init_sig = signature(self.__class__.__init__)

            # loop over layer_name/params in defaults
            for layer_id in self.default_parameters.keys():
                # now find corresponding layer_id in constructor args
                assert (
                    layer_id in init_sig.parameters.keys()
                ), "Constructor arguments must match default_parameter dict keys."

                # finally get value of constructor args
                yield layer_id, self.__dict__[layer_id]

        def _update_layer_params(
            self,
        ) -> Generator[Tuple[Layer, Dict[str, Any]], None, None]:
            """Update default layer parameters values."""
            # get layer instance attrs and their values
            for attr, value in self._filter_layer_attrs():
                # unpack default parameters
                layer, params = self.default_parameters[attr]

                # check if none
                if value is not None:
                    # merge instance onto default
                    params |= value

                # generate
                yield layer, params

        def _build_instance_params(self) -> Tuple[Tuple[Layer, Dict[str, Any]], ...]:
            """Create immutable sequence of layer params for instance."""
            return tuple(self._update_layer_params())

  • BaseAutoencoder:

    @dataclass
    class BaseAutoencoder(ABC):
        """Autoencoder base class."""

        model_config: Optional[BaseLayerParams] = None

        def __post_init__(self) -> None:
            """Setup autoencoder model."""
            # check if default config used
            if self.model_config is None:
                # get default
                self.model_config = self._default_config

            # build model ...
            self.model = self._build_model()

        @property
        @abstractmethod
        def _default_config(self) -> BaseLayerParams:
            """Defines the default layer parameters attribute."""
            pass

        def _build_model(self) -> keras.Model:
            """Assemble autoencoder from encoder/decoder submodels."""
            # get pointer to instance parameters
            assert self.model_config is not None
            inst_params = self.model_config._instance_parameters

            # build model ...
            return keras.Sequential([layer(**params) for layer, params in inst_params])

        def summary(self, **kwargs: Any) -> None:
            """Wrapper for Keras model.summary method."""
            self.model.summary(**kwargs)

        def compile(self, **kwargs: Any) -> None:
            """Wrapper for Keras model.compile method."""
            self.model.compile(**kwargs)

        def fit(self, **kwargs: Any) -> None:
            """Wrapper for the Keras model.fit method."""
            self.model.fit(**kwargs)
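
A hedged sketch of what the first tests could look like (the tiny concrete subclass below is only a test fixture, not part of the library):

from dataclasses import dataclass
from typing import Any, ClassVar, Dict, Optional

import pytest
from keras import layers

from autoencoder.model.base import BaseAutoencoder
from autoencoder.model.base import BaseLayerParams


@dataclass
class _DenseParams(BaseLayerParams):
    """Tiny concrete subclass used only for testing."""

    default_parameters: ClassVar = {
        "l0": (layers.Dense, {"units": 8, "activation": "relu"}),
    }

    l0: Optional[Dict[str, Any]] = None


def test_abcs_cannot_be_instantiated() -> None:
    """Both base classes should refuse direct instantiation."""
    with pytest.raises(TypeError):
        BaseLayerParams()  # type: ignore[abstract]
    with pytest.raises(TypeError):
        BaseAutoencoder()  # type: ignore[abstract]


def test_instance_params_merge_onto_defaults() -> None:
    """Constructor kwargs should override the class-level defaults."""
    layer, params = _DenseParams(l0={"units": 16})._instance_parameters[0]
    assert layer is layers.Dense
    assert params["units"] == 16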

References

Refactor: Implementing AnomalyDetector

Problem

Currently the autoencoder.data.evaluate.AutoencoderEvaluator class is being used to "evaluate" the autoencoder's reconstruction error on a dataset. But this is simply a precursor to "anomaly detection".

Solution

Need to start looking at refactoring the AutoencoderEvaluator into the AnomalyDetector class ...
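
A hedged sketch of the direction this refactor could take (the class responsibilities and method names below are assumptions):

import numpy as np


class AnomalyDetector:
    """Wraps evaluated reconstruction errors and flags anomalous samples."""

    def __init__(self, errors: np.ndarray, threshold: float) -> None:
        """Store reference per-sample errors and the decision threshold."""
        self.errors = errors
        self.threshold = threshold

    def predict_anomalies(self, errors: np.ndarray) -> np.ndarray:
        """Boolean mask of samples whose error exceeds the threshold."""
        return np.asarray(errors) > self.threshold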

Notebook: Enable Interactivity on Plots

Problem

Currently the plots generated in the notebooks (e.g. from the reconstruction error) are not "zoomable".

Solution

Update the notebooks with the following magic command: %matplotlib widget

References: ChatGPT Conversations

Feature: Model Training Checkpoints

Problem

Currently nothing has been implemented to take advantage of the checkpoint feature of keras. This is an extremely useful and practical feature.

Solution

One tentative solution is to add some checkpoint: bool = False default args to the constructor of BaseAutoencoder ... or something similar.
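
A hedged sketch of how such a flag could be wired into the fit wrapper via keras.callbacks.ModelCheckpoint (the flag name and default checkpoint path are assumptions):

from typing import Any

import keras


def fit_with_checkpoints(
    model: keras.Model,
    checkpoint: bool = False,
    checkpoint_path: str = "checkpoints/autoencoder.keras",
    **kwargs: Any,
) -> None:
    """Wrap model.fit, optionally saving the best weights seen so far."""
    callbacks = list(kwargs.pop("callbacks", []))

    if checkpoint:
        # save only the best model according to the monitored loss
        callbacks.append(
            keras.callbacks.ModelCheckpoint(
                filepath=checkpoint_path,
                save_best_only=True,
            )
        )

    model.fit(callbacks=callbacks, **kwargs)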

Feature: Dummy Model

Problem

When trying to determine the performance of a recently trained autoencoder it can be helpful to have a "baseline" to compare it to. This would require another autoencoder that is just randomly "reconstructing" the output from the inputs (i.e. it is not being trained to "learn" anything about how to compress the data, it is just randomly producing output based on input).

Solution

A dummy model is really what is needed for this specific problem.
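
A hedged sketch of one cheap way to get that baseline: clone the trained architecture with fresh random weights and never train it (the helper name is an assumption):

import keras


def build_dummy_baseline(trained: keras.Model) -> keras.Model:
    """Clone the trained autoencoder's architecture with fresh random weights."""
    # clone_model copies the architecture but re-initialises all weights
    dummy = keras.models.clone_model(trained)

    # no training: the dummy just maps inputs through random weights
    return dummy

Comparing the reconstruction-error distribution of this untrained clone against the trained model then gives the "random" baseline described above.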

References

Design: Initial Package Structure

Problem

Need to choose a package structure that makes sense for future development.

Solution

The tentative solution:

autoencoder/
    |_____ __init__.py
    |_____ model/
            |_____ base.py
            |_____ minimal.py

Feature: Preprocessing

Feature: Mutable Autoencoder

Problem

Might be good to have a single autoencoder class that is completely customizable (i.e. you can not only configure the layer parameters, but also change the layer type).

Solution

The tentative solution:

from .base import BaseLayerParams
from .base import BaseAutoencoder


class MutableLayerParams(BaseLayerParams):
    pass

class MutableAutoencoder(BaseAutoencoder):
    pass
