
BioTorch

BioTorch is a PyTorch framework specializing in biologically plausible learning algorithms



BioTorch Provides:

🧠   Implementations of layers, models and biologically motivated learning algorithms. It allows loading existing state-of-the-art models, easy creation of custom models, and automatic conversion of existing models.

🧠   A framework to train, evaluate and benchmark different biologically plausible learning algorithms on a selection of datasets. It is focused on the principles of PyTorch design and research reproducibility. Configuration files that include the choice of a fixed seed and deterministic math and CUDA operations are provided.

🧠   A place for collaboration, sharing ideas and discussion.

Methods Supported

Feedback Alignment

Name                         Mode                        Official Implementations
Feedback Alignment           'fa'                        N/A
Direct Feedback Alignment    'dfa'                       [Torch]
Sign Symmetry                ['usf', 'brsf', 'frsf']     [PyTorch]

Metrics Supported

Layer Weight Alignment       Layer Weight Norm Ratio
(example plots: angles_adam, weight_norm)

Quick Tour

Create a Feedback Alignment (FA) ResNet-18 model

from biotorch.models.fa import resnet18
model = resnet18()
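
The returned model behaves like a regular PyTorch module. As a minimal sketch (assuming the FA variant keeps the standard torchvision ResNet-18 interface with 1000 output classes by default), a forward pass on a dummy batch looks like this:

import torch
from biotorch.models.fa import resnet18

model = resnet18()
model.eval()

# Dummy batch of 2 RGB images at the standard 224x224 ImageNet resolution
x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    logits = model(x)

print(logits.shape)  # expected: torch.Size([2, 1000])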

Create a custom model with uSF layers

import torch.nn as nn
import torch.nn.functional as F
from biotorch.layers.usf import Conv2d, Linear

class Model(nn.Module):
  def __init__(self):
    super(Model, self).__init__()
    self.conv1 = Conv2d(in_channels=64, out_channels=128, kernel_size=3)
    self.fc = Linear(in_features=128, out_features=10)

  def forward(self, x):
    out = F.relu(self.conv1(x))
    # Global average pooling over the spatial dimensions
    out = F.avg_pool2d(out, out.size()[3])
    # Flatten to (batch_size, 128) before the linear layer
    out = out.view(out.size(0), -1)
    return self.fc(out)

model = Model()
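
As with any nn.Module, the custom model can be exercised on a dummy input. The shapes below are illustrative and only need to match the layer definitions above (64 input channels, spatial size larger than the 3x3 kernel):

import torch

# Dummy batch: 4 samples, 64 channels, 8x8 spatial size (illustrative values)
x = torch.randn(4, 64, 8, 8)
out = model(x)

print(out.shape)  # expected: torch.Size([4, 10])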

Automatically convert AlexNet to use the "frSF" algorithm

from torchvision.models import alexnet
from biotorch.module.biomodule import BioModule

model = BioModule(module=alexnet(), mode='frsf')
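
Once converted, the wrapped model can be trained like any other PyTorch module; the sign-symmetry backward pass is handled inside the converted layers. A minimal sketch of a single training step on random data, assuming BioModule delegates forward calls to the wrapped network:

import torch
import torch.nn as nn
from torchvision.models import alexnet
from biotorch.module.biomodule import BioModule

model = BioModule(module=alexnet(), mode='frsf')
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One illustrative training step on random data (AlexNet expects 224x224 inputs)
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 1000, (8,))

optimizer.zero_grad()
outputs = model(images)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()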

Run an experiment on the command line

python benchmark.py --config benchmark_configs/mnist/fa.yaml

If you want the experiment to be reproducible, check that you have specified a seed and that the parameter deterministic is set to true in the YAML configuration file. That will apply all the PyTorch reproducibility steps. If you are running your experiment on a GPU, add the extra environment variable CUBLAS_WORKSPACE_CONFIG.

CUBLAS_WORKSPACE_CONFIG=:4096:8 python benchmark.py --config benchmark_configs/mnist/fa.yaml
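
For reference, the PyTorch reproducibility steps mentioned above correspond roughly to the following sketch (standard PyTorch/NumPy calls; the exact steps applied by the framework may differ):

import random
import numpy as np
import torch

seed = 0  # use the same seed as in your configuration file

random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)

# Force deterministic algorithms; on GPU this requires
# CUBLAS_WORKSPACE_CONFIG=:4096:8 (or :16:8) to be set
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.benchmark = False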

See the documentation to learn more about the configuration file API.

Run an experiment on a Colab Notebook

Installation

The library is hosted on PyPI; you can install it using pip:

pip install biotorch

Or from source:

git clone https://github.com/jsalbert/biotorch.git
cd biotorch
script/setup

Benchmarks

MNIST

CIFAR-10

ImageNet

Contributing

If you want to contribute to the project, please read the CONTRIBUTING section. If you find any bugs, please don't hesitate to report them in the Issues section.

Related paper: Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms

Albert Jiménez Sanfiz, Mohamed Akrout

Backpropagation is the default algorithm for training deep neural networks due to its simplicity, efficiency and high convergence rate. However, its requirements make it impossible to implement in a human brain. In recent years, more biologically plausible learning methods have been proposed. Some of these methods can match backpropagation accuracy, and simultaneously provide other benefits such as faster training on specialized hardware (e.g., ASICs) or higher robustness against adversarial attacks. While interest in the field is growing, there is a need for open-source libraries and toolkits to foster research and benchmark algorithms. In this paper, we present BioTorch, a software framework to create, train, and benchmark biologically motivated neural networks. In addition, we investigate the performance of several feedback alignment methods proposed in the literature, thereby unveiling the importance of the forward and backward weight initialization and optimizer choice. Finally, we provide a novel robustness study of these methods against state-of-the-art white- and black-box adversarial attacks.

Preprint here, feedback welcome!

Contact: [email protected]

If you use our code in your research, you can cite our paper:

@misc{sanfiz2021benchmarking,
      title={Benchmarking the Accuracy and Robustness of Feedback Alignment Algorithms},
      author={Albert Jiménez Sanfiz and Mohamed Akrout},
      year={2021},
      eprint={2108.13446},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Contributors

jsalbert, mahbodnr
