
BReG-NeXt

Implementation of the paper BReG-NeXt: Facial Affect Computing Using Adaptive Residual Networks With Bounded Gradient

The BReG-NeXt paper can be found on IEEE Xplore and arXiv.

[Overview figure]

Requirements

TensorFlow 1.14.0 is recommended for running the code. To install the rest of the required packages, run the following command:

pip install -r requirements.txt

Content

  • tfrecords: Sample tfrecords for training and validation from the FER2013 database (see the inspection sketch below)
  • Snapshots: An example model trained on the AffectNet database with BReG-NeXt-50
  • Logs: The log report of the BReG-NeXt-50 model trained on the AffectNet database
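
For a quick sanity check of the sample data, TensorFlow's record iterator can count the examples in a tfrecord file. This is only a sketch; the file name below is a hypothetical placeholder, so substitute the actual files under the tfrecords folder.

 import tensorflow as tf  # TF 1.14

 # Hypothetical file name; substitute the actual sample file in tfrecords/.
 path = 'tfrecords/train.tfrecords'
 count = sum(1 for _ in tf.io.tf_record_iterator(path))
 print('records in %s: %d' % (path, count))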

Pre-Trained Parameters

As mentioned above, trained parameters for BReG-NeXt-50 on the AffectNet database are provided in the Snapshots folder. That snapshot contains the various trained parameter values; in particular, the adaptive coefficients (alpha and beta) are stored in variables whose names match the following regex:

ResidualBlock[_]*[1-9]*/shortcut_mod[_]*[1-9]*/[a,c][_]*[1-9]*
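
As a sketch of how these variables could be located, TensorFlow 1.x's checkpoint reader can be combined with the regex above. The checkpoint path below is a hypothetical placeholder; point it at whichever snapshot file actually ships in the Snapshots folder.

 import re
 import tensorflow as tf  # TF 1.14

 # Hypothetical checkpoint prefix; substitute the actual file in Snapshots/.
 reader = tf.train.NewCheckpointReader('Snapshots/BReG-NeXt-50.ckpt')
 pattern = re.compile(r'ResidualBlock[_]*[1-9]*/shortcut_mod[_]*[1-9]*/[a,c][_]*[1-9]*')

 # Print every stored variable whose name matches the regex, i.e. the
 # trained adaptive coefficients of each residual block.
 for name in sorted(reader.get_variable_to_shape_map()):
     if pattern.search(name):
         print(name, reader.get_tensor(name))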

How To Run

Simply run the BReG-NeXt.py file:

codes/>> python BReG-NeXt.py

Alternatively, try it on Binder or Google Colab (with a GPU or TPU runtime).

Write Your Own Complex Mapping

To write your own customized complex mapping (subject to the restrictions and properties described in the BReG-NeXt paper), modify Lines 136 to 140 of the BReG-NeXt.py file: write your own function for the mapping and assign its output to the identity variable.

 with tf.name_scope('shortcut_mod'):  # write your customized function here
     multiplier1 = tf.Variable(1, dtype=tf.float32, trainable=True, name='alpha')  # first optional coefficient
     multiplier2 = tf.Variable(1, dtype=tf.float32, trainable=True, name='beta')   # second optional coefficient
     with tf.name_scope('shortcut_mod_function'):
         identity = your_function(multiplier1, multiplier2)  # assign your mapping's output to the 'identity' variable
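
For instance, here is one hypothetical mapping, shown purely as an illustration (it is not the mapping from the paper): a scaled arctangent of the block input. Its gradient with respect to the input, alpha*beta / (1 + (beta*x)^2), is bounded, which is the property the paper requires of the shortcut mapping. The tensor x is assumed to be the residual block's input, in scope at that point in BReG-NeXt.py.

 with tf.name_scope('shortcut_mod'):
     multiplier1 = tf.Variable(1, dtype=tf.float32, trainable=True, name='alpha')
     multiplier2 = tf.Variable(1, dtype=tf.float32, trainable=True, name='beta')
     with tf.name_scope('shortcut_mod_function'):
         # Hypothetical bounded-gradient mapping: alpha * atan(beta * x),
         # where `x` is assumed to be the block's input tensor.
         identity = multiplier1 * tf.atan(multiplier2 * x)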

Citation

All submitted papers (or any publicly available text) that use this code, in whole or in part, must cite the following paper:

B. Hasani, P. S. Negi and M. Mahoor, "BReG-NeXt: Facial affect computing using adaptive residual networks with bounded gradient," in IEEE Transactions on Affective Computing, 2020.

BibTeX:

@ARTICLE{9064942,
  author={B. {Hasani} and P. S. {Negi} and M. {Mahoor}},
  journal={IEEE Transactions on Affective Computing},
  title={BReG-NeXt: Facial affect computing using adaptive residual networks with bounded gradient},
  year={2020},
  volume={},
  number={},
  pages={1-1},
}

