I have built a multilayer neural network from scratch, in plain NumPy and without using PyTorch, TensorFlow, Caffe, or Keras, to perform a multi-class classification task.
- NumPy
- Matplotlib
- Multiple hidden layers
- Tanh activation
- Sigmoid activation
- ReLU activation
- Leaky ReLU (LReLU) activation
- Softmax activation
- L2 weight decay
- Mini-batch SGD
- Momentum
- Dropout
- Cross-entropy loss
- Batch Normalisation
- k-fold cross-validation
- Data normalisation
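The activation functions listed above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's actual code; the function names are my own:

```python
import numpy as np

# Minimal NumPy sketches of the listed activations (illustrative names).

def tanh(z):
    return np.tanh(z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def leaky_relu(z, alpha=0.01):
    # Pass positive inputs through; scale negative inputs by alpha.
    return np.where(z > 0, z, alpha * z)

def softmax(z):
    # Subtract the row max before exponentiating for numerical stability;
    # each row of the output sums to 1.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```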
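Several of the references below cover the softmax-with-cross-entropy gradient, which simplifies to `probs - labels` at the logits. A hedged sketch of that identity (my own function names, assuming one-hot labels and row-wise samples):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, y_onehot, eps=1e-12):
    # Mean negative log-likelihood over the mini-batch; eps avoids log(0).
    n = probs.shape[0]
    return -np.sum(y_onehot * np.log(probs + eps)) / n

def softmax_ce_grad(probs, y_onehot):
    # Combined softmax + cross-entropy gradient w.r.t. the logits:
    # (p - y), averaged over the batch.
    return (probs - y_onehot) / probs.shape[0]
```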
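Mini-batch SGD with momentum and L2 weight decay can be sketched as below; this is a generic illustration of the standard update rules, not the repository's implementation, and all names are illustrative:

```python
import numpy as np

def minibatches(X, y, batch_size, rng):
    # Shuffle once per epoch, then yield consecutive slices as batches.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]
        yield X[b], y[b]

def sgd_momentum_step(w, v, grad, lr=0.01, beta=0.9, weight_decay=1e-4):
    # L2 weight decay adds lambda * w to the gradient; momentum keeps an
    # exponentially decaying velocity of past gradients.
    grad = grad + weight_decay * w
    v = beta * v - lr * grad
    return w + v, v
```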
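The dropout references below describe the "inverted" variant, where activations are rescaled at training time so no correction is needed at test time. A minimal sketch under that assumption (names are mine):

```python
import numpy as np

def dropout_forward(a, keep_prob, rng, train=True):
    # Inverted dropout: zero out units with probability (1 - keep_prob)
    # and scale survivors by 1/keep_prob, so test time needs no rescaling.
    if not train:
        return a, None
    mask = (rng.random(a.shape) < keep_prob) / keep_prob
    return a * mask, mask
```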
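The batch-normalisation forward pass (training mode) amounts to normalising each feature over the mini-batch, then applying a learned scale and shift. A hedged sketch, ignoring the running statistics used at test time:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalise each column (feature) to zero mean and unit variance over
    # the batch, then scale by gamma and shift by beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```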
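k-fold cross-validation partitions the training set into k folds and uses each fold as the validation set once. A minimal index-splitting sketch (illustrative, not the repository's code):

```python
import numpy as np

def kfold_indices(n, k, rng):
    # Shuffle the sample indices once, split them into k roughly equal
    # folds, and yield (train, validation) index pairs, one per fold.
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```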
- Training data- https://drive.google.com/file/d/1cBHIpe9utppjZmn-DROwQN26NA-S0xXE/view?usp=sharing
- Training labels - https://drive.google.com/file/d/1lvEa6xHnLNfnZ5TeYl6voWebIuC0OpaB/view?usp=sharing
- Test data - https://drive.google.com/file/d/1cBHIpe9utppjZmn-DROwQN26NA-S0xXE/view?usp=sharing
- https://towardsdatascience.com/nothing-but-numpy-understanding-creating-binary-classification-neural-networks-with-e746423c8d5c
- https://stats.stackexchange.com/questions/370723/how-to-calculate-the-derivative-of-crossentropy-error-function
- https://deepnotes.io/softmax-crossentropy
- https://www.mldawn.com/binary-classification-from-scratch-using-numpy/
- http://florianmuellerklein.github.io/nn/
- https://gombru.github.io/2018/05/23/cross_entropy_loss/
- https://stats.stackexchange.com/questions/235528/backpropagation-with-softmax-cross-entropy
- https://www.python-course.eu/neural_networks_with_dropout.php
- https://wiseodd.github.io/techblog/2016/06/25/dropout/
- https://stats.stackexchange.com/questions/266968/how-does-minibatch-gradient-descent-update-the-weights-for-each-example-in-a-bat
- https://wiseodd.github.io/techblog/2016/07/04/batchnorm/
- https://gluon.mxnet.io/chapter06_optimization/gd-sgd-scratch.html#Stochastic-gradient-descent
- https://github.com/Erlemar/cs231n_self/blob/master/assignment2/cs231n/layers.py#L116
- https://towardsdatascience.com/lets-code-a-neural-network-in-plain-numpy-ae7e74410795
- https://github.com/jorgenkg/python-neural-network/blob/master/nimblenet/activation_functions.py
- https://chrisyeh96.github.io/2017/08/28/deriving-batchnorm-backprop.html
- https://deepnotes.io/batchnorm
- https://github.com/parasdahal/deepnet/blob/master/deepnet/loss.py