anuragarnab / caffe-fold-batchnorm
Folds batch normalisation and the following scale layer into a single scale layer for networks trained in Caffe. This can be done at inference time to reduce memory consumption and computation.
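The arithmetic behind the fold can be sketched as follows. This is a hypothetical illustration, not the repository's actual code: Caffe's BatchNorm layer stores a per-channel running mean and variance (plus a moving-average scale factor, omitted here for brevity), and the Scale layer that follows it stores gamma and beta. At inference time the two layers compute `y = gamma * (x - mean) / sqrt(var + eps) + beta`, which is a single affine transform per channel and can therefore be absorbed into one Scale layer:

```python
import math

def fold_batchnorm(mean, var, gamma, beta, eps=1e-5):
    """Return per-channel (scale, bias) for the folded Scale layer.

    Hypothetical helper: given BatchNorm statistics (mean, var) and the
    following Scale layer's parameters (gamma, beta), compute
        scale = gamma / sqrt(var + eps)
        bias  = beta - scale * mean
    so that scale * x + bias == gamma * (x - mean) / sqrt(var + eps) + beta.
    """
    scale = [g / math.sqrt(v + eps) for g, v in zip(gamma, var)]
    bias = [b - s * m for b, s, m in zip(beta, scale, mean)]
    return scale, bias

# Usage: the folded layer reproduces BN + Scale exactly.
mean, var = [0.5, -1.0], [4.0, 0.25]
gamma, beta = [1.5, 2.0], [0.1, -0.3]
scale, bias = fold_batchnorm(mean, var, gamma, beta)

x = [2.0, 3.0]
folded = [s * xi + b for s, xi, b in zip(scale, x, bias)]
bn_then_scale = [g * (xi - m) / math.sqrt(v + 1e-5) + b
                 for g, xi, m, v, b in zip(gamma, x, mean, var, beta)]
assert all(abs(a - b) < 1e-9 for a, b in zip(folded, bn_then_scale))
```

Because the folded layer is just a per-channel multiply and add, the BatchNorm layer's buffers can be dropped from the deployed network entirely.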