
Gradient Centralization


Introduction

  • Gradient Centralization (GC) is a simple and effective optimization technique for Deep Neural Networks (DNNs), which operates directly on gradients by centralizing the gradient vectors to have zero mean. It can both speed up the training process and improve the final generalization performance of DNNs. GC is very simple to implement and can be easily embedded into existing gradient-based DNN optimizers with only a few lines of code. It can also be directly used to fine-tune pre-trained DNNs. Please refer to algorithm-GC for the code of more advanced optimizers.
Illustration of the GC operation on gradient matrix/tensor of weights in the fully-connected layer (left) and convolutional layer (right).
  • GC can be viewed as a projected gradient descent method with a constrained loss function. The constrained loss function and its gradient have better Lipschitz properties, so the training process becomes more efficient and stable. Our experiments on various applications, including general image classification, fine-grained image classification, detection and segmentation, and person ReID, demonstrate that GC can consistently improve the performance of DNN learning.
  • The optimizers are provided in the files SGD.py, Adam.py and Adagrad.py, including SGD_GC, SGD_GCC, SGDW_GCC, Adam_GC, Adam_GCC, Adam_GCC2, AdamW_GCC, AdamW_GCC2 and Adagrad_GCC. The optimizers with "_GC" apply GC to both Conv layers and FC layers, while the optimizers with "_GCC" apply GC only to Conv layers. For adaptive learning rate methods, keeping the mean of the weight vector unchanged usually works better; please refer to Adam_GCC2 and AdamW_GCC2. We can use the following code to import SGD_GC:
from SGD import SGD_GC 
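
For reference, below is a minimal sketch of the GC operation itself and of how the imported optimizer might be used in a standard training loop. The toy network, the hyper-parameter values, and the assumption that SGD_GC takes the same constructor arguments as torch.optim.SGD are illustrative, not part of this repository.

import torch
import torch.nn as nn
import torch.nn.functional as F
from SGD import SGD_GC  # optimizer with Gradient Centralization from this repo

# The core GC step: subtract the per-filter mean so that each gradient slice has
# zero mean. For a Conv weight gradient of shape (C_out, C_in, k, k) the mean is
# taken over all dimensions except dim 0. SGD_GC performs this inside step(),
# so this helper is shown only for illustration.
def centralize_gradient(grad):
    if grad.dim() > 1:
        grad = grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad

# A toy model and one training step (hypothetical example).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
optimizer = SGD_GC(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # gradients are centralized inside step() before the update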

Update

  • 2020/04/07: Released a PyTorch implementation of optimizers with GC, and provided some examples on classification tasks, including general image classification (Mini-ImageNet, CIFAR100 and ImageNet) and fine-grained image classification (FGVC Aircraft, Stanford Cars, Stanford Dogs and CUB-200-2011).

  • 2020/04/14: Released the code of GC on MMdetection and updated some tables of experimental results.

  • 2020/05/07: Released the code of GC on person ReID and showed some results on Market1501.

  • 2020/08/08: Released the code of some advanced optimizers with GC.


Citation

@inproceedings{GradientCentra,
  title={Gradient Centralization: A New Optimization Technique for Deep Neural Networks},
  author={Hongwei Yong and Jianqiang Huang and Xiansheng Hua and Lei Zhang},
  booktitle={the European Conference on Computer Vision},
  year={2020}
}

Link to the other implementation of GC

Experiments


General Image Classification

  • Mini-ImageNet

The codes are in GC_code/Mini_ImageNet. The split dataset can be downloaded from here (Google Drive) or here (Baidu Drive, extraction code: 1681). The following figure shows the training loss (left) and testing accuracy (right) curves vs. training epoch on Mini-ImageNet. ResNet50 is used as the DNN model. The compared optimization techniques include BN, BN+GC, BN+WS and BN+WS+GC.

  • CIFAR100

The codes are in GC_code/CIFAR100.

  • ImageNet

The codes are in GC_code/ImageNet. The following table shows the Top-1 error rates (%) on ImageNet w/o GC and w/ GC:

Backbone R50BN R50GN R101BN R101GN
w/o GC 23.71 24.50 22.37 23.34
w/ GC 23.21 23.53 21.82 22.14

The following figure shows the training error (left) and validation error (right) curves vs. training epoch on ImageNet. The DNN model is ResNet50 with GN.


Fine-grained Image Classification

The codes are in GC_code/Fine-grained_classification. The preprocessed datasets can be downloaded from here. The following table shows the testing accuracies (%) on the four fine-grained image classification datasets with ResNet50:

Datasets FGVC Aircraft Stanford Cars Stanford Dogs CUB-200-2011
w/o GC 86.62 88.66 76.16 82.07
w/ GC 87.77 90.03 78.23 83.40

The following figure shows the training accuracy (solid line) and testing accuracy (dotted line) curves vs. training epoch on the four fine-grained image classification datasets:


Object Detection and Segmentation

The codes are in MMdetection. Please put SGD.py in MMdetection/tools/ and update MMdetection/tools/train.py. Then, if you want to use the SGD_GC optimizer, just update the optimizer entry in the config file. For example, to use SGD_GC to optimize Faster R-CNN with a ResNet50 backbone and FPN, update the 151st line in MMdetection/configs/faster_rcnn_r50_fpn_1x.py (a sketch of such a config entry is shown after the table below). The following table shows the detection results on COCO by using Faster R-CNN and FPN with various backbone models:

Method Backbone AP AP50 AP75 Backbone AP AP50 AP75
w/o GC R50 36.4 58.4 39.1 X101-32x4d 40.1 62.0 43.8
w/ GC R50 37.0 59.0 40.2 X101-32x4d 40.7 62.7 43.9
w/o GC R101 38.5 60.3 41.6 X101-64x4d 41.3 63.3 45.2
w/ GC R101 38.9 60.8 42.2 X101-64x4d 41.6 63.8 45.4
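
As referenced above, here is a sketch of what the optimizer entry in the config file might look like after the change. The exact line number and the default hyper-parameters depend on the MMdetection version; the values below are illustrative assumptions, not confirmed defaults.

# MMdetection/configs/faster_rcnn_r50_fpn_1x.py (sketch)
# Replace the stock SGD entry with the GC variant provided by SGD.py.
optimizer = dict(type='SGD_GC', lr=0.02, momentum=0.9, weight_decay=0.0001)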

The following table shows the detection and segmentation results on COCO by using Mask R-CNN and FPN with various backbone models (APb: bounding-box AP; APm: mask AP):

Method Backbone APb APb50 APb75 APm APm50 APm75
w/o GC R50 37.4 59.0 40.6 34.1 55.5 36.1
w/ GC R50 37.9 59.6 41.2 34.7 56.1 37.0
w/o GC R101 39.4 60.9 43.3 35.9 57.7 38.4
w/ GC R101 40.0 61.5 43.7 36.2 58.1 38.7
w/o GC X101-32x4d 41.1 62.8 45.0 37.1 59.4 39.8
w/ GC X101-32x4d 41.6 63.1 45.5 37.4 59.8 39.9
w/o GC X101-64x4d 42.1 63.8 46.3 38.0 60.6 40.9
w/ GC X101-64x4d 42.8 64.5 46.8 38.4 61.0 41.1
w/o GC R50 (4c1f) 37.5 58.2 41.0 33.9 55.0 36.1
w/ GC R50 (4c1f) 38.4 59.5 41.8 34.6 55.9 36.7
w/o GC R101GN 41.1 61.7 44.9 36.9 58.7 39.3
w/ GC R101GN 41.7 62.3 45.3 37.4 59.3 40.3
w/o GC R50GN+WS 40.0 60.7 43.6 36.1 57.8 38.6
w/ GC R50GN+WS 40.6 61.3 43.9 36.6 58.2 39.1

Person ReID

The codes are in PersonReId. Please put SGD.py in reid-strong-baseline/tools/ and update reid-strong-baseline/solver/build.py. For Market1501, please use the SGD_GCC algorithm with a learning rate of 0.03 or 0.02 and a weight decay of 0.002. For example, you can change the '.sh' file to use the following command:

python3 tools/train.py --config_file='configs/softmax_triplet_with_center.yml' MODEL.DEVICE_ID "('0')" DATASETS.NAMES "('market1501')" DATASETS.ROOT_DIR "('/home/yonghw/data/reid/')" OUTPUT_DIR "('out_dir/market1501/test')" SOLVER.OPTIMIZER_NAME "('SGD_GCC')" SOLVER.BASE_LR "(0.03)" SOLVER.WEIGHT_DECAY "(0.002)" SOLVER.WEIGHT_DECAY_BIAS "(0.002)"

The results on Market1501 without re-ranking are shown in the following table:

Method Backbone mAP Top-1
Adam* R18 77.8 91.7
SGD_GCC R18 81.3 92.7
Adam* R50 85.9 94.5
SGD_GCC R50 86.6 94.8
Adam* R101 87.1 94.5
SGD_GCC R101 87.9 95.0

The results marked with * are those reported by the authors of reid-strong-baseline. Our reproduced results are slightly lower than those provided by the authors.


gradient-centralization's Issues

No improvement

Dear author, is it true that the shallower the network, the smaller the improvement? I applied GC on ResNet with the FER2013 facial expression database. With SGD I get about 72% accuracy, and after switching to SGD_GC it is basically the same, or even slightly lower... I used ResNet-18. Should I also try ResNet-34 for comparison?

Question about semantic segmentation

Hi,
@Yonghongwei

In instance segmentation there is an FC layer used for classification, so Adam_GC should be used.
But the semantic segmentation model I am using has no FC layer, so I should use Adam_GCC.
After I added some attention modules to the semantic segmentation model, it now contains some nn.Linear() layers. Should I use _GCC or _GC?

Thanks for the answer!

incorrect keywords: memory_format

I tried to use Adam_GC, but got the following error.

PyTorch version: 1.3.0

  File "Adam.py", line 82, in step
    state['exp_avg'] = torch.zeros_like(p, memory_format=torch.preserve_format)
TypeError: zeros_like() received an invalid combination of arguments - got (Parameter, memory_format=torch.memory_format), but expected one of:
 * (Tensor input, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
 * (Tensor input, bool requires_grad)
      didn't match because some of the keywords were incorrect: memory_format
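
A possible workaround (a sketch, not a change confirmed by the authors): the memory_format keyword of torch.zeros_like was only added in later PyTorch releases, so on older versions such as 1.3.0 it can simply be dropped. Upgrading PyTorch to a release that supports memory_format also avoids the edit.

# In Adam.py, step(): omit memory_format on older PyTorch versions
state['exp_avg'] = torch.zeros_like(p.data)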

function grad.mean() Error

Sorry, I got an error in the GCC optimizer:

grad.add_(-grad.mean(dim = tuple(range(1,len(list(grad.size())))), keepdim = True))

TypeError: mean received an invalid combination of arguments - got (keepdim=bool, dim=tuple, ), but expected one of:

  • no arguments
  • (int dim)
  • (int dim, bool keepdim)
    didn't match because some of the arguments have invalid types: (dim=tuple, keepdim=bool, )

dim should not be a tuple, right?
My PyTorch version is 0.3.1.
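
Tuple-valued dim for mean() was only added in later PyTorch releases, so this error is expected on 0.3.1. Upgrading PyTorch is the simplest fix; a hedged sketch of an equivalent computation that avoids tuple dims is shown below (flatten everything except dim 0, take the mean, then broadcast it back):

# Equivalent to grad.mean(dim=(1, ..., n-1), keepdim=True) on old PyTorch.
mean = grad.contiguous().view(grad.size(0), -1).mean(1)
mean = mean.view(-1, *([1] * (grad.dim() - 1)))
grad.add_(-mean)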

Can it be used directly in 3D convolutional networks?

At first I applied the author's code directly to a 3D network, but after reading the paper more carefully, the gradient-mean formula seems to be derived for 2D networks. So, for a 3D network, is the line of code that computes the mean still valid?
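
Mechanically, the line in this repository already covers this case, because the mean is taken over every dimension except dim 0; whether that is the theoretically intended behaviour for 3D convolutions is exactly the question above. A small sketch with illustrative shapes:

import torch

grad = torch.randn(64, 32, 3, 3, 3)  # gradient of a Conv3d weight: (C_out, C_in, kD, kH, kW)
mean = grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
print(mean.shape)  # torch.Size([64, 1, 1, 1, 1])
grad_gc = grad - mean  # each output filter's gradient now has zero mean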

Question regarding GC for convolutions

Hi, thanks for providing an easily accessible implementation of your method!

I have a question regarding the implementation of GC for the convolutional layers.
Looking at the figure in the paper, it seems as though when calculating the mean for the convolutional layers, we expect to average over C_in.
So for instance, in a (64, 32, 3, 3) convolution layer, we would expect to get a tensor of size (64, 1, 3, 3). I'm assuming here that k_1 and k_2 in the figure correspond to the kernel size, so in this case they would both be 3.

However, when I look at the implementation in this repository, the tensor produced by GC for centralization is not of size (64, 1, 3, 3), but rather (64, 1, 1, 1). For instance, I put together a quick script with a simple CNN, and looked at the gradients:

>>> model.conv2.weight.shape
torch.Size([32, 16, 3, 3])
>>> dc = model.conv2.weight.grad
>>> dc.shape
torch.Size([32, 16, 3, 3])

# Taking the code from this implementation
>>> mean = dc.mean(dim=tuple(range(1, len(list(dc.size())))), keepdim=True)
>>> mean.shape
torch.Size([32, 1, 1, 1])

The reason for this is dim=tuple(range(1, len(list(dc.size())))). For the conv gradient, the tuple expands to (1, 2, 3), so it averages over everything except C_out. Is this correct? To get a tensor of size (64, 1, 3, 3), which seems to be what the paper figure is showing, we would need to only average over dim=1.
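
For comparison, averaging only over dim=1 (as the paper figure seems to suggest) keeps the kernel dimensions; a quick check in the same session would be:

>>> dc.mean(dim=1, keepdim=True).shape
torch.Size([32, 1, 3, 3])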
