Comments (50)

kyehyeon avatar kyehyeon commented on July 26, 2024 4

@zimenglan-sysu-512 @xiaoxiongli

It's just simple math.
Given the parameters for each layer as:

conv layer: conv_weight, conv_bias
bn layer: bn_mean, bn_variance, num_bn_samples
scale layer: scale_weight, scale_bias

Let us define a vector 'alpha' of scale factors for conv filters:

alpha = scale_weight / sqrt(bn_variance / num_bn_samples + eps)

If we set conv_bias and conv_weight as:

  1. conv_bias = conv_bias * alpha + (scale_bias - (bn_mean / num_bn_samples) * alpha)
  2. for i in range(len(alpha)):
    conv_weight[i] = conv_weight[i] * alpha[i]

Then we get the same result as the original network by setting the bn and scale parameters to:

bn_mean[...] = 0
bn_variance[...] = 1
num_bn_samples = 1

scale_weight[...] = 1
scale_bias[...] = 0

Thus we can simply remove the bn and scale layers.

The code is not open-sourced, but you can easily implement a script to do this.
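For reference, here is a minimal NumPy sketch of the folding described above (the function name and the eps default are assumptions, not the poster's actual script):

    import numpy as np

    def fold_bn_scale_into_conv(conv_weight, conv_bias,
                                bn_mean, bn_variance, num_bn_samples,
                                scale_weight, scale_bias, eps=1e-5):
        # Recover the actual statistics from Caffe's accumulated sums
        mean = bn_mean / num_bn_samples
        var = bn_variance / num_bn_samples
        # Per-filter scale factors, exactly the 'alpha' defined above
        alpha = scale_weight / np.sqrt(var + eps)
        # conv_weight has shape (out_channels, in_channels, kh, kw)
        new_weight = conv_weight * alpha.reshape(-1, 1, 1, 1)
        new_bias = conv_bias * alpha + (scale_bias - mean * alpha)
        return new_weight, new_bias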

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024 3

@hengck23 @sanghoon

using full/test.model: Mean AP = 0.8385, 92 ms on a K40
using full/original.model: Mean AP = 0.8385 (same as above), 110 ms on a K40

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024 3

@sanghoon @hengck23 @swearos Dear sanghoon:
It is very kind of you; your script seems to work fine. Thank you!

And I tested the models/pvanet/full model:

I use GPU K40:
before your script: 110ms

after your script:
without cudnn: 93ms
with cudnn: 91ms (1x1 convolution layers use the Caffe engine)

thank you!^_^

hengck23 avatar hengck23 commented on July 26, 2024 2

@ xiaoxiongli
https://github.com/terrychenism/NeuralNetTests/blob/master/caffe_utils/gen_bn_inference_v2.py
https://github.com/terrychenism/NeuralNetTests/blob/master/caffe_utils/gen_bn_inference.py

    # Absorb the BN parameters (excerpt; `args`, `model` and `to_be_absorbed`
    # are defined earlier in the linked script)
    import numpy as np
    import caffe

    weights = caffe.Net(args.model, args.weights, caffe.TEST)
    for i, layer in enumerate(model.layer):
        if layer.name not in to_be_absorbed:
            continue
        scale, bias, mean, var = [p.data.ravel()
                                  for p in weights.params[layer.name]]

        eps = 1e-5
        invstd = 1. / np.sqrt(var + eps)
        invstd = invstd * scale

        # Walk backwards to find the layer that feeds this BN layer
        for j in xrange(i - 1, -1, -1):  # Python 2 (Caffe-era) code
            bottom_layer = model.layer[j]
            if layer.bottom[0] in bottom_layer.top:
                W, b = weights.params[bottom_layer.name]
                num = W.data.shape[0]
                if bottom_layer.type == 'Convolution':
                    W.data[...] = W.data * invstd.reshape(num, 1, 1, 1)
                    b.data[...] = (b.data[...] - mean) * invstd + bias

sanghoon avatar sanghoon commented on July 26, 2024 2

Hi @hengck23 @xiaoxiongli @swearos

I've committed a simple script to merge 'Conv-BN-Scale' layers into a single Conv layer.
Please check out 39570aa.

Please note that it seems to work correctly; however, I haven't tested it thoroughly.
I'd appreciate it if you could give your feedback on this.

zimenglan-sysu-512 avatar zimenglan-sysu-512 commented on July 26, 2024 2

Hi @sanghoon,
I find that if I change np.finfo(np.double).eps to 1e-5, which is the default eps value in the BN layer, I get the right results.
Thanks.

zimenglan-sysu-512 avatar zimenglan-sysu-512 commented on July 26, 2024 1

@sanghoon Can you share some experience on how to merge the BN and Scale layers into the Conv layer?

PapaMadeleine2022 avatar PapaMadeleine2022 commented on July 26, 2024 1

Can anyone provide some complete example code for TensorFlow on how to merge a conv layer and a bn layer into one conv layer?
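A minimal sketch with tf.keras, assuming a Conv2D (with use_bias=True) immediately followed by a BatchNormalization layer; the function name is hypothetical, and note that Keras stores kernels as (kh, kw, in_ch, out_ch):

    import numpy as np

    def fold_bn_into_conv(conv, bn):
        # conv: tf.keras.layers.Conv2D, bn: tf.keras.layers.BatchNormalization
        # BatchNormalization weights are [gamma, beta, moving_mean, moving_var]
        gamma, beta, mean, var = bn.get_weights()
        kernel, bias = conv.get_weights()
        alpha = gamma / np.sqrt(var + bn.epsilon)
        # Scale each output channel's filter; fold the BN shift into the bias
        conv.set_weights([kernel * alpha.reshape(1, 1, 1, -1),
                          bias * alpha + (beta - mean * alpha)])

After folding, the model still has to be rebuilt with the BatchNormalization layer dropped, since this only rewrites the conv's parameters.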

sanghoon avatar sanghoon commented on July 26, 2024

Hi @weishengchong,

I'm afraid that I can't answer the exact difference between those two options right now.

Talking about training time:
when I fine-tune a network after eliminating the BN and scale layers (by merging, where possible),
the iterations become about 25% faster.

IMO, the impact of removing BN layers won't be that significant during test time.

VincentChong123 avatar VincentChong123 commented on July 26, 2024

Hi @sanghoon,

Thanks for sharing.

For our googlenet_bn, we are trying to merge bn into conv and bias, then
share blob memory between conv layers and merge CBR layers.

quietsmile avatar quietsmile commented on July 26, 2024

Very useful information! Thanks for the discussion.
@weishengchong What is a CBR layer?

VincentChong123 avatar VincentChong123 commented on July 26, 2024

Hi @quietsmile, for merging CBR (convolution, bias, ReLU) layers, refer to NVIDIA GIE figures 3-5.

zimenglan-sysu-512 avatar zimenglan-sysu-512 commented on July 26, 2024

Hi @weishengchong, can you share some information about how to use the optimized model after GIE, especially for the detection task? Thanks.

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@sanghoon Dear sanghoon, I have the same confusion about how to merge the BN and Scale layers into the Conv layer. I read your Caffe code modifications, and I find that your Conv layer code has no modifications compared to the main Caffe branch.

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@kyehyeon Thank you very much! I got it...

Now in the GitHub repo:
For example_train_384\train.prototxt --> original prototxt, so I can train without modifying the conv layers.
For example_finetune\train.prototxt --> BN and Scale layers merged into the Conv layers, so I can NOT train unless I modify Caffe's Conv layer according to what you said above.

Right? ^_^

VincentChong123 avatar VincentChong123 commented on July 26, 2024

@zimenglan-sysu-512

Do you mean how to use GIE for the detection task?

VincentChong123 avatar VincentChong123 commented on July 26, 2024

Hi @sanghoon,

After merging for the 25% speedup, are there any side effects on training accuracy?

zimenglan-sysu-512 avatar zimenglan-sysu-512 commented on July 26, 2024

@weishengchong Yes, I have tried, but failed.

VincentChong123 avatar VincentChong123 commented on July 26, 2024

Hi @zimenglan-sysu-512,

I haven't tried it. What was your GIE error message?

FYI, at GTC Taipei last month, the page below was introduced to TensorRT (= GIE) users.
https://github.com/dusty-nv/jetson-inference

Have you tried reproducing the tutorial results?

sanghoon avatar sanghoon commented on July 26, 2024

@xiaoxiongli
You can still fine-tune the network; you just can't batch-normalize the training data.
However, in Faster R-CNN training, we haven't used batch normalization at all.
The results will be almost the same.

@weishengchong
I haven't compared the two cases.
I guess there will be no harmful effect.
On the contrary, merging the scale-bias layers may improve the resulting accuracy.
It's something I'm planning to try.

zimenglan-sysu-512 avatar zimenglan-sysu-512 commented on July 26, 2024

@weishengchong Yes, I have followed these instructions, but since I want to use it with py-faster-rcnn, I don't know how to do it.

baiyancheng20 avatar baiyancheng20 commented on July 26, 2024

@weishengchong @quietsmile @zimenglan-sysu-512 @xiaoxiongli Have you guys implemented the code to merge the BN layer into the Conv layer?

baiyancheng20 avatar baiyancheng20 commented on July 26, 2024

@sanghoon Could you release the code for merging BN layers into Conv layers and the scripts to generate the prototxt of networks?

hengck23 avatar hengck23 commented on July 26, 2024

@weishengchong,@sanghoon
Here are my comparison results (Python, Windows, GTX 1080):

[image: speed comparison results]

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@hengck23 Dear hengck23, I see your inference result -- 15 ms is really amazing! How did you get this result: using GIE or your own modification?

hengck23 avatar hengck23 commented on July 26, 2024

@ xiaoxiongli
I use the Caffe code from this website, with CUDA 8 / cuDNN 5.1.
I haven't used GIE/TensorRT yet (but will be trying TensorRT this week).

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@hengck23 Which website..? Do you mean pvanet's Caffe branch? As far as I know, pvanet's Caffe does not implement the code for merging the BN/Scale layers into the Convolution layer.

hengck23 avatar hengck23 commented on July 26, 2024

@ xiaoxiongli
There is no code for merging, but both the original and merged models are provided.

hengck23 avatar hengck23 commented on July 26, 2024

@ xiaoxiongli
As a reference, I also provide zfnet speed. I retrained zfnet from Ross's faster-rcnn using pva-faster-rcnn here.

[image: zfnet speed results]

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@hengck23 Dear hengck23, I know that the merged models are provided, but when I carefully read the pvanet Caffe branch code, I find that the Conv layer code has no modifications compared to the main Caffe branch. Do you mean the pvanet Caffe branch code already merges the BN/Scale layers into the Convolution layer? I cannot find where it is...

So if I want to reproduce your 15 ms result, I need to implement the "merge BN/Scale layers into Convolution layer" code myself, am I right?

hengck23 avatar hengck23 commented on July 26, 2024

@ xiaoxiongli
Original prototxt: conv --> bn --> relu --> ....
After merging, test prototxt: conv --> relu --> ....

The conv layer implementation is the same, i.e. the same source code.
But the parameter values change, i.e. the caffemodel file changes.

To reproduce the 15 ms result, just use the new model files: test.pt & test_690K.model
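For example, loading the merged model in pycaffe (paths assumed; see the download links in a later comment):

    import caffe

    caffe.set_mode_gpu()
    # Merged prototxt + merged weights, as named above
    net = caffe.Net('test.pt', 'test_690K.model', caffe.TEST)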

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@hengck23 Dear hengck23, where can I find the new model files "test.pt & test_690K.model" that you mentioned above? Would you please help?

And which "train.pt" file did you use to re-train? ^_^

hengck23 avatar hengck23 commented on July 26, 2024

@ xiaoxiongli
please refer to:
https://github.com/sanghoon/pva-faster-rcnn/blob/master/models/pvanet/download_lite_models.sh
https://github.com/sanghoon/pva-faster-rcnn/tree/master/models/pvanet/lite

models/pvanet/lite/test.model
models/pvanet/lite/original.model
test.pt
original.pt

These are the two models with which I obtained 15 ms and 35 ms respectively.

For zfnet, you have to modify from the original one.

[image: zfnet prototxt snippet]

train.prototxt.txt

karthikmswamy avatar karthikmswamy commented on July 26, 2024

Thanks @sanghoon for sharing this framework with the community.

I set up PVA-Net on my TX1 and ran the lite version successfully.
Out of the box, the net forward pass takes 243 ms. However, when you run the TX1 at max performance, the forward pass takes 184 ms -- a ~60 ms improvement just from running the TX1 at max performance.

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@sanghoon Dear sanghoon, you said that after merging the BN and Scale layers into the Conv layer, training iterations become 25% faster. How about the inference time?

hengck23 avatar hengck23 commented on July 26, 2024

@ xiaoxiongli
https://github.com/e-lab/torch-toolbox/blob/master/BN-absorber/BN-absorber.lua

Batch normalization applies a linear transformation to its input in the evaluation phase, so it can be absorbed into the preceding convolution layer by manipulating that layer's weights and biases.
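In equation form (a sketch in the thread's notation: W and b are the conv parameters, mu and sigma^2 the BN statistics, gamma and beta the Scale layer's weight and bias):

    y = \gamma \frac{(W x + b) - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta
      = (\alpha W) x + \alpha (b - \mu) + \beta,
    \qquad \text{where } \alpha = \frac{\gamma}{\sqrt{\sigma^2 + \epsilon}}

Expanding the new bias alpha * (b - mu) + beta gives b * alpha + (beta - mu * alpha), which is exactly kyehyeon's conv_bias * alpha + (scale_bias - mean * alpha) above, first term included.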

swearos avatar swearos commented on July 26, 2024

@hengck23, thanks for your kind help. I only find three params under the BatchNorm layer in original.pt, but the code you mentioned needs four params, "scale, bias, mean, var". How can I solve this problem?

    layer {
      name: "conv1/bn"
      type: "BatchNorm"
      bottom: "conv1"
      top: "conv1"
      param { lr_mult: 0 decay_mult: 0 } # scale
      param { lr_mult: 0 decay_mult: 0 } # shift/bias
      param { lr_mult: 0 decay_mult: 0 } # global mean
      # global var???
      batch_norm_param { use_global_stats: true }
    }

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@kyehyeon Dear kyehyeon, in your reply you said that:

conv_bias = conv_bias * alpha + (scale_bias - (bn_mean / num_bn_samples) * alpha)

and I wonder how you derived the above formula. In the batch normalization paper, the authors give:

[image: batch normalization equations from the paper]

So what I get is:
conv_bias = (scale_bias - (bn_mean / num_bn_samples) * alpha). How can we get the first term (conv_bias * alpha)?

But in your reply and in hengck23's code above: @sanghoon @hengck23

        if bottom_layer.type == 'Convolution':
            W.data[...] = W.data * invstd.reshape(num, 1, 1, 1)
            b.data[...] = (b.data[...] - mean) * invstd + bias

I don't know what's wrong... I feel so confused. Please help; thank you very much ^_^

hengck23 avatar hengck23 commented on July 26, 2024

@kyehyeon @ xiaoxiongli
Note that:

  • Caffe may not implement the paper directly (I haven't checked the details yet).
  • Be careful of the wording used in different code bases: "scale" in one code base may not be the same thing as in another, or as in the paper.
  • Different versions of Caffe implement the BN layer differently
    (be careful to read the correct documentation).
    If I am not wrong, in the current pvanet:
  • the paper uses "mean, var, gamma, beta"
  • the Caffe BN layer uses "mean, var, scale"
  • the Caffe Scale layer uses "multiplier, offset"
  • the combination of the Caffe BN layer + Scale layer implements the paper's batch normalization

What kyehyeon says above is correct. Please modify the Python code based on his comments.
If you look at "batch_norm_layer.cpp":

      const Dtype scale_factor = this->blobs_[2]->cpu_data()[0] == 0 ?
          0 : 1 / this->blobs_[2]->cpu_data()[0];
      caffe_cpu_scale(variance_.count(), scale_factor,
          this->blobs_[0]->cpu_data(), mean_.mutable_cpu_data());
      caffe_cpu_scale(variance_.count(), scale_factor,
          this->blobs_[1]->cpu_data(), variance_.mutable_cpu_data());

I deduce:

    layer {
      name: "conv1/bn"
      type: "BatchNorm"
      bottom: "conv1"
      top: "conv1"
      param { lr_mult: 0 decay_mult: 0 } # mean
      param { lr_mult: 0 decay_mult: 0 } # var
      param { lr_mult: 0 decay_mult: 0 } # scale
      batch_norm_param { use_global_stats: true }
    }

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@hengck23 Dear hengck23, I agree with you ^_^, but what I care about is how to deduce the formula conv_bias = conv_bias * alpha + (scale_bias - (bn_mean / num_bn_samples) * alpha), especially the first term.

sanghoon avatar sanghoon commented on July 26, 2024

Hi @hengck23 @xiaoxiongli ,
The params in the BatchNorm layer contain the following data, respectively:

  • mean
  • variance
  • normalization factor (for moving average)

The actual mean is computed as (mean) divided by (normalization factor).
I'm working on a short script for merging BN layers.
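A small illustration of recovering the actual statistics from these three blobs in pycaffe (the layer name is assumed), mirroring the batch_norm_layer.cpp snippet quoted above:

    # net.params['conv1/bn'] holds [mean_sum, var_sum, norm_factor]
    mean_sum, var_sum, norm_factor = [p.data for p in net.params['conv1/bn']]
    factor = 0.0 if norm_factor[0] == 0 else 1.0 / norm_factor[0]
    mean = mean_sum * factor  # actual running mean
    var = var_sum * factor    # actual running variance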

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

Dear @hengck23 @sanghoon:

in the code below:
https://github.com/terrychenism/NeuralNetTests/blob/master/caffe_utils/gen_bn_inference_v2.py

    scale, bias, mean, var = [p.data.ravel() for p in weights.params[layer.name]]

I know this code needs some modification, and I can get the scale and bias from Caffe's Scale layer.

But from Caffe's BatchNorm layer I get three parameters: mean, var, and the moving_average_fraction. My question is: how should the moving_average_fraction parameter be used when merging the BN/Scale layers into the Conv layer (absorbing the BN parameters)? Should I just ignore it?

zimenglan-sysu-512 avatar zimenglan-sysu-512 commented on July 26, 2024

@xiaoxiongli Have you tested the performance before and after that?

xiaoxiongli avatar xiaoxiongli commented on July 26, 2024

@zimenglan-sysu-512 mAP is the same; before: 110 ms, after: 93 ms.

hengck23 avatar hengck23 commented on July 26, 2024

@sanghoon
Thank you very much!

maxenceliu avatar maxenceliu commented on July 26, 2024

@xiaoxiongli
Hi, I also implemented my own conv+bn+scale merge code. The inference speed really increases, but not as significantly as yours: about 16% faster than before (63 ms -> 53 ms). The network looks like Google Inception v2.

shuyu0815 avatar shuyu0815 commented on July 26, 2024

@sanghoon @hengck23 @xiaoxiongli @kyehyeon
Hi, I applied the conv+bn+scale merge code to my own model. The inference speed really increases, but the output has a significant shift!
Can anyone give me some suggestions? Thanks!

By the way, I also tried to use the parameters extracted via net.params["layer name"] in 00_classification (in the Caffe examples) to imitate the forward pass of the BatchNorm layer.
I used net.params["layer name"] to extract the bn_mean, bn_variance, and num_bn_samples (moving average fraction) of the BatchNorm layer and used the following formula, but the output differs from the one extracted via net.blobs["conv"] (the output after BN):

(conv_out - bn_mean / num_bn_samples) / sqrt(bn_variance / num_bn_samples)

zimenglan-sysu-512 avatar zimenglan-sysu-512 commented on July 26, 2024

Hi @sanghoon, I used your script to merge the model, but I find that the output of the merged model does not match the original one.

Hi @maxenceliu, can you share your script?

thanks.

pasxalinamed avatar pasxalinamed commented on July 26, 2024

After running ./tools/gen_merged_model.py, it executed correctly, but the resulting model produces detections that make no sense! What went wrong? Detections were fine before.

[screenshots: detection results]

PSlearner avatar PSlearner commented on July 26, 2024

> Can anyone provide some complete example code for TensorFlow on how to merge a conv layer and a bn layer into one conv layer?

Do you have some example code for TensorFlow on how to merge CONV and BN?
