
pva-faster-rcnn's People

Contributors

dectinc, drozdvadym, kevinkit, kyehyeon, rbgirshick, sanghoon, wangdelp


pva-faster-rcnn's Issues

How to run the file ./tools/demo.py

Hi @sanghoon,
I can train and test my own net on my own data with models/example_train_384. Now I want to run demo.py, but it fails with an error like this:
Traceback (most recent call last):
File "./tools/demo.py", line 136, in
_, _= im_detect(net, im)
TypeError: im_detect() takes at least 3 arguments (2 given)

I know that im_detect() is defined in lib/fast_rcnn/test.py. How can I modify the code so that demo.py runs successfully?
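One possible workaround, assuming im_detect() in this fork has the signature im_detect(net, im, boxes) with no default for the third argument (a sketch of a guess, not the maintainer's fix; please check your local copy of lib/fast_rcnn/test.py first):

def detect(net, im):
    # In upstream py-faster-rcnn, im_detect(net, im, boxes=None) falls back to
    # RPN proposals when boxes is None. If this fork dropped the default value,
    # passing None explicitly should restore the same behavior.
    from fast_rcnn.test import im_detect
    scores, boxes = im_detect(net, im, None)  # was: im_detect(net, im)
    return scores, boxes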
Hoping for your reply, thank you!

Question on how to train BN stably

Hi,

The original Faster R-CNN trains with a batch size of 1 or 2.
But a BN (batch normalization) layer needs sufficient statistics to estimate its mean and variance parameters correctly. How did you solve this problem?
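For what it's worth, the prototxts elsewhere on this page freeze BN with use_global_stats: true and train only the Scale layers, which sidesteps the small-batch problem entirely. The numpy sketch below (illustrative only, not repo code) shows how the spread of per-batch variance estimates blows up as the number of samples per batch shrinks:

import numpy as np

# Spread of per-batch variance estimates of a unit-variance population.
rng = np.random.default_rng(0)
population = rng.normal(size=100000)
for batch_size in (2, 32, 256):
    batches = population[:100 * batch_size].reshape(100, batch_size)
    # Standard deviation across the 100 per-batch variance estimates:
    print(batch_size, batches.var(axis=1).std())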

Thank you very much.

Training loss 0

Hey @sanghoon, I tried running the training script on VOC07-12 trainval data with the original ImageNet-pretrained model. However, I am seeing the following:
I1007 00:45:18.758940 7782 solver.cpp:228] Iteration 40, loss = 0.41927
I1007 00:45:18.759007 7782 solver.cpp:244] Train net output #0: cls_loss = 0 (* 1 = 0 loss)
I1007 00:45:18.759021 7782 solver.cpp:244] Train net output #1: loss_bbox = 0 (* 1 = 0 loss)
I1007 00:45:18.759030 7782 solver.cpp:244] Train net output #2: rpn_cls_loss = 0.136419 (* 1 = 0.136419 loss)
I1007 00:45:18.759040 7782 solver.cpp:244] Train net output #3: rpn_loss_bbox = 0.083394 (* 1 = 0.083394 loss)
I1007 00:45:18.759050 7782 sgd_solver.cpp:138] Iteration 40, lr = 0.001
I1007 00:45:29.026012 7782 solver.cpp:228] Iteration 60, loss = 0.324707
I1007 00:45:29.026077 7782 solver.cpp:244] Train net output #0: cls_loss = 0 (* 1 = 0 loss)
I1007 00:45:29.026090 7782 solver.cpp:244] Train net output #1: loss_bbox = 0 (* 1 = 0 loss)
I1007 00:45:29.026103 7782 solver.cpp:244] Train net output #2: rpn_cls_loss = 0.161693 (* 1 = 0.161693 loss)
I1007 00:45:29.026111 7782 solver.cpp:244] Train net output #3: rpn_loss_bbox = 0.288804 (* 1 = 0.288804 loss)

cls_loss and loss_bbox are 0. Do you have any insight into why that would be?
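A debugging sketch, not a known fix: losses that are exactly 0 often mean the R-CNN branch receives no usable ROI labels, so one thing worth checking is the labels blob produced by the proposal-target layer. The blob name follows py-faster-rcnn's train.prototxt, and the solver path is a guess based on this thread; adjust both to your setup:

import caffe
import numpy as np

caffe.set_mode_gpu()
solver = caffe.SGDSolver('models/pvanet/example_train_384/solver.prototxt')
solver.step(1)
labels = solver.net.blobs['labels'].data
print('fg rois:', int(np.sum(labels > 0)), 'of', labels.size)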

Thanks

Why use a PowerLayer for identity mapping?

Hi @sanghoon
I notice that for shortcut connections, a PowerLayer is used as an identity mapping (since both power and scale are set to 1), as follows:

layer {
name: "conv2_3/input"
type: "Power"
bottom: "conv2_2"
top: "conv2_3/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv2_3"
type: "Eltwise"
bottom: "conv2_3/3"
bottom: "conv2_3/input"
top: "conv2_3"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}

I'm wondering if "conv2_3/input" is necessary. Why not take "conv2_2" directly as the bottom input for "conv2_3"?
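For reference, Caffe's Power layer computes y = (shift + scale * x) ^ power, so with power = 1, scale = 1 and shift = 0 it is exactly the identity. A quick numpy check:

import numpy as np

def power_layer(x, power=1.0, scale=1.0, shift=0.0):
    # Caffe PowerLayer: y = (shift + scale * x) ** power
    return (shift + scale * x) ** power

x = np.random.randn(4, 4)
assert np.allclose(power_layer(x), x)  # identity for the default parameters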

MSCOCO mAP

Hi, this is not a code issue, but I was curious whether you have mAP numbers on COCO. The paper only reports PASCAL VOC results.

About fine-tuning on my own data

@sanghoon Hi, sorry to bother you, but I have some problems fine-tuning on my own data. My task is to detect whales in images. When I fine-tune from full/test.model with models/pvanet/example_finetune/solver.prototxt, I get a bad result (why?). When I instead fine-tune with example_train_384/, the result is better, but detection is slow, about 0.18 s per image (GPU: GTX 1070). The example_train_384/test.prototxt has BN layers; is that the reason? Can you give some advice on how to fine-tune on my own data? Thank you in advance!
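On the speed question: when BN runs with use_global_stats: true it is a fixed per-channel affine transform, so at deployment time it can be folded into the preceding convolution. A minimal numpy sketch of the algebra (an illustration of the technique, not the repo's own merging tool):

import numpy as np

def fold_bn(weights, bias, mean, var, gamma, beta, eps=1e-5):
    # Fold y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta into the
    # convolution. weights: (out_ch, in_ch, kh, kw); all others: (out_ch,).
    scale = gamma / np.sqrt(var + eps)
    folded_w = weights * scale[:, None, None, None]
    folded_b = (bias - mean) * scale + beta
    return folded_w, folded_b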

About the order of the ReLU layer and the Dropout layer in the prototxt

Dear @sanghoon,

In PVANET's training prototxt:

layer {
name: "drop6"
type: "Dropout"
bottom: "fc6"
top: "fc6"
dropout_param {
dropout_ratio: 0.5
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "fc6"
top: "fc6"
}

In Faster R-CNN's training prototxt:

layer {
name: "relu6"
type: "ReLU"
bottom: "fc6"
top: "fc6"
}
layer {
name: "drop6"
type: "Dropout"
bottom: "fc6"
top: "fc6"
dropout_param {
dropout_ratio: 0.5
}
}

I am confused: why does PVANET apply Dropout before ReLU, while Faster R-CNN applies ReLU before Dropout? ^_^
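For what it's worth, for a fixed dropout mask the two orders are mathematically equivalent: ReLU maps 0 to 0 and commutes with multiplication by the positive factor 1/(1-p), so relu(dropout(x)) == dropout(relu(x)). A quick numpy check (illustrative only):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
p = 0.5
mask = rng.random(1000) >= p              # one fixed dropout mask

relu = lambda a: np.maximum(a, 0.0)
dropout = lambda a: mask * a / (1.0 - p)  # zero out + rescale, as at train time

assert np.allclose(relu(dropout(x)), dropout(relu(x)))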

Training PVANET Faster R-CNN on PASCAL VOC data

I am trying to train Faster R-CNN with PVANET on the PASCAL VOC dataset using
https://github.com/sanghoon/pva-faster-rcnn. However, I get a dimension mismatch error at conv4_1/incep (a Concat layer).
Below is a snippet of the Caffe log from the training run.
I0923 18:58:11.487720 26638 net.cpp:157] Top shape: 1 48 38 63 (114912)
I0923 18:58:11.487735 26638 net.cpp:165] Memory required for data: 579115860
I0923 18:58:11.487751 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/2_0/relu
I0923 18:58:11.487768 26638 net.cpp:100] Creating Layer conv4_1/incep/2_0/relu
I0923 18:58:11.487782 26638 net.cpp:434] conv4_1/incep/2_0/relu <- conv4_1/incep/2_0
I0923 18:58:11.487794 26638 net.cpp:395] conv4_1/incep/2_0/relu -> conv4_1/incep/2_0 (in-place)
I0923 18:58:11.487808 26638 net.cpp:150] Setting up conv4_1/incep/2_0/relu
I0923 18:58:11.487826 26638 net.cpp:157] Top shape: 1 48 38 63 (114912)
I0923 18:58:11.487836 26638 net.cpp:165] Memory required for data: 579575508
I0923 18:58:11.487851 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/2_1/conv
I0923 18:58:11.487869 26638 net.cpp:100] Creating Layer conv4_1/incep/2_1/conv
I0923 18:58:11.487886 26638 net.cpp:434] conv4_1/incep/2_1/conv <- conv4_1/incep/2_0
I0923 18:58:11.487902 26638 net.cpp:408] conv4_1/incep/2_1/conv -> conv4_1/incep/2_1
I0923 18:58:11.488270 26638 net.cpp:150] Setting up conv4_1/incep/2_1/conv
I0923 18:58:11.488294 26638 net.cpp:157] Top shape: 1 48 38 63 (114912)
I0923 18:58:11.488306 26638 net.cpp:165] Memory required for data: 580035156
I0923 18:58:11.488320 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/2_1/relu
I0923 18:58:11.488342 26638 net.cpp:100] Creating Layer conv4_1/incep/2_1/relu
I0923 18:58:11.488358 26638 net.cpp:434] conv4_1/incep/2_1/relu <- conv4_1/incep/2_1
I0923 18:58:11.488371 26638 net.cpp:395] conv4_1/incep/2_1/relu -> conv4_1/incep/2_1 (in-place)
I0923 18:58:11.488390 26638 net.cpp:150] Setting up conv4_1/incep/2_1/relu
I0923 18:58:11.488404 26638 net.cpp:157] Top shape: 1 48 38 63 (114912)
I0923 18:58:11.488415 26638 net.cpp:165] Memory required for data: 580494804
I0923 18:58:11.488425 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/pool
I0923 18:58:11.488440 26638 net.cpp:100] Creating Layer conv4_1/incep/pool
I0923 18:58:11.488456 26638 net.cpp:434] conv4_1/incep/pool <- conv3_4_conv3_4_0_split_3
I0923 18:58:11.488471 26638 net.cpp:408] conv4_1/incep/pool -> conv4_1/incep/pool
I0923 18:58:11.488524 26638 net.cpp:150] Setting up conv4_1/incep/pool
I0923 18:58:11.488546 26638 net.cpp:157] Top shape: 1 128 37 62 (293632)
I0923 18:58:11.488559 26638 net.cpp:165] Memory required for data: 581669332
I0923 18:58:11.488574 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/poolproj/conv
I0923 18:58:11.488592 26638 net.cpp:100] Creating Layer conv4_1/incep/poolproj/conv
I0923 18:58:11.488610 26638 net.cpp:434] conv4_1/incep/poolproj/conv <- conv4_1/incep/pool
I0923 18:58:11.488623 26638 net.cpp:408] conv4_1/incep/poolproj/conv -> conv4_1/incep/poolproj
I0923 18:58:11.488951 26638 net.cpp:150] Setting up conv4_1/incep/poolproj/conv
I0923 18:58:11.488976 26638 net.cpp:157] Top shape: 1 128 37 62 (293632)
I0923 18:58:11.488986 26638 net.cpp:165] Memory required for data: 582843860
I0923 18:58:11.489001 26638 layer_factory.hpp:77] Creating layer conv4_1/incep/poolproj/relu
I0923 18:58:11.489019 26638 net.cpp:100] Creating Layer conv4_1/incep/poolproj/relu
I0923 18:58:11.489032 26638 net.cpp:434] conv4_1/incep/poolproj/relu <- conv4_1/incep/poolproj
I0923 18:58:11.489045 26638 net.cpp:395] conv4_1/incep/poolproj/relu -> conv4_1/incep/poolproj (in-place)
I0923 18:58:11.489065 26638 net.cpp:150] Setting up conv4_1/incep/poolproj/relu
I0923 18:58:11.489079 26638 net.cpp:157] Top shape: 1 128 37 62 (293632)
I0923 18:58:11.489089 26638 net.cpp:165] Memory required for data: 584018388
I0923 18:58:11.489100 26638 layer_factory.hpp:77] Creating layer conv4_1/incep
I0923 18:58:11.489112 26638 net.cpp:100] Creating Layer conv4_1/incep
I0923 18:58:11.489140 26638 net.cpp:434] conv4_1/incep <- conv4_1/incep/0
I0923 18:58:11.489151 26638 net.cpp:434] conv4_1/incep <- conv4_1/incep/1_0
I0923 18:58:11.489168 26638 net.cpp:434] conv4_1/incep <- conv4_1/incep/2_1
I0923 18:58:11.489179 26638 net.cpp:434] conv4_1/incep <- conv4_1/incep/poolproj
I0923 18:58:11.489197 26638 net.cpp:408] conv4_1/incep -> conv4_1/incep
F0923 18:58:11.489228 26638 concat_layer.cpp:42] Check failed: top_shape[j] == bottom[i]->shape(j) (38 vs. 37) All inputs must have the same shape, except at concat_axis

It looks like the output dimensions of the pooling layer and the conv layer differ by 1. Any help would be appreciated.
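A possible explanation (an assumption, not a confirmed diagnosis): Caffe rounds pooling output sizes up (ceil) but convolution output sizes down (floor), so a stride-2 pool and a stride-2 conv over an odd-sized map can disagree by one pixel. The shapes in the log above are consistent with that:

import math

def conv_out(size, kernel, stride, pad):
    return (size + 2 * pad - kernel) // stride + 1            # Caffe conv: floor

def pool_out(size, kernel, stride, pad):
    return math.ceil((size + 2 * pad - kernel) / stride) + 1  # Caffe pool: ceil

# An input map of 75 x 125 reproduces the 38x63 vs 37x62 tops in the log:
for size in (75, 125):
    print(conv_out(size, 3, 2, 1), pool_out(size, 3, 2, 0))

If that is the cause, resizing or padding the input image so its dimensions divide evenly through every stride-2 stage (for example, to multiples of 32) should keep the two branches aligned.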

Thank you

About base_size

I use an upsampling layer to upsample the hyper feature layer, so base_size will be 8 and feat_stride will be 8. But it does not work as I was expecting. Below is my prototxt; what's wrong with it?
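As a sanity check on the feat_stride value, here is a sketch under the assumption that feat_stride equals the backbone's total stride divided by the upsampling factor (the strides are read from the prototxt below; the upsampling factor of 4 is hypothetical and must match the actual upsampling layer):

import math

# Stride-2 stages in the backbone below: conv1, conv2, conv3, inc3a, inc4a.
strides = [2, 2, 2, 2, 2]
total = math.prod(strides)  # 32x total downsampling
upsample = 4                # hypothetical upsampling factor of the hyper layer
print(total // upsample)    # must equal feat_stride (8, as claimed above)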
name: "PVANET-lite"

################################################################################
# Input
################################################################################

layer {
name: 'input-data'
type: 'Python'
top: 'data'
top: 'im_info'
top: 'gt_boxes'
include { phase: TRAIN }
python_param {
module: 'roi_data_layer.layer'
layer: 'RoIDataLayer'
param_str: "'num_classes': 21"
}
}

#layer {
# name: "input-data"
# type: "DummyData"
# top: "data"
# top: "im_info"
# include { phase: TEST }
# dummy_data_param {
# shape { dim: 1 dim: 3 dim: 640 dim: 1056 }
# shape { dim: 1 dim: 4 }
# }
#}

################################################################################
# Conv 1
################################################################################
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32
kernel_size: 4 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "conv1/bn"
type: "BatchNorm"
bottom: "conv1"
top: "conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "conv1/scale"
type: "Scale"
bottom: "conv1"
top: "conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}

################################################################################
# Conv 2
################################################################################
layer {
name: "conv2"
type: "Convolution"
bottom: "conv1"
top: "conv2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 48
kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "conv2/bn"
type: "BatchNorm"
bottom: "conv2"
top: "conv2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "conv2/scale"
type: "Scale"
bottom: "conv2"
top: "conv2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}

################################################################################
# Conv 3
################################################################################
layer {
name: "conv3"
type: "Convolution"
bottom: "conv2"
top: "conv3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96
kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "conv3/bn"
type: "BatchNorm"
bottom: "conv3"
top: "conv3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "conv3/scale"
type: "Scale"
bottom: "conv3"
top: "conv3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}

################################################################################
# Inception 3a
################################################################################
layer {
name: "inc3a/pool1"
type: "Pooling"
bottom: "conv3"
top: "inc3a/pool1"
pooling_param {
kernel_size: 3 stride: 2 pad: 0
pool: MAX
}
}
layer {
name: "inc3a/conv1"
type: "Convolution"
bottom: "inc3a/pool1"
top: "inc3a/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv1/bn"
type: "BatchNorm"
bottom: "inc3a/conv1"
top: "inc3a/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv1/scale"
type: "Scale"
bottom: "inc3a/conv1"
top: "inc3a/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu1"
type: "ReLU"
bottom: "inc3a/conv1"
top: "inc3a/conv1"
}
layer {
name: "inc3a/conv3_1"
type: "Convolution"
bottom: "conv3"
top: "inc3a/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3a/conv3_1"
top: "inc3a/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv3_1/scale"
type: "Scale"
bottom: "inc3a/conv3_1"
top: "inc3a/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu3_1"
type: "ReLU"
bottom: "inc3a/conv3_1"
top: "inc3a/conv3_1"
}
layer {
name: "inc3a/conv3_2"
type: "Convolution"
bottom: "inc3a/conv3_1"
top: "inc3a/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3a/conv3_2"
top: "inc3a/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv3_2/scale"
type: "Scale"
bottom: "inc3a/conv3_2"
top: "inc3a/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu3_2"
type: "ReLU"
bottom: "inc3a/conv3_2"
top: "inc3a/conv3_2"
}
layer {
name: "inc3a/conv5_1"
type: "Convolution"
bottom: "conv3"
top: "inc3a/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3a/conv5_1"
top: "inc3a/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv5_1/scale"
type: "Scale"
bottom: "inc3a/conv5_1"
top: "inc3a/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu5_1"
type: "ReLU"
bottom: "inc3a/conv5_1"
top: "inc3a/conv5_1"
}
layer {
name: "inc3a/conv5_2"
type: "Convolution"
bottom: "inc3a/conv5_1"
top: "inc3a/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3a/conv5_2"
top: "inc3a/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv5_2/scale"
type: "Scale"
bottom: "inc3a/conv5_2"
top: "inc3a/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu5_2"
type: "ReLU"
bottom: "inc3a/conv5_2"
top: "inc3a/conv5_2"
}
layer {
name: "inc3a/conv5_3"
type: "Convolution"
bottom: "inc3a/conv5_2"
top: "inc3a/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3a/conv5_3"
top: "inc3a/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv5_3/scale"
type: "Scale"
bottom: "inc3a/conv5_3"
top: "inc3a/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu5_3"
type: "ReLU"
bottom: "inc3a/conv5_3"
top: "inc3a/conv5_3"
}
layer {
name: "inc3a"
type: "Concat"
bottom: "inc3a/conv1"
bottom: "inc3a/conv3_2"
bottom: "inc3a/conv5_3"
top: "inc3a"
}

################################################################################
# Inception 3b
################################################################################
layer {
name: "inc3b/conv1"
type: "Convolution"
bottom: "inc3a"
top: "inc3b/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv1/bn"
type: "BatchNorm"
bottom: "inc3b/conv1"
top: "inc3b/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv1/scale"
type: "Scale"
bottom: "inc3b/conv1"
top: "inc3b/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu1"
type: "ReLU"
bottom: "inc3b/conv1"
top: "inc3b/conv1"
}
layer {
name: "inc3b/conv3_1"
type: "Convolution"
bottom: "inc3a"
top: "inc3b/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3b/conv3_1"
top: "inc3b/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv3_1/scale"
type: "Scale"
bottom: "inc3b/conv3_1"
top: "inc3b/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu3_1"
type: "ReLU"
bottom: "inc3b/conv3_1"
top: "inc3b/conv3_1"
}
layer {
name: "inc3b/conv3_2"
type: "Convolution"
bottom: "inc3b/conv3_1"
top: "inc3b/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3b/conv3_2"
top: "inc3b/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv3_2/scale"
type: "Scale"
bottom: "inc3b/conv3_2"
top: "inc3b/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu3_2"
type: "ReLU"
bottom: "inc3b/conv3_2"
top: "inc3b/conv3_2"
}
layer {
name: "inc3b/conv5_1"
type: "Convolution"
bottom: "inc3a"
top: "inc3b/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3b/conv5_1"
top: "inc3b/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv5_1/scale"
type: "Scale"
bottom: "inc3b/conv5_1"
top: "inc3b/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu5_1"
type: "ReLU"
bottom: "inc3b/conv5_1"
top: "inc3b/conv5_1"
}
layer {
name: "inc3b/conv5_2"
type: "Convolution"
bottom: "inc3b/conv5_1"
top: "inc3b/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3b/conv5_2"
top: "inc3b/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv5_2/scale"
type: "Scale"
bottom: "inc3b/conv5_2"
top: "inc3b/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu5_2"
type: "ReLU"
bottom: "inc3b/conv5_2"
top: "inc3b/conv5_2"
}
layer {
name: "inc3b/conv5_3"
type: "Convolution"
bottom: "inc3b/conv5_2"
top: "inc3b/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3b/conv5_3"
top: "inc3b/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv5_3/scale"
type: "Scale"
bottom: "inc3b/conv5_3"
top: "inc3b/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu5_3"
type: "ReLU"
bottom: "inc3b/conv5_3"
top: "inc3b/conv5_3"
}
layer {
name: "inc3b"
type: "Concat"
bottom: "inc3b/conv1"
bottom: "inc3b/conv3_2"
bottom: "inc3b/conv5_3"
top: "inc3b"
}

################################################################################
# Inception 3c
################################################################################
layer {
name: "inc3c/conv1"
type: "Convolution"
bottom: "inc3b"
top: "inc3c/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv1/bn"
type: "BatchNorm"
bottom: "inc3c/conv1"
top: "inc3c/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv1/scale"
type: "Scale"
bottom: "inc3c/conv1"
top: "inc3c/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu1"
type: "ReLU"
bottom: "inc3c/conv1"
top: "inc3c/conv1"
}
layer {
name: "inc3c/conv3_1"
type: "Convolution"
bottom: "inc3b"
top: "inc3c/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3c/conv3_1"
top: "inc3c/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv3_1/scale"
type: "Scale"
bottom: "inc3c/conv3_1"
top: "inc3c/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu3_1"
type: "ReLU"
bottom: "inc3c/conv3_1"
top: "inc3c/conv3_1"
}
layer {
name: "inc3c/conv3_2"
type: "Convolution"
bottom: "inc3c/conv3_1"
top: "inc3c/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3c/conv3_2"
top: "inc3c/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv3_2/scale"
type: "Scale"
bottom: "inc3c/conv3_2"
top: "inc3c/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu3_2"
type: "ReLU"
bottom: "inc3c/conv3_2"
top: "inc3c/conv3_2"
}
layer {
name: "inc3c/conv5_1"
type: "Convolution"
bottom: "inc3b"
top: "inc3c/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3c/conv5_1"
top: "inc3c/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv5_1/scale"
type: "Scale"
bottom: "inc3c/conv5_1"
top: "inc3c/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu5_1"
type: "ReLU"
bottom: "inc3c/conv5_1"
top: "inc3c/conv5_1"
}
layer {
name: "inc3c/conv5_2"
type: "Convolution"
bottom: "inc3c/conv5_1"
top: "inc3c/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3c/conv5_2"
top: "inc3c/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv5_2/scale"
type: "Scale"
bottom: "inc3c/conv5_2"
top: "inc3c/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu5_2"
type: "ReLU"
bottom: "inc3c/conv5_2"
top: "inc3c/conv5_2"
}
layer {
name: "inc3c/conv5_3"
type: "Convolution"
bottom: "inc3c/conv5_2"
top: "inc3c/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3c/conv5_3"
top: "inc3c/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv5_3/scale"
type: "Scale"
bottom: "inc3c/conv5_3"
top: "inc3c/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu5_3"
type: "ReLU"
bottom: "inc3c/conv5_3"
top: "inc3c/conv5_3"
}
layer {
name: "inc3c"
type: "Concat"
bottom: "inc3c/conv1"
bottom: "inc3c/conv3_2"
bottom: "inc3c/conv5_3"
top: "inc3c"
}

################################################################################
# Inception 3d
################################################################################
layer {
name: "inc3d/conv1"
type: "Convolution"
bottom: "inc3c"
top: "inc3d/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv1/bn"
type: "BatchNorm"
bottom: "inc3d/conv1"
top: "inc3d/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv1/scale"
type: "Scale"
bottom: "inc3d/conv1"
top: "inc3d/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu1"
type: "ReLU"
bottom: "inc3d/conv1"
top: "inc3d/conv1"
}
layer {
name: "inc3d/conv3_1"
type: "Convolution"
bottom: "inc3c"
top: "inc3d/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3d/conv3_1"
top: "inc3d/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv3_1/scale"
type: "Scale"
bottom: "inc3d/conv3_1"
top: "inc3d/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu3_1"
type: "ReLU"
bottom: "inc3d/conv3_1"
top: "inc3d/conv3_1"
}
layer {
name: "inc3d/conv3_2"
type: "Convolution"
bottom: "inc3d/conv3_1"
top: "inc3d/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3d/conv3_2"
top: "inc3d/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv3_2/scale"
type: "Scale"
bottom: "inc3d/conv3_2"
top: "inc3d/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu3_2"
type: "ReLU"
bottom: "inc3d/conv3_2"
top: "inc3d/conv3_2"
}
layer {
name: "inc3d/conv5_1"
type: "Convolution"
bottom: "inc3c"
top: "inc3d/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3d/conv5_1"
top: "inc3d/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv5_1/scale"
type: "Scale"
bottom: "inc3d/conv5_1"
top: "inc3d/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu5_1"
type: "ReLU"
bottom: "inc3d/conv5_1"
top: "inc3d/conv5_1"
}
layer {
name: "inc3d/conv5_2"
type: "Convolution"
bottom: "inc3d/conv5_1"
top: "inc3d/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3d/conv5_2"
top: "inc3d/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv5_2/scale"
type: "Scale"
bottom: "inc3d/conv5_2"
top: "inc3d/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu5_2"
type: "ReLU"
bottom: "inc3d/conv5_2"
top: "inc3d/conv5_2"
}
layer {
name: "inc3d/conv5_3"
type: "Convolution"
bottom: "inc3d/conv5_2"
top: "inc3d/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3d/conv5_3"
top: "inc3d/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv5_3/scale"
type: "Scale"
bottom: "inc3d/conv5_3"
top: "inc3d/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu5_3"
type: "ReLU"
bottom: "inc3d/conv5_3"
top: "inc3d/conv5_3"
}
layer {
name: "inc3d"
type: "Concat"
bottom: "inc3d/conv1"
bottom: "inc3d/conv3_2"
bottom: "inc3d/conv5_3"
top: "inc3d"
}

################################################################################
# Inception 3e
################################################################################
layer {
name: "inc3e/conv1"
type: "Convolution"
bottom: "inc3d"
top: "inc3e/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv1/bn"
type: "BatchNorm"
bottom: "inc3e/conv1"
top: "inc3e/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv1/scale"
type: "Scale"
bottom: "inc3e/conv1"
top: "inc3e/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu1"
type: "ReLU"
bottom: "inc3e/conv1"
top: "inc3e/conv1"
}
layer {
name: "inc3e/conv3_1"
type: "Convolution"
bottom: "inc3d"
top: "inc3e/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3e/conv3_1"
top: "inc3e/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv3_1/scale"
type: "Scale"
bottom: "inc3e/conv3_1"
top: "inc3e/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu3_1"
type: "ReLU"
bottom: "inc3e/conv3_1"
top: "inc3e/conv3_1"
}
layer {
name: "inc3e/conv3_2"
type: "Convolution"
bottom: "inc3e/conv3_1"
top: "inc3e/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3e/conv3_2"
top: "inc3e/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv3_2/scale"
type: "Scale"
bottom: "inc3e/conv3_2"
top: "inc3e/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu3_2"
type: "ReLU"
bottom: "inc3e/conv3_2"
top: "inc3e/conv3_2"
}
layer {
name: "inc3e/conv5_1"
type: "Convolution"
bottom: "inc3d"
top: "inc3e/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3e/conv5_1"
top: "inc3e/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv5_1/scale"
type: "Scale"
bottom: "inc3e/conv5_1"
top: "inc3e/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu5_1"
type: "ReLU"
bottom: "inc3e/conv5_1"
top: "inc3e/conv5_1"
}
layer {
name: "inc3e/conv5_2"
type: "Convolution"
bottom: "inc3e/conv5_1"
top: "inc3e/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3e/conv5_2"
top: "inc3e/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv5_2/scale"
type: "Scale"
bottom: "inc3e/conv5_2"
top: "inc3e/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu5_2"
type: "ReLU"
bottom: "inc3e/conv5_2"
top: "inc3e/conv5_2"
}
layer {
name: "inc3e/conv5_3"
type: "Convolution"
bottom: "inc3e/conv5_2"
top: "inc3e/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3e/conv5_3"
top: "inc3e/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv5_3/scale"
type: "Scale"
bottom: "inc3e/conv5_3"
top: "inc3e/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu5_3"
type: "ReLU"
bottom: "inc3e/conv5_3"
top: "inc3e/conv5_3"
}
layer {
name: "inc3e"
type: "Concat"
bottom: "inc3e/conv1"
bottom: "inc3e/conv3_2"
bottom: "inc3e/conv5_3"
top: "inc3e"
}

################################################################################
# Inception 4a
################################################################################
layer {
name: "inc4a/pool1"
type: "Pooling"
bottom: "inc3e"
top: "inc4a/pool1"
pooling_param {
kernel_size: 3 stride: 2 pad: 0
pool: MAX
}
}
layer {
name: "inc4a/conv1"
type: "Convolution"
bottom: "inc4a/pool1"
top: "inc4a/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv1/bn"
type: "BatchNorm"
bottom: "inc4a/conv1"
top: "inc4a/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv1/scale"
type: "Scale"
bottom: "inc4a/conv1"
top: "inc4a/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu1"
type: "ReLU"
bottom: "inc4a/conv1"
top: "inc4a/conv1"
}
layer {
name: "inc4a/conv3_1"
type: "Convolution"
bottom: "inc3e"
top: "inc4a/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4a/conv3_1"
top: "inc4a/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv3_1/scale"
type: "Scale"
bottom: "inc4a/conv3_1"
top: "inc4a/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu3_1"
type: "ReLU"
bottom: "inc4a/conv3_1"
top: "inc4a/conv3_1"
}
layer {
name: "inc4a/conv3_2"
type: "Convolution"
bottom: "inc4a/conv3_1"
top: "inc4a/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4a/conv3_2"
top: "inc4a/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv3_2/scale"
type: "Scale"
bottom: "inc4a/conv3_2"
top: "inc4a/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu3_2"
type: "ReLU"
bottom: "inc4a/conv3_2"
top: "inc4a/conv3_2"
}
layer {
name: "inc4a/conv5_1"
type: "Convolution"
bottom: "inc3e"
top: "inc4a/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4a/conv5_1"
top: "inc4a/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv5_1/scale"
type: "Scale"
bottom: "inc4a/conv5_1"
top: "inc4a/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu5_1"
type: "ReLU"
bottom: "inc4a/conv5_1"
top: "inc4a/conv5_1"
}
layer {
name: "inc4a/conv5_2"
type: "Convolution"
bottom: "inc4a/conv5_1"
top: "inc4a/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4a/conv5_2"
top: "inc4a/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv5_2/scale"
type: "Scale"
bottom: "inc4a/conv5_2"
top: "inc4a/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu5_2"
type: "ReLU"
bottom: "inc4a/conv5_2"
top: "inc4a/conv5_2"
}
layer {
name: "inc4a/conv5_3"
type: "Convolution"
bottom: "inc4a/conv5_2"
top: "inc4a/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4a/conv5_3"
top: "inc4a/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv5_3/scale"
type: "Scale"
bottom: "inc4a/conv5_3"
top: "inc4a/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu5_3"
type: "ReLU"
bottom: "inc4a/conv5_3"
top: "inc4a/conv5_3"
}
layer {
name: "inc4a"
type: "Concat"
bottom: "inc4a/conv1"
bottom: "inc4a/conv3_2"
bottom: "inc4a/conv5_3"
top: "inc4a"
}

################################################################################
# Inception 4b
################################################################################
layer {
name: "inc4b/conv1"
type: "Convolution"
bottom: "inc4a"
top: "inc4b/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv1/bn"
type: "BatchNorm"
bottom: "inc4b/conv1"
top: "inc4b/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv1/scale"
type: "Scale"
bottom: "inc4b/conv1"
top: "inc4b/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu1"
type: "ReLU"
bottom: "inc4b/conv1"
top: "inc4b/conv1"
}
layer {
name: "inc4b/conv3_1"
type: "Convolution"
bottom: "inc4a"
top: "inc4b/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4b/conv3_1"
top: "inc4b/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv3_1/scale"
type: "Scale"
bottom: "inc4b/conv3_1"
top: "inc4b/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu3_1"
type: "ReLU"
bottom: "inc4b/conv3_1"
top: "inc4b/conv3_1"
}
layer {
name: "inc4b/conv3_2"
type: "Convolution"
bottom: "inc4b/conv3_1"
top: "inc4b/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4b/conv3_2"
top: "inc4b/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv3_2/scale"
type: "Scale"
bottom: "inc4b/conv3_2"
top: "inc4b/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu3_2"
type: "ReLU"
bottom: "inc4b/conv3_2"
top: "inc4b/conv3_2"
}
layer {
name: "inc4b/conv5_1"
type: "Convolution"
bottom: "inc4a"
top: "inc4b/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4b/conv5_1"
top: "inc4b/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv5_1/scale"
type: "Scale"
bottom: "inc4b/conv5_1"
top: "inc4b/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu5_1"
type: "ReLU"
bottom: "inc4b/conv5_1"
top: "inc4b/conv5_1"
}
layer {
name: "inc4b/conv5_2"
type: "Convolution"
bottom: "inc4b/conv5_1"
top: "inc4b/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4b/conv5_2"
top: "inc4b/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv5_2/scale"
type: "Scale"
bottom: "inc4b/conv5_2"
top: "inc4b/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu5_2"
type: "ReLU"
bottom: "inc4b/conv5_2"
top: "inc4b/conv5_2"
}
layer {
name: "inc4b/conv5_3"
type: "Convolution"
bottom: "inc4b/conv5_2"
top: "inc4b/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4b/conv5_3"
top: "inc4b/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv5_3/scale"
type: "Scale"
bottom: "inc4b/conv5_3"
top: "inc4b/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu5_3"
type: "ReLU"
bottom: "inc4b/conv5_3"
top: "inc4b/conv5_3"
}
layer {
name: "inc4b"
type: "Concat"
bottom: "inc4b/conv1"
bottom: "inc4b/conv3_2"
bottom: "inc4b/conv5_3"
top: "inc4b"
}

################################################################################
# Inception 4c
################################################################################
layer {
name: "inc4c/conv1"
type: "Convolution"
bottom: "inc4b"
top: "inc4c/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv1/bn"
type: "BatchNorm"
bottom: "inc4c/conv1"
top: "inc4c/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv1/scale"
type: "Scale"
bottom: "inc4c/conv1"
top: "inc4c/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu1"
type: "ReLU"
bottom: "inc4c/conv1"
top: "inc4c/conv1"
}
layer {
name: "inc4c/conv3_1"
type: "Convolution"
bottom: "inc4b"
top: "inc4c/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4c/conv3_1"
top: "inc4c/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv3_1/scale"
type: "Scale"
bottom: "inc4c/conv3_1"
top: "inc4c/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu3_1"
type: "ReLU"
bottom: "inc4c/conv3_1"
top: "inc4c/conv3_1"
}
layer {
name: "inc4c/conv3_2"
type: "Convolution"
bottom: "inc4c/conv3_1"
top: "inc4c/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4c/conv3_2"
top: "inc4c/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv3_2/scale"
type: "Scale"
bottom: "inc4c/conv3_2"
top: "inc4c/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu3_2"
type: "ReLU"
bottom: "inc4c/conv3_2"
top: "inc4c/conv3_2"
}
layer {
name: "inc4c/conv5_1"
type: "Convolution"
bottom: "inc4b"
top: "inc4c/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4c/conv5_1"
top: "inc4c/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv5_1/scale"
type: "Scale"
bottom: "inc4c/conv5_1"
top: "inc4c/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu5_1"
type: "ReLU"
bottom: "inc4c/conv5_1"
top: "inc4c/conv5_1"
}
layer {
name: "inc4c/conv5_2"
type: "Convolution"
bottom: "inc4c/conv5_1"
top: "inc4c/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4c/conv5_2"
top: "inc4c/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv5_2/scale"
type: "Scale"
bottom: "inc4c/conv5_2"
top: "inc4c/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu5_2"
type: "ReLU"
bottom: "inc4c/conv5_2"
top: "inc4c/conv5_2"
}
layer {
name: "inc4c/conv5_3"
type: "Convolution"
bottom: "inc4c/conv5_2"
top: "inc4c/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4c/conv5_3"
top: "inc4c/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv5_3/scale"
type: "Scale"
bottom: "inc4c/conv5_3"
top: "inc4c/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu5_3"
type: "ReLU"
bottom: "inc4c/conv5_3"
top: "inc4c/conv5_3"
}
layer {
name: "inc4c"
type: "Concat"
bottom: "inc4c/conv1"
bottom: "inc4c/conv3_2"
bottom: "inc4c/conv5_3"
top: "inc4c"
}

################################################################################
# Inception 4d
################################################################################
layer {
name: "inc4d/conv1"
type: "Convolution"
bottom: "inc4c"
top: "inc4d/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv1/bn"
type: "BatchNorm"
bottom: "inc4d/conv1"
top: "inc4d/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv1/scale"
type: "Scale"
bottom: "inc4d/conv1"
top: "inc4d/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu1"
type: "ReLU"
bottom: "inc4d/conv1"
top: "inc4d/conv1"
}
layer {
name: "inc4d/conv3_1"
type: "Convolution"
bottom: "inc4c"
top: "inc4d/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4d/conv3_1"
top: "inc4d/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv3_1/scale"
type: "Scale"
bottom: "inc4d/conv3_1"
top: "inc4d/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu3_1"
type: "ReLU"
bottom: "inc4d/conv3_1"
top: "inc4d/conv3_1"
}
layer {
name: "inc4d/conv3_2"
type: "Convolution"
bottom: "inc4d/conv3_1"
top: "inc4d/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4d/conv3_2"
top: "inc4d/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv3_2/scale"
type: "Scale"
bottom: "inc4d/conv3_2"
top: "inc4d/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu3_2"
type: "ReLU"
bottom: "inc4d/conv3_2"
top: "inc4d/conv3_2"
}
layer {
name: "inc4d/conv5_1"
type: "Convolution"
bottom: "inc4c"
top: "inc4d/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4d/conv5_1"
top: "inc4d/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv5_1/scale"
type: "Scale"
bottom: "inc4d/conv5_1"
top: "inc4d/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu5_1"
type: "ReLU"
bottom: "inc4d/conv5_1"
top: "inc4d/conv5_1"
}
layer {
name: "inc4d/conv5_2"
type: "Convolution"
bottom: "inc4d/conv5_1"
top: "inc4d/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4d/conv5_2"
top: "inc4d/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv5_2/scale"
type: "Scale"
bottom: "inc4d/conv5_2"
top: "inc4d/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu5_2"
type: "ReLU"
bottom: "inc4d/conv5_2"
top: "inc4d/conv5_2"
}
layer {
name: "inc4d/conv5_3"
type: "Convolution"
bottom: "inc4d/conv5_2"
top: "inc4d/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4d/conv5_3"
top: "inc4d/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv5_3/scale"
type: "Scale"
bottom: "inc4d/conv5_3"
top: "inc4d/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu5_3"
type: "ReLU"
bottom: "inc4d/conv5_3"
top: "inc4d/conv5_3"
}
layer {
name: "inc4d"
type: "Concat"
bottom: "inc4d/conv1"
bottom: "inc4d/conv3_2"
bottom: "inc4d/conv5_3"
top: "inc4d"
}

################################################################################
# Inception 4e
################################################################################
layer {
name: "inc4e/conv1"
type: "Convolution"
bottom: "inc4d"
top: "inc4e/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv1/bn"
type: "BatchNorm"
bottom: "inc4e/conv1"
top: "inc4e/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv1/scale"
type: "Scale"
bottom: "inc4e/conv1"
top: "inc4e/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu1"
type: "ReLU"
bottom: "inc4e/conv1"
top: "inc4e/conv1"
}
layer {
name: "inc4e/conv3_1"
type: "Convolution"
bottom: "inc4d"
top: "inc4e/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4e/conv3_1"
top: "inc4e/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv3_1/scale"
type: "Scale"
bottom: "inc4e/conv3_1"
top: "inc4e/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu3_1"
type: "ReLU"
bottom: "inc4e/conv3_1"
top: "inc4e/conv3_1"
}
layer {
name: "inc4e/conv3_2"
type: "Convolution"
bottom: "inc4e/conv3_1"
top: "inc4e/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4e/conv3_2"
top: "inc4e/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv3_2/scale"
type: "Scale"
bottom: "inc4e/conv3_2"
top: "inc4e/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu3_2"
type: "ReLU"
bottom: "inc4e/conv3_2"
top: "inc4e/conv3_2"
}
layer {
name: "inc4e/conv5_1"
type: "Convolution"
bottom: "inc4d"
top: "inc4e/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4e/conv5_1"
top: "inc4e/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv5_1/scale"
type: "Scale"
bottom: "inc4e/conv5_1"
top: "inc4e/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu5_1"
type: "ReLU"
bottom: "inc4e/conv5_1"
top: "inc4e/conv5_1"
}
layer {
name: "inc4e/conv5_2"
type: "Convolution"
bottom: "inc4e/conv5_1"
top: "inc4e/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4e/conv5_2"
top: "inc4e/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv5_2/scale"
type: "Scale"
bottom: "inc4e/conv5_2"
top: "inc4e/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu5_2"
type: "ReLU"
bottom: "inc4e/conv5_2"
top: "inc4e/conv5_2"
}
layer {
name: "inc4e/conv5_3"
type: "Convolution"
bottom: "inc4e/conv5_2"
top: "inc4e/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4e/conv5_3"
top: "inc4e/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv5_3/scale"
type: "Scale"
bottom: "inc4e/conv5_3"
top: "inc4e/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu5_3"
type: "ReLU"
bottom: "inc4e/conv5_3"
top: "inc4e/conv5_3"
}
layer {
name: "inc4e"
type: "Concat"
bottom: "inc4e/conv1"
bottom: "inc4e/conv3_2"
bottom: "inc4e/conv5_3"
top: "inc4e"
}

################################################################################
# hyper feature
################################################################################
layer {
name: "downsample"
type: "Pooling"
bottom: "conv3"
top: "downsample"
pooling_param {
kernel_size: 3 stride: 2 pad: 0
pool: MAX
}
}
layer {
name: "upsample"
type: "Deconvolution"
bottom: "inc4e"
top: "upsample"
param { lr_mult: 0 decay_mult: 0 }
convolution_param {
num_output: 256
kernel_size: 4 stride: 2 pad: 1
group: 256
weight_filler: { type: "bilinear" }
bias_term: false
}
}
layer {
name: "concat"
type: "Concat"
bottom: "downsample"
bottom: "inc3e"
bottom: "upsample"
top: "concat"
concat_param { axis: 1 }
}
layer {
name: "convf"
type: "Convolution"
bottom: "concat"
top: "convf"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 256
kernel_size: 1 stride: 1 pad: 0
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "reluf"
type: "ReLU"
bottom: "convf"
top: "convf"
}
layer {
name: "upsample-2x"
type: "Deconvolution"
bottom: "convf"
top: "upsample-2x"
param { lr_mult: 0 decay_mult: 0 }
convolution_param {
num_output: 256
kernel_size: 4 stride: 2 pad: 1
group: 256
weight_filler: { type: "bilinear" }
bias_term: false
}
}
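For orientation, the strides in this hyper-feature block can be read straight off the prototxt: conv3 sits at 1/8 of the input, so the stride-2 max pool brings it to 1/16; inc3e is already at 1/16; and inc4e at 1/32 is brought to 1/16 by the 2x bilinear deconvolution. After convf fuses the three maps, upsample-2x yields a 1/8 map, which is why the proposal layer below uses feat_stride: 8 and ROI pooling uses spatial_scale: 0.125. A minimal Python sketch of that bookkeeping (the stride figures come from the layers above, not from any API):

branches = {
    'downsample(conv3)': 8 * 2,    # conv3 at 1/8, max-pooled with stride 2 -> 1/16
    'inc3e': 16,                   # already at 1/16
    'upsample(inc4e)': 32 // 2,    # inc4e at 1/32, 2x bilinear deconv      -> 1/16
}
assert len(set(branches.values())) == 1    # all three inputs meet at 1/16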

################################################################################
# RPN
################################################################################
layer {
name: "rpn_conv1"
type: "Convolution"
bottom: "upsample-2x"
top: "rpn_conv1"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 256
kernel_size: 1 stride: 1 pad: 0
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "rpn_relu1"
type: "ReLU"
bottom: "rpn_conv1"
top: "rpn_conv1"
}

layer {
name: "rpn_cls_score-v2"
type: "Convolution"
bottom: "rpn_conv1"
top: "rpn_cls_score"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 50
#num_output: 72
kernel_size: 1 stride: 1 pad: 0
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "rpn_bbox_pred-v2"
type: "Convolution"
bottom: "rpn_conv1"
top: "rpn_bbox_pred"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 100
#num_output: 144
kernel_size: 1 stride: 1 pad: 0
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}

layer {
bottom: "rpn_cls_score"
top: "rpn_cls_score_reshape"
name: "rpn_cls_score_reshape"
type: "Reshape"
reshape_param { shape { dim: 0 dim: 2 dim: -1 dim: 0 } }
}
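The Reshape parameters deserve a note: dim: 0 copies the corresponding bottom dimension and dim: -1 is inferred, so the (N, 2*A, H, W) score map becomes (N, 2, A*H, W) and SoftmaxWithLoss can normalize over the two background/foreground channels of each anchor. A small numpy sketch of the equivalent operation (shapes are hypothetical):

import numpy as np

N, A, H, W = 1, 25, 60, 80                  # A = 25 anchors; H, W hypothetical
scores = np.random.randn(N, 2 * A, H, W).astype(np.float32)
reshaped = scores.reshape(N, 2, A * H, W)   # what rpn_cls_score_reshape computes
# SoftmaxWithLoss then normalizes over axis 1, the two bg/fg channels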
layer {
name: 'rpn-data'
type: 'Python'
bottom: 'rpn_cls_score'
bottom: 'gt_boxes'
bottom: 'im_info'
bottom: 'data'
top: 'rpn_labels'
top: 'rpn_bbox_targets'
top: 'rpn_bbox_inside_weights'
top: 'rpn_bbox_outside_weights'
include { phase: TRAIN }
python_param {
module: 'rpn.anchor_target_layer'
layer: 'AnchorTargetLayer'
param_str: "{'feat_stride': 8, 'ratios': [0.5, 0.667, 1, 1.5, 2], 'scales': [3, 6, 9, 16, 32]}"
}
}
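Note how the anchor configuration above fixes the RPN head sizes: 5 ratios x 5 scales give A = 25 anchors per position, hence num_output: 50 (= 2A) for rpn_cls_score and num_output: 100 (= 4A) for rpn_bbox_pred; the commented-out 72/144 would correspond to an earlier 36-anchor setup. A quick check:

ratios = [0.5, 0.667, 1, 1.5, 2]
scales = [3, 6, 9, 16, 32]
A = len(ratios) * len(scales)    # 25 anchors per spatial position
print(2 * A)                     # 50  = rpn_cls_score num_output
print(4 * A)                     # 100 = rpn_bbox_pred num_output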
layer {
name: "rpn_loss_cls"
type: "SoftmaxWithLoss"
bottom: "rpn_cls_score_reshape"
bottom: "rpn_labels"
propagate_down: 1
propagate_down: 0
top: "rpn_loss_cls"
include { phase: TRAIN }
loss_weight: 1
loss_param { ignore_label: -1 normalize: true }
}
layer {
name: "rpn_loss_bbox"
type: "SmoothL1Loss"
bottom: "rpn_bbox_pred"
bottom: "rpn_bbox_targets"
bottom: "rpn_bbox_inside_weights"
bottom: "rpn_bbox_outside_weights"
top: "rpn_loss_bbox"
include { phase: TRAIN }
loss_weight: 1
smooth_l1_loss_param { sigma: 3.0 }
}

################################################################################
# Proposal
################################################################################
layer {
name: "rpn_cls_prob"
type: "Softmax"
bottom: "rpn_cls_score_reshape"
top: "rpn_cls_prob"
}
layer {
name: 'rpn_cls_prob_reshape'
type: 'Reshape'
bottom: 'rpn_cls_prob'
top: 'rpn_cls_prob_reshape'
reshape_param { shape { dim: 0 dim: 50 dim: -1 dim: 0 } }
}

layer {
name: 'proposal'
type: 'Proposal'
bottom: 'rpn_cls_prob_reshape'
bottom: 'rpn_bbox_pred'
bottom: 'im_info'
top: 'rois'
top: 'scores'
#include { phase: TEST }
proposal_param {
ratio: 0.5 ratio: 0.667 ratio: 1.0 ratio: 1.5 ratio: 2.0
scale: 3 scale: 6 scale: 9 scale: 16 scale: 32
base_size: 8
feat_stride: 8
pre_nms_topn: 6000
post_nms_topn: 4000
nms_thresh: 0.7
min_size: 8
}
}
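The Proposal layer's parameters map onto the usual py-faster-rcnn pipeline: decode the anchor deltas, clip to the image, drop boxes smaller than min_size, keep the pre_nms_topn best by score, run NMS at nms_thresh, and keep post_nms_topn of the survivors. A minimal numpy sketch of that ordering (decoding and clipping omitted; nms here refers to this repo's fast_rcnn.nms_wrapper, an assumption to adjust to your setup):

import numpy as np
from fast_rcnn.nms_wrapper import nms   # assumes the repo's lib/ is on PYTHONPATH

def propose(boxes, scores, pre_nms_topn=6000, post_nms_topn=4000,
            nms_thresh=0.7, min_size=8):
    # drop boxes smaller than min_size (in input-image pixels)
    w = boxes[:, 2] - boxes[:, 0] + 1
    h = boxes[:, 3] - boxes[:, 1] + 1
    keep = np.where((w >= min_size) & (h >= min_size))[0]
    boxes, scores = boxes[keep], scores[keep]
    # rank by score, keep pre_nms_topn, run NMS, cap at post_nms_topn
    order = scores.argsort()[::-1][:pre_nms_topn]
    boxes, scores = boxes[order], scores[order]
    keep = nms(np.hstack((boxes, scores[:, np.newaxis])).astype(np.float32),
               nms_thresh)[:post_nms_topn]
    return boxes[keep], scores[keep]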
layer {
name: 'mute_rpn_scores'
bottom: 'scores'
type: 'Silence'
}

layer {
name: 'roi-data'
type: 'Python'
bottom: 'rois'
bottom: 'gt_boxes'
top: 'rois'
top: 'labels'
top: 'bbox_targets'
top: 'bbox_inside_weights'
top: 'bbox_outside_weights'
python_param {
module: 'rpn.proposal_target_layer'
layer: 'ProposalTargetLayer'
param_str: "'num_classes': 21"
}
}

################################################################################
# RCNN
################################################################################
layer {
name: "roi_pool_conv5"
type: "ROIPooling"
bottom: "upsample-2x"
bottom: "rois"
top: "roi_pool_conv5"
roi_pooling_param {
pooled_w: 6 pooled_h: 6
spatial_scale: 0.125  # 1/8
}
}
layer {
name: "fc6_L"
type: "InnerProduct"
bottom: "roi_pool_conv5"
top: "fc6_L"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 512
weight_filler { type: "xavier" std: 0.005 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "fc6_U"
type: "InnerProduct"
bottom: "fc6_L"
top: "fc6_U"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 4096
weight_filler { type: "xavier" std: 0.005 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "fc6_U"
top: "fc6_U"
}
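The fc6_L/fc6_U split (and fc7_L/fc7_U below) is a low-rank factorization of a single large fully-connected layer: a 512-dim bottleneck (L) followed by a 4096-dim expansion (U) needs roughly 6.8M parameters where one dense 9216x4096 fc would need about 37.7M. The released model already ships the factorized weights; purely as an illustration, such a pair can be derived from a dense fc weight by truncated SVD:

import numpy as np

def factorize_fc(W, rank):
    # split a dense fc weight W (out x in) into U (out x rank) and L (rank x in)
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    L = np.diag(S[:rank]).dot(Vt[:rank])   # input -> rank-dim bottleneck
    return U[:, :rank], L                  # bottleneck -> output

W = np.random.randn(4096, 6 * 6 * 256).astype(np.float32)   # hypothetical dense fc6
fc6_U_w, fc6_L_w = factorize_fc(W, 512)
print(fc6_L_w.shape)   # (512, 9216)  -> fc6_L
print(fc6_U_w.shape)   # (4096, 512)  -> fc6_U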

################################################################################
# fc 7
################################################################################
layer {
name: "fc7_L"
type: "InnerProduct"
bottom: "fc6_U"
top: "fc7_L"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 128
weight_filler { type: "xavier" std: 0.005 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "fc7_U"
type: "InnerProduct"
bottom: "fc7_L"
top: "fc7_U"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 4096
weight_filler { type: "xavier" std: 0.005 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "relu7"
type: "ReLU"
bottom: "fc7_U"
top: "fc7_U"
}

################################################################################
# output
################################################################################
layer {
name: "cls_score"
type: "InnerProduct"
bottom: "fc7_U"
top: "cls_score"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 21
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "bbox_pred"
type: "InnerProduct"
bottom: "fc7_U"
top: "bbox_pred"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 84
weight_filler { type: "gaussian" std: 0.001 }
bias_filler { type: "constant" value: 0 }
}
}
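The head sizes follow directly from the 21-class VOC setup (20 classes + background) passed to the proposal_target_layer above: one score per class for cls_score, four box deltas per class for bbox_pred:

num_classes = 21            # 20 VOC classes + background
print(num_classes)          # cls_score num_output: 21
print(num_classes * 4)      # bbox_pred num_output: 84, i.e. 4 deltas per class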
layer {
name: "loss_cls"
type: "SoftmaxWithLoss"
bottom: "cls_score"
bottom: "labels"
propagate_down: 1
propagate_down: 0
top: "loss_cls"
include { phase: TRAIN }
loss_weight: 1
loss_param { ignore_label: -1 normalize: true }
}
layer {
name: "loss_bbox"
type: "SmoothL1Loss"
bottom: "bbox_pred"
bottom: "bbox_targets"
bottom: "bbox_inside_weights"
bottom: "bbox_outside_weights"
top: "loss_bbox"
include { phase: TRAIN }
loss_weight: 1
}
#layer {
#  name: "cls_prob"
#  type: "Softmax"
#  bottom: "cls_score"
#  top: "cls_prob"
#  include { phase: TEST }
#  loss_param { ignore_label: -1 normalize: true }
#}

Crashed when training my own data..., please help...

When I train on my own data (229+1 categories), the crash happens when execution reaches
self.solver = caffe.SGDSolver(solver_prototxt) (train.py, line 43). I also cannot find where Caffe's log is: how can I enable Caffe's log in this Python project?

I1020 17:44:25.036016 9109 net.cpp:228] data_input-data_0_split does not need backward computation.
I1020 17:44:25.036031 9109 net.cpp:228] input-data does not need backward computation.
I1020 17:44:25.036041 9109 net.cpp:270] This network produces output cls_loss
I1020 17:44:25.036052 9109 net.cpp:270] This network produces output loss_bbox
I1020 17:44:25.036065 9109 net.cpp:270] This network produces output rpn_cls_loss
I1020 17:44:25.036077 9109 net.cpp:270] This network produces output rpn_loss_bbox
I1020 17:44:25.036396 9109 net.cpp:283] Network initialization done.
I1020 17:44:25.037508 9109 solver.cpp:60] Solver scaffolding done.
Loading pretrained model weights from models/pvanet/imagenet/original.model
I1020 17:44:26.811261 9109 net.cpp:761] Ignoring source layer data
I1020 17:44:26.811317 9109 net.cpp:761] Ignoring source layer label_data_1_split
I1020 17:44:26.814769 9109 net.cpp:761] Ignoring source layer pool5
I1020 17:44:26.879279 9109 net.cpp:761] Ignoring source layer fc8
I1020 17:44:26.879341 9109 net.cpp:761] Ignoring source layer fc8_fc8_0_split
I1020 17:44:26.879362 9109 net.cpp:761] Ignoring source layer loss
I1020 17:44:26.879374 9109 net.cpp:761] Ignoring source layer accuracy
I1020 17:44:26.879390 9109 net.cpp:761] Ignoring source layer accuracy_top5
Traceback (most recent call last):
File "tools/train_net.py", line 112, in
max_iters=args.max_iters)
File "/disk/SX/pva-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 157, in train_net
pretrained_model=pretrained_model)
File "/disk/SX/pva-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 53, in init
self.solver.net.layers[0].set_roidb(roidb)
File "/disk/SX/pva-faster-rcnn/tools/../lib/roi_data_layer/layer.py", line 68, in set_roidb
self._shuffle_roidb_inds()
File "/disk/SX/pva-faster-rcnn/tools/../lib/roi_data_layer/layer.py", line 35, in _shuffle_roidb_inds
inds = np.reshape(inds, (-1, 2))
File "/usr/local/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 225, in reshape
return reshape(newshape, order=order)
ValueError: total size of new array must be unchanged
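
This ValueError is raised by the aspect-ratio grouping in lib/roi_data_layer/layer.py: with cfg.TRAIN.ASPECT_GROUPING enabled, _shuffle_roidb_inds stacks the indices of horizontal and vertical images and reshapes them into pairs, which fails when either group has an odd count, as easily happens with a custom dataset. Either set TRAIN.ASPECT_GROUPING: False in your yml config, or pad/trim each group to an even length before the reshape; a sketch of the latter, to be adapted to your copy of _shuffle_roidb_inds:

import numpy as np

def _even(inds):
    # drop one index when the group size is odd so reshape(-1, 2) cannot fail
    return inds if len(inds) % 2 == 0 else inds[:-1]

# inside _shuffle_roidb_inds, roughly:
#   inds = np.hstack((_even(np.random.permutation(horz_inds)),
#                     _even(np.random.permutation(vert_inds))))
#   inds = np.reshape(inds, (-1, 2))

As for the Caffe log: pycaffe uses glog, which writes to /tmp by default; launch with GLOG_logtostderr=1 to see it on the console.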

Where is the prototxt of the new classification model?

Thanks for your excellent work! I noticed you have updated PVANet and put the new classification model in the ./pva-faster-rcnn/models/pvanet/ folder. I wonder where its corresponding prototxt is? I guess it should be different from the one in ./pva-faster-rcnn/models/pvanet_obsolete/imagenet/. Waiting for your reply!

Error happens when I train on my own dataset.

When I train with the pretrained model, it fails like this:

I0103 20:21:01.434712 28289 layer_factory.hpp:77] Creating layer proposal
AttributeError: 'module' object has no attribute 'ProposalLayer2'
Traceback (most recent call last):
File "./tools/train_net.py", line 112, in
max_iters=args.max_iters)
File "/home/panyiming/pva-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 157, in train_net
pretrained_model=pretrained_model)
File "/home/panyiming/pva-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 43, in init
self.solver = caffe.SGDSolver(solver_prototxt)
SystemError: NULL result without error in PyObject_Call
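
The AttributeError means Caffe is building the Python layer named 'proposal' whose python_param points at a class ProposalLayer2 that the referenced module does not define; the prototxt in this repo uses either the C++ Proposal layer (type: 'Proposal', as above) or a Python layer whose layer: field must match a class that actually exists in lib/rpn/proposal_layer.py. A quick way to see what the module exports (an assumption: run from the repo root with lib/ on PYTHONPATH):

from rpn import proposal_layer
print([name for name in dir(proposal_layer) if name.endswith('Layer')])
# if 'ProposalLayer2' is not listed, fix the layer: field in your prototxt,
# or use a version of lib/rpn/proposal_layer.py that defines that class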

But when I train without the pretrained model, training does start, but the loss is:

I0103 20:31:44.717337 28915 solver.cpp:238] Iteration 0, loss = nan
I0103 20:31:44.717373 28915 solver.cpp:254] Train net output #0: loss_bbox = nan (* 1 = nan loss)
I0103 20:31:44.717381 28915 solver.cpp:254] Train net output #1: loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:31:44.717384 28915 solver.cpp:254] Train net output #2: rpn_loss_bbox = nan (* 1 = nan loss)
I0103 20:31:44.717391 28915 solver.cpp:254] Train net output #3: rpn_loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:31:44.717409 28915 sgd_solver.cpp:83] Plateau Status: Iteration 0, current minimum_loss = 3.40282e+38
I0103 20:31:44.717417 28915 sgd_solver.cpp:138] Iteration 0, lr = 0.001
I0103 20:31:51.010699 28915 solver.cpp:238] Iteration 20, loss = nan
I0103 20:31:51.010735 28915 solver.cpp:254] Train net output #0: loss_bbox = nan (* 1 = nan loss)
I0103 20:31:51.010743 28915 solver.cpp:254] Train net output #1: loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:31:51.010749 28915 solver.cpp:254] Train net output #2: rpn_loss_bbox = nan (* 1 = nan loss)
I0103 20:31:51.010754 28915 solver.cpp:254] Train net output #3: rpn_loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:31:51.010761 28915 sgd_solver.cpp:138] Iteration 20, lr = 0.001
I0103 20:31:57.685253 28915 solver.cpp:238] Iteration 40, loss = nan
I0103 20:31:57.685288 28915 solver.cpp:254] Train net output #0: loss_bbox = nan (* 1 = nan loss)
I0103 20:31:57.685295 28915 solver.cpp:254] Train net output #1: loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:31:57.685300 28915 solver.cpp:254] Train net output #2: rpn_loss_bbox = nan (* 1 = nan loss)
I0103 20:31:57.685304 28915 solver.cpp:254] Train net output #3: rpn_loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:31:57.685310 28915 sgd_solver.cpp:138] Iteration 40, lr = 0.001
I0103 20:32:04.648226 28915 solver.cpp:238] Iteration 60, loss = nan
I0103 20:32:04.648262 28915 solver.cpp:254] Train net output #0: loss_bbox = nan (* 1 = nan loss)
I0103 20:32:04.648270 28915 solver.cpp:254] Train net output #1: loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:32:04.648275 28915 solver.cpp:254] Train net output #2: rpn_loss_bbox = nan (* 1 = nan loss)
I0103 20:32:04.648280 28915 solver.cpp:254] Train net output #3: rpn_loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:32:04.648288 28915 sgd_solver.cpp:138] Iteration 60, lr = 0.001
I0103 20:32:11.306999 28915 solver.cpp:238] Iteration 80, loss = nan
I0103 20:32:11.307044 28915 solver.cpp:254] Train net output #0: loss_bbox = nan (* 1 = nan loss)
I0103 20:32:11.307052 28915 solver.cpp:254] Train net output #1: loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:32:11.307057 28915 solver.cpp:254] Train net output #2: rpn_loss_bbox = nan (* 1 = nan loss)
I0103 20:32:11.307065 28915 solver.cpp:254] Train net output #3: rpn_loss_cls = 87.3365 (* 1 = 87.3365 loss)
I0103 20:32:11.307072 28915 sgd_solver.cpp:138] Iteration 80, lr = 0.001
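
One concrete clue in this log: 87.3365 is -log(FLT_MIN) in single precision, the value Caffe's SoftmaxWithLoss clamps to when the probability of the true class underflows to zero. So the classifier diverged at the very first iteration (consistent with the bbox losses being nan from iteration 0), which usually points at training from scratch with too high a learning rate, or at a label/data problem, rather than at the solver itself. The check:

import numpy as np
print(-np.log(np.finfo(np.float32).tiny))   # ~87.3365, the saturated softmax loss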

About 8x8 feat_stride

In order to detect small objects, I use a deconvolution layer to upsample the hyper-feature output, so I have to redefine the proposal layer. But I can't get my parameters to work. (A stride sanity check is sketched below, followed by my full prototxt.)
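
For the stride bookkeeping (a minimal sketch; the figures are read off the prototxt below): conv1-conv3 each have stride 2, so conv3 sits at 1/8, and inc3a's stride-2 pool/conv branches bring it to 1/16. Whatever feat_stride the proposal layer uses must equal the cumulative stride of the map the RPN actually runs on, after any deconvolution upsampling.

total = 1
for name, stride in [('conv1', 2), ('conv2', 2), ('conv3', 2),   # -> 1/8 at conv3
                     ('inc3a', 2)]:        # stride-2 pool/conv branches -> 1/16
    total *= stride
    print('%s -> 1/%d' % (name, total))
# a 2x deconvolution on top of a 1/16 map gives 1/8, i.e. feat_stride: 8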
name: "PVANET-lite"

################################################################################
# Input
################################################################################

layer {
name: "input-data"
type: "DummyData"
top: "data"
top: "im_info"
dummy_data_param {
shape { dim: 1 dim: 3 dim: 960 dim: 1280 }
shape { dim: 1 dim: 6 }
}
}

################################################################################
# Conv 1
################################################################################
layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32
kernel_size: 4 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "conv1/bn"
type: "BatchNorm"
bottom: "conv1"
top: "conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "conv1/scale"
type: "Scale"
bottom: "conv1"
top: "conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "relu1"
type: "ReLU"
bottom: "conv1"
top: "conv1"
}

################################################################################
# Conv 2
################################################################################
layer {
name: "conv2"
type: "Convolution"
bottom: "conv1"
top: "conv2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 48
kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "conv2/bn"
type: "BatchNorm"
bottom: "conv2"
top: "conv2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "conv2/scale"
type: "Scale"
bottom: "conv2"
top: "conv2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "relu2"
type: "ReLU"
bottom: "conv2"
top: "conv2"
}

################################################################################
# Conv 3
################################################################################
layer {
name: "conv3"
type: "Convolution"
bottom: "conv2"
top: "conv3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96
kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "conv3/bn"
type: "BatchNorm"
bottom: "conv3"
top: "conv3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "conv3/scale"
type: "Scale"
bottom: "conv3"
top: "conv3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "relu3"
type: "ReLU"
bottom: "conv3"
top: "conv3"
}

################################################################################
# Inception 3a
################################################################################
layer {
name: "inc3a/pool1"
type: "Pooling"
bottom: "conv3"
top: "inc3a/pool1"
pooling_param {
kernel_size: 3 stride: 2 pad: 0
pool: MAX
}
}
layer {
name: "inc3a/conv1"
type: "Convolution"
bottom: "inc3a/pool1"
top: "inc3a/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv1/bn"
type: "BatchNorm"
bottom: "inc3a/conv1"
top: "inc3a/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv1/scale"
type: "Scale"
bottom: "inc3a/conv1"
top: "inc3a/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu1"
type: "ReLU"
bottom: "inc3a/conv1"
top: "inc3a/conv1"
}
layer {
name: "inc3a/conv3_1"
type: "Convolution"
bottom: "conv3"
top: "inc3a/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3a/conv3_1"
top: "inc3a/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv3_1/scale"
type: "Scale"
bottom: "inc3a/conv3_1"
top: "inc3a/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu3_1"
type: "ReLU"
bottom: "inc3a/conv3_1"
top: "inc3a/conv3_1"
}
layer {
name: "inc3a/conv3_2"
type: "Convolution"
bottom: "inc3a/conv3_1"
top: "inc3a/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3a/conv3_2"
top: "inc3a/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv3_2/scale"
type: "Scale"
bottom: "inc3a/conv3_2"
top: "inc3a/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu3_2"
type: "ReLU"
bottom: "inc3a/conv3_2"
top: "inc3a/conv3_2"
}
layer {
name: "inc3a/conv5_1"
type: "Convolution"
bottom: "conv3"
top: "inc3a/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3a/conv5_1"
top: "inc3a/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv5_1/scale"
type: "Scale"
bottom: "inc3a/conv5_1"
top: "inc3a/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu5_1"
type: "ReLU"
bottom: "inc3a/conv5_1"
top: "inc3a/conv5_1"
}
layer {
name: "inc3a/conv5_2"
type: "Convolution"
bottom: "inc3a/conv5_1"
top: "inc3a/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3a/conv5_2"
top: "inc3a/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv5_2/scale"
type: "Scale"
bottom: "inc3a/conv5_2"
top: "inc3a/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu5_2"
type: "ReLU"
bottom: "inc3a/conv5_2"
top: "inc3a/conv5_2"
}
layer {
name: "inc3a/conv5_3"
type: "Convolution"
bottom: "inc3a/conv5_2"
top: "inc3a/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3a/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3a/conv5_3"
top: "inc3a/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3a/conv5_3/scale"
type: "Scale"
bottom: "inc3a/conv5_3"
top: "inc3a/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3a/relu5_3"
type: "ReLU"
bottom: "inc3a/conv5_3"
top: "inc3a/conv5_3"
}
layer {
name: "inc3a"
type: "Concat"
bottom: "inc3a/conv1"
bottom: "inc3a/conv3_2"
bottom: "inc3a/conv5_3"
top: "inc3a"
}

################################################################################
# Inception 3b
################################################################################
layer {
name: "inc3b/conv1"
type: "Convolution"
bottom: "inc3a"
top: "inc3b/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv1/bn"
type: "BatchNorm"
bottom: "inc3b/conv1"
top: "inc3b/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv1/scale"
type: "Scale"
bottom: "inc3b/conv1"
top: "inc3b/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu1"
type: "ReLU"
bottom: "inc3b/conv1"
top: "inc3b/conv1"
}
layer {
name: "inc3b/conv3_1"
type: "Convolution"
bottom: "inc3a"
top: "inc3b/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3b/conv3_1"
top: "inc3b/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv3_1/scale"
type: "Scale"
bottom: "inc3b/conv3_1"
top: "inc3b/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu3_1"
type: "ReLU"
bottom: "inc3b/conv3_1"
top: "inc3b/conv3_1"
}
layer {
name: "inc3b/conv3_2"
type: "Convolution"
bottom: "inc3b/conv3_1"
top: "inc3b/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3b/conv3_2"
top: "inc3b/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv3_2/scale"
type: "Scale"
bottom: "inc3b/conv3_2"
top: "inc3b/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu3_2"
type: "ReLU"
bottom: "inc3b/conv3_2"
top: "inc3b/conv3_2"
}
layer {
name: "inc3b/conv5_1"
type: "Convolution"
bottom: "inc3a"
top: "inc3b/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3b/conv5_1"
top: "inc3b/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv5_1/scale"
type: "Scale"
bottom: "inc3b/conv5_1"
top: "inc3b/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu5_1"
type: "ReLU"
bottom: "inc3b/conv5_1"
top: "inc3b/conv5_1"
}
layer {
name: "inc3b/conv5_2"
type: "Convolution"
bottom: "inc3b/conv5_1"
top: "inc3b/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3b/conv5_2"
top: "inc3b/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv5_2/scale"
type: "Scale"
bottom: "inc3b/conv5_2"
top: "inc3b/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu5_2"
type: "ReLU"
bottom: "inc3b/conv5_2"
top: "inc3b/conv5_2"
}
layer {
name: "inc3b/conv5_3"
type: "Convolution"
bottom: "inc3b/conv5_2"
top: "inc3b/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3b/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3b/conv5_3"
top: "inc3b/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3b/conv5_3/scale"
type: "Scale"
bottom: "inc3b/conv5_3"
top: "inc3b/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3b/relu5_3"
type: "ReLU"
bottom: "inc3b/conv5_3"
top: "inc3b/conv5_3"
}
layer {
name: "inc3b"
type: "Concat"
bottom: "inc3b/conv1"
bottom: "inc3b/conv3_2"
bottom: "inc3b/conv5_3"
top: "inc3b"
}

################################################################################
# Inception 3c
################################################################################
layer {
name: "inc3c/conv1"
type: "Convolution"
bottom: "inc3b"
top: "inc3c/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv1/bn"
type: "BatchNorm"
bottom: "inc3c/conv1"
top: "inc3c/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv1/scale"
type: "Scale"
bottom: "inc3c/conv1"
top: "inc3c/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu1"
type: "ReLU"
bottom: "inc3c/conv1"
top: "inc3c/conv1"
}
layer {
name: "inc3c/conv3_1"
type: "Convolution"
bottom: "inc3b"
top: "inc3c/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3c/conv3_1"
top: "inc3c/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv3_1/scale"
type: "Scale"
bottom: "inc3c/conv3_1"
top: "inc3c/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu3_1"
type: "ReLU"
bottom: "inc3c/conv3_1"
top: "inc3c/conv3_1"
}
layer {
name: "inc3c/conv3_2"
type: "Convolution"
bottom: "inc3c/conv3_1"
top: "inc3c/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3c/conv3_2"
top: "inc3c/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv3_2/scale"
type: "Scale"
bottom: "inc3c/conv3_2"
top: "inc3c/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu3_2"
type: "ReLU"
bottom: "inc3c/conv3_2"
top: "inc3c/conv3_2"
}
layer {
name: "inc3c/conv5_1"
type: "Convolution"
bottom: "inc3b"
top: "inc3c/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3c/conv5_1"
top: "inc3c/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv5_1/scale"
type: "Scale"
bottom: "inc3c/conv5_1"
top: "inc3c/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu5_1"
type: "ReLU"
bottom: "inc3c/conv5_1"
top: "inc3c/conv5_1"
}
layer {
name: "inc3c/conv5_2"
type: "Convolution"
bottom: "inc3c/conv5_1"
top: "inc3c/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3c/conv5_2"
top: "inc3c/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv5_2/scale"
type: "Scale"
bottom: "inc3c/conv5_2"
top: "inc3c/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu5_2"
type: "ReLU"
bottom: "inc3c/conv5_2"
top: "inc3c/conv5_2"
}
layer {
name: "inc3c/conv5_3"
type: "Convolution"
bottom: "inc3c/conv5_2"
top: "inc3c/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3c/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3c/conv5_3"
top: "inc3c/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3c/conv5_3/scale"
type: "Scale"
bottom: "inc3c/conv5_3"
top: "inc3c/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3c/relu5_3"
type: "ReLU"
bottom: "inc3c/conv5_3"
top: "inc3c/conv5_3"
}
layer {
name: "inc3c"
type: "Concat"
bottom: "inc3c/conv1"
bottom: "inc3c/conv3_2"
bottom: "inc3c/conv5_3"
top: "inc3c"
}

################################################################################
# Inception 3d
################################################################################
layer {
name: "inc3d/conv1"
type: "Convolution"
bottom: "inc3c"
top: "inc3d/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv1/bn"
type: "BatchNorm"
bottom: "inc3d/conv1"
top: "inc3d/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv1/scale"
type: "Scale"
bottom: "inc3d/conv1"
top: "inc3d/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu1"
type: "ReLU"
bottom: "inc3d/conv1"
top: "inc3d/conv1"
}
layer {
name: "inc3d/conv3_1"
type: "Convolution"
bottom: "inc3c"
top: "inc3d/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3d/conv3_1"
top: "inc3d/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv3_1/scale"
type: "Scale"
bottom: "inc3d/conv3_1"
top: "inc3d/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu3_1"
type: "ReLU"
bottom: "inc3d/conv3_1"
top: "inc3d/conv3_1"
}
layer {
name: "inc3d/conv3_2"
type: "Convolution"
bottom: "inc3d/conv3_1"
top: "inc3d/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3d/conv3_2"
top: "inc3d/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv3_2/scale"
type: "Scale"
bottom: "inc3d/conv3_2"
top: "inc3d/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu3_2"
type: "ReLU"
bottom: "inc3d/conv3_2"
top: "inc3d/conv3_2"
}
layer {
name: "inc3d/conv5_1"
type: "Convolution"
bottom: "inc3c"
top: "inc3d/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3d/conv5_1"
top: "inc3d/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv5_1/scale"
type: "Scale"
bottom: "inc3d/conv5_1"
top: "inc3d/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu5_1"
type: "ReLU"
bottom: "inc3d/conv5_1"
top: "inc3d/conv5_1"
}
layer {
name: "inc3d/conv5_2"
type: "Convolution"
bottom: "inc3d/conv5_1"
top: "inc3d/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3d/conv5_2"
top: "inc3d/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv5_2/scale"
type: "Scale"
bottom: "inc3d/conv5_2"
top: "inc3d/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu5_2"
type: "ReLU"
bottom: "inc3d/conv5_2"
top: "inc3d/conv5_2"
}
layer {
name: "inc3d/conv5_3"
type: "Convolution"
bottom: "inc3d/conv5_2"
top: "inc3d/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3d/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3d/conv5_3"
top: "inc3d/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3d/conv5_3/scale"
type: "Scale"
bottom: "inc3d/conv5_3"
top: "inc3d/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3d/relu5_3"
type: "ReLU"
bottom: "inc3d/conv5_3"
top: "inc3d/conv5_3"
}
layer {
name: "inc3d"
type: "Concat"
bottom: "inc3d/conv1"
bottom: "inc3d/conv3_2"
bottom: "inc3d/conv5_3"
top: "inc3d"
}

################################################################################
# Inception 3e
################################################################################
layer {
name: "inc3e/conv1"
type: "Convolution"
bottom: "inc3d"
top: "inc3e/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv1/bn"
type: "BatchNorm"
bottom: "inc3e/conv1"
top: "inc3e/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv1/scale"
type: "Scale"
bottom: "inc3e/conv1"
top: "inc3e/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu1"
type: "ReLU"
bottom: "inc3e/conv1"
top: "inc3e/conv1"
}
layer {
name: "inc3e/conv3_1"
type: "Convolution"
bottom: "inc3d"
top: "inc3e/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv3_1/bn"
type: "BatchNorm"
bottom: "inc3e/conv3_1"
top: "inc3e/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv3_1/scale"
type: "Scale"
bottom: "inc3e/conv3_1"
top: "inc3e/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu3_1"
type: "ReLU"
bottom: "inc3e/conv3_1"
top: "inc3e/conv3_1"
}
layer {
name: "inc3e/conv3_2"
type: "Convolution"
bottom: "inc3e/conv3_1"
top: "inc3e/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 64 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv3_2/bn"
type: "BatchNorm"
bottom: "inc3e/conv3_2"
top: "inc3e/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv3_2/scale"
type: "Scale"
bottom: "inc3e/conv3_2"
top: "inc3e/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu3_2"
type: "ReLU"
bottom: "inc3e/conv3_2"
top: "inc3e/conv3_2"
}
layer {
name: "inc3e/conv5_1"
type: "Convolution"
bottom: "inc3d"
top: "inc3e/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv5_1/bn"
type: "BatchNorm"
bottom: "inc3e/conv5_1"
top: "inc3e/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv5_1/scale"
type: "Scale"
bottom: "inc3e/conv5_1"
top: "inc3e/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu5_1"
type: "ReLU"
bottom: "inc3e/conv5_1"
top: "inc3e/conv5_1"
}
layer {
name: "inc3e/conv5_2"
type: "Convolution"
bottom: "inc3e/conv5_1"
top: "inc3e/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv5_2/bn"
type: "BatchNorm"
bottom: "inc3e/conv5_2"
top: "inc3e/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv5_2/scale"
type: "Scale"
bottom: "inc3e/conv5_2"
top: "inc3e/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu5_2"
type: "ReLU"
bottom: "inc3e/conv5_2"
top: "inc3e/conv5_2"
}
layer {
name: "inc3e/conv5_3"
type: "Convolution"
bottom: "inc3e/conv5_2"
top: "inc3e/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc3e/conv5_3/bn"
type: "BatchNorm"
bottom: "inc3e/conv5_3"
top: "inc3e/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc3e/conv5_3/scale"
type: "Scale"
bottom: "inc3e/conv5_3"
top: "inc3e/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc3e/relu5_3"
type: "ReLU"
bottom: "inc3e/conv5_3"
top: "inc3e/conv5_3"
}
layer {
name: "inc3e"
type: "Concat"
bottom: "inc3e/conv1"
bottom: "inc3e/conv3_2"
bottom: "inc3e/conv5_3"
top: "inc3e"
}

################################################################################
# Inception 4a
################################################################################
layer {
name: "inc4a/pool1"
type: "Pooling"
bottom: "inc3e"
top: "inc4a/pool1"
pooling_param {
kernel_size: 3 stride: 2 pad: 0
pool: MAX
}
}
layer {
name: "inc4a/conv1"
type: "Convolution"
bottom: "inc4a/pool1"
top: "inc4a/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv1/bn"
type: "BatchNorm"
bottom: "inc4a/conv1"
top: "inc4a/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv1/scale"
type: "Scale"
bottom: "inc4a/conv1"
top: "inc4a/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu1"
type: "ReLU"
bottom: "inc4a/conv1"
top: "inc4a/conv1"
}
layer {
name: "inc4a/conv3_1"
type: "Convolution"
bottom: "inc3e"
top: "inc4a/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4a/conv3_1"
top: "inc4a/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv3_1/scale"
type: "Scale"
bottom: "inc4a/conv3_1"
top: "inc4a/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu3_1"
type: "ReLU"
bottom: "inc4a/conv3_1"
top: "inc4a/conv3_1"
}
layer {
name: "inc4a/conv3_2"
type: "Convolution"
bottom: "inc4a/conv3_1"
top: "inc4a/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4a/conv3_2"
top: "inc4a/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv3_2/scale"
type: "Scale"
bottom: "inc4a/conv3_2"
top: "inc4a/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu3_2"
type: "ReLU"
bottom: "inc4a/conv3_2"
top: "inc4a/conv3_2"
}
layer {
name: "inc4a/conv5_1"
type: "Convolution"
bottom: "inc3e"
top: "inc4a/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4a/conv5_1"
top: "inc4a/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv5_1/scale"
type: "Scale"
bottom: "inc4a/conv5_1"
top: "inc4a/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu5_1"
type: "ReLU"
bottom: "inc4a/conv5_1"
top: "inc4a/conv5_1"
}
layer {
name: "inc4a/conv5_2"
type: "Convolution"
bottom: "inc4a/conv5_1"
top: "inc4a/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4a/conv5_2"
top: "inc4a/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv5_2/scale"
type: "Scale"
bottom: "inc4a/conv5_2"
top: "inc4a/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu5_2"
type: "ReLU"
bottom: "inc4a/conv5_2"
top: "inc4a/conv5_2"
}
layer {
name: "inc4a/conv5_3"
type: "Convolution"
bottom: "inc4a/conv5_2"
top: "inc4a/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 stride: 2 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4a/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4a/conv5_3"
top: "inc4a/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4a/conv5_3/scale"
type: "Scale"
bottom: "inc4a/conv5_3"
top: "inc4a/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4a/relu5_3"
type: "ReLU"
bottom: "inc4a/conv5_3"
top: "inc4a/conv5_3"
}
layer {
name: "inc4a"
type: "Concat"
bottom: "inc4a/conv1"
bottom: "inc4a/conv3_2"
bottom: "inc4a/conv5_3"
top: "inc4a"
}

################################################################################
# Inception 4b
################################################################################
layer {
name: "inc4b/conv1"
type: "Convolution"
bottom: "inc4a"
top: "inc4b/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv1/bn"
type: "BatchNorm"
bottom: "inc4b/conv1"
top: "inc4b/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv1/scale"
type: "Scale"
bottom: "inc4b/conv1"
top: "inc4b/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu1"
type: "ReLU"
bottom: "inc4b/conv1"
top: "inc4b/conv1"
}
layer {
name: "inc4b/conv3_1"
type: "Convolution"
bottom: "inc4a"
top: "inc4b/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4b/conv3_1"
top: "inc4b/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv3_1/scale"
type: "Scale"
bottom: "inc4b/conv3_1"
top: "inc4b/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu3_1"
type: "ReLU"
bottom: "inc4b/conv3_1"
top: "inc4b/conv3_1"
}
layer {
name: "inc4b/conv3_2"
type: "Convolution"
bottom: "inc4b/conv3_1"
top: "inc4b/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4b/conv3_2"
top: "inc4b/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv3_2/scale"
type: "Scale"
bottom: "inc4b/conv3_2"
top: "inc4b/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu3_2"
type: "ReLU"
bottom: "inc4b/conv3_2"
top: "inc4b/conv3_2"
}
layer {
name: "inc4b/conv5_1"
type: "Convolution"
bottom: "inc4a"
top: "inc4b/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4b/conv5_1"
top: "inc4b/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv5_1/scale"
type: "Scale"
bottom: "inc4b/conv5_1"
top: "inc4b/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu5_1"
type: "ReLU"
bottom: "inc4b/conv5_1"
top: "inc4b/conv5_1"
}
layer {
name: "inc4b/conv5_2"
type: "Convolution"
bottom: "inc4b/conv5_1"
top: "inc4b/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4b/conv5_2"
top: "inc4b/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv5_2/scale"
type: "Scale"
bottom: "inc4b/conv5_2"
top: "inc4b/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu5_2"
type: "ReLU"
bottom: "inc4b/conv5_2"
top: "inc4b/conv5_2"
}
layer {
name: "inc4b/conv5_3"
type: "Convolution"
bottom: "inc4b/conv5_2"
top: "inc4b/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4b/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4b/conv5_3"
top: "inc4b/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4b/conv5_3/scale"
type: "Scale"
bottom: "inc4b/conv5_3"
top: "inc4b/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4b/relu5_3"
type: "ReLU"
bottom: "inc4b/conv5_3"
top: "inc4b/conv5_3"
}
layer {
name: "inc4b"
type: "Concat"
bottom: "inc4b/conv1"
bottom: "inc4b/conv3_2"
bottom: "inc4b/conv5_3"
top: "inc4b"
}

################################################################################
# Inception 4c
################################################################################
layer {
name: "inc4c/conv1"
type: "Convolution"
bottom: "inc4b"
top: "inc4c/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv1/bn"
type: "BatchNorm"
bottom: "inc4c/conv1"
top: "inc4c/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv1/scale"
type: "Scale"
bottom: "inc4c/conv1"
top: "inc4c/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu1"
type: "ReLU"
bottom: "inc4c/conv1"
top: "inc4c/conv1"
}
layer {
name: "inc4c/conv3_1"
type: "Convolution"
bottom: "inc4b"
top: "inc4c/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4c/conv3_1"
top: "inc4c/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv3_1/scale"
type: "Scale"
bottom: "inc4c/conv3_1"
top: "inc4c/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu3_1"
type: "ReLU"
bottom: "inc4c/conv3_1"
top: "inc4c/conv3_1"
}
layer {
name: "inc4c/conv3_2"
type: "Convolution"
bottom: "inc4c/conv3_1"
top: "inc4c/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4c/conv3_2"
top: "inc4c/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv3_2/scale"
type: "Scale"
bottom: "inc4c/conv3_2"
top: "inc4c/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu3_2"
type: "ReLU"
bottom: "inc4c/conv3_2"
top: "inc4c/conv3_2"
}
layer {
name: "inc4c/conv5_1"
type: "Convolution"
bottom: "inc4b"
top: "inc4c/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4c/conv5_1"
top: "inc4c/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv5_1/scale"
type: "Scale"
bottom: "inc4c/conv5_1"
top: "inc4c/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu5_1"
type: "ReLU"
bottom: "inc4c/conv5_1"
top: "inc4c/conv5_1"
}
layer {
name: "inc4c/conv5_2"
type: "Convolution"
bottom: "inc4c/conv5_1"
top: "inc4c/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4c/conv5_2"
top: "inc4c/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv5_2/scale"
type: "Scale"
bottom: "inc4c/conv5_2"
top: "inc4c/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu5_2"
type: "ReLU"
bottom: "inc4c/conv5_2"
top: "inc4c/conv5_2"
}
layer {
name: "inc4c/conv5_3"
type: "Convolution"
bottom: "inc4c/conv5_2"
top: "inc4c/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4c/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4c/conv5_3"
top: "inc4c/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4c/conv5_3/scale"
type: "Scale"
bottom: "inc4c/conv5_3"
top: "inc4c/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4c/relu5_3"
type: "ReLU"
bottom: "inc4c/conv5_3"
top: "inc4c/conv5_3"
}
layer {
name: "inc4c"
type: "Concat"
bottom: "inc4c/conv1"
bottom: "inc4c/conv3_2"
bottom: "inc4c/conv5_3"
top: "inc4c"
}

################################################################################
# Inception 4d
################################################################################
layer {
name: "inc4d/conv1"
type: "Convolution"
bottom: "inc4c"
top: "inc4d/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv1/bn"
type: "BatchNorm"
bottom: "inc4d/conv1"
top: "inc4d/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv1/scale"
type: "Scale"
bottom: "inc4d/conv1"
top: "inc4d/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu1"
type: "ReLU"
bottom: "inc4d/conv1"
top: "inc4d/conv1"
}
layer {
name: "inc4d/conv3_1"
type: "Convolution"
bottom: "inc4c"
top: "inc4d/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4d/conv3_1"
top: "inc4d/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv3_1/scale"
type: "Scale"
bottom: "inc4d/conv3_1"
top: "inc4d/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu3_1"
type: "ReLU"
bottom: "inc4d/conv3_1"
top: "inc4d/conv3_1"
}
layer {
name: "inc4d/conv3_2"
type: "Convolution"
bottom: "inc4d/conv3_1"
top: "inc4d/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4d/conv3_2"
top: "inc4d/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv3_2/scale"
type: "Scale"
bottom: "inc4d/conv3_2"
top: "inc4d/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu3_2"
type: "ReLU"
bottom: "inc4d/conv3_2"
top: "inc4d/conv3_2"
}
layer {
name: "inc4d/conv5_1"
type: "Convolution"
bottom: "inc4c"
top: "inc4d/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4d/conv5_1"
top: "inc4d/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv5_1/scale"
type: "Scale"
bottom: "inc4d/conv5_1"
top: "inc4d/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu5_1"
type: "ReLU"
bottom: "inc4d/conv5_1"
top: "inc4d/conv5_1"
}
layer {
name: "inc4d/conv5_2"
type: "Convolution"
bottom: "inc4d/conv5_1"
top: "inc4d/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4d/conv5_2"
top: "inc4d/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv5_2/scale"
type: "Scale"
bottom: "inc4d/conv5_2"
top: "inc4d/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu5_2"
type: "ReLU"
bottom: "inc4d/conv5_2"
top: "inc4d/conv5_2"
}
layer {
name: "inc4d/conv5_3"
type: "Convolution"
bottom: "inc4d/conv5_2"
top: "inc4d/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4d/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4d/conv5_3"
top: "inc4d/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4d/conv5_3/scale"
type: "Scale"
bottom: "inc4d/conv5_3"
top: "inc4d/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4d/relu5_3"
type: "ReLU"
bottom: "inc4d/conv5_3"
top: "inc4d/conv5_3"
}
layer {
name: "inc4d"
type: "Concat"
bottom: "inc4d/conv1"
bottom: "inc4d/conv3_2"
bottom: "inc4d/conv5_3"
top: "inc4d"
}

################################################################################
# Inception 4e
################################################################################
layer {
name: "inc4e/conv1"
type: "Convolution"
bottom: "inc4d"
top: "inc4e/conv1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv1/bn"
type: "BatchNorm"
bottom: "inc4e/conv1"
top: "inc4e/conv1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv1/scale"
type: "Scale"
bottom: "inc4e/conv1"
top: "inc4e/conv1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu1"
type: "ReLU"
bottom: "inc4e/conv1"
top: "inc4e/conv1"
}
layer {
name: "inc4e/conv3_1"
type: "Convolution"
bottom: "inc4d"
top: "inc4e/conv3_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv3_1/bn"
type: "BatchNorm"
bottom: "inc4e/conv3_1"
top: "inc4e/conv3_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv3_1/scale"
type: "Scale"
bottom: "inc4e/conv3_1"
top: "inc4e/conv3_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu3_1"
type: "ReLU"
bottom: "inc4e/conv3_1"
top: "inc4e/conv3_1"
}
layer {
name: "inc4e/conv3_2"
type: "Convolution"
bottom: "inc4e/conv3_1"
top: "inc4e/conv3_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 96 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv3_2/bn"
type: "BatchNorm"
bottom: "inc4e/conv3_2"
top: "inc4e/conv3_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv3_2/scale"
type: "Scale"
bottom: "inc4e/conv3_2"
top: "inc4e/conv3_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu3_2"
type: "ReLU"
bottom: "inc4e/conv3_2"
top: "inc4e/conv3_2"
}
layer {
name: "inc4e/conv5_1"
type: "Convolution"
bottom: "inc4d"
top: "inc4e/conv5_1"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 16 kernel_size: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv5_1/bn"
type: "BatchNorm"
bottom: "inc4e/conv5_1"
top: "inc4e/conv5_1"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv5_1/scale"
type: "Scale"
bottom: "inc4e/conv5_1"
top: "inc4e/conv5_1"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu5_1"
type: "ReLU"
bottom: "inc4e/conv5_1"
top: "inc4e/conv5_1"
}
layer {
name: "inc4e/conv5_2"
type: "Convolution"
bottom: "inc4e/conv5_1"
top: "inc4e/conv5_2"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv5_2/bn"
type: "BatchNorm"
bottom: "inc4e/conv5_2"
top: "inc4e/conv5_2"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv5_2/scale"
type: "Scale"
bottom: "inc4e/conv5_2"
top: "inc4e/conv5_2"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu5_2"
type: "ReLU"
bottom: "inc4e/conv5_2"
top: "inc4e/conv5_2"
}
layer {
name: "inc4e/conv5_3"
type: "Convolution"
bottom: "inc4e/conv5_2"
top: "inc4e/conv5_3"
param { lr_mult: 0.1 decay_mult: 0.1 }
param { lr_mult: 0.2 decay_mult: 0 }
convolution_param {
num_output: 32 kernel_size: 3 pad: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "inc4e/conv5_3/bn"
type: "BatchNorm"
bottom: "inc4e/conv5_3"
top: "inc4e/conv5_3"
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
param { lr_mult: 0 decay_mult: 0 }
batch_norm_param { use_global_stats: true }
}
layer {
name: "inc4e/conv5_3/scale"
type: "Scale"
bottom: "inc4e/conv5_3"
top: "inc4e/conv5_3"
param { lr_mult: 0.1 decay_mult: 0 }
param { lr_mult: 0.1 decay_mult: 0 }
scale_param { bias_term: true }
}
layer {
name: "inc4e/relu5_3"
type: "ReLU"
bottom: "inc4e/conv5_3"
top: "inc4e/conv5_3"
}
layer {
name: "inc4e"
type: "Concat"
bottom: "inc4e/conv1"
bottom: "inc4e/conv3_2"
bottom: "inc4e/conv5_3"
top: "inc4e"
}

################################################################################
# hyper feature
################################################################################
layer {
name: "downsample"
type: "Pooling"
bottom: "conv3"
top: "downsample"
pooling_param {
kernel_size: 3 stride: 2 pad: 0
pool: MAX
}
}
layer {
name: "upsample"
type: "Deconvolution"
bottom: "inc4e"
top: "upsample"
param { lr_mult: 0 decay_mult: 0 }
convolution_param {
num_output: 256
kernel_size: 4 stride: 2 pad: 1
group: 256
weight_filler: { type: "bilinear" }
bias_term: false
}
}
layer {
name: "concat"
type: "Concat"
bottom: "downsample"
bottom: "inc3e"
bottom: "upsample"
top: "concat"
concat_param { axis: 1 }
}
layer {
name: "convf"
type: "Convolution"
bottom: "concat"
top: "convf"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 256
kernel_size: 1 stride: 1 pad: 0
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "reluf"
type: "ReLU"
bottom: "convf"
top: "convf"
}
layer {
name: "upsample-2x"
type: "Deconvolution"
bottom: "convf"
top: "upsample-2x"
param { lr_mult: 0 decay_mult: 0 }
convolution_param {
num_output: 256
kernel_size: 4 stride: 2 pad: 1
group: 256
weight_filler: { type: "bilinear" }
bias_term: false
}
}

################################################################################
# RPN
################################################################################
layer {
name: "rpn_conv1"
type: "Convolution"
bottom: "upsample-2x"
top: "rpn_conv1"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 256
kernel_size: 1 stride: 1 pad: 0
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "rpn_relu1"
type: "ReLU"
bottom: "rpn_conv1"
top: "rpn_conv1"
}

layer {
name: "rpn_cls_score"
type: "Convolution"
bottom: "rpn_conv1"
top: "rpn_cls_score"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 50
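# = 2 (bg/fg) x 25 anchors (5 scales x 5 ratios, see proposal_param below)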
kernel_size: 1 stride: 1 pad: 0
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "rpn_bbox_pred"
type: "Convolution"
bottom: "rpn_conv1"
top: "rpn_bbox_pred"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
convolution_param {
num_output: 100
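# = 4 box deltas x 25 anchors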
kernel_size: 1 stride: 1 pad: 0
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}

layer {
bottom: "rpn_cls_score"
top: "rpn_cls_score_reshape"
name: "rpn_cls_score_reshape"
type: "Reshape"
reshape_param { shape { dim: 0 dim: 2 dim: -1 dim: 0 } }
}

################################################################################
# Proposal
################################################################################
layer {
name: "rpn_cls_prob"
type: "Softmax"
bottom: "rpn_cls_score_reshape"
top: "rpn_cls_prob"
}
layer {
name: 'rpn_cls_prob_reshape'
type: 'Reshape'
bottom: 'rpn_cls_prob'
top: 'rpn_cls_prob_reshape'
reshape_param { shape { dim: 0 dim: 50 dim: -1 dim: 0 } }
}

layer {
name: 'proposal'
type: 'Proposal'
bottom: 'rpn_cls_prob_reshape'
bottom: 'rpn_bbox_pred'
bottom: 'im_info'
top: 'rois'
top: 'scores'
#include { phase: TEST }
proposal_param {
ratio: 0.5 ratio: 0.667 ratio: 1.0 ratio: 1.5 ratio: 2.0
scale: 3 scale: 6 scale: 9 scale: 16 scale: 32
base_size: 8
feat_stride: 8
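# 1/8 of the input: convf sits at 1/16 and "upsample-2x" doubles it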
pre_nms_topn: 6000
post_nms_topn: 400
nms_thresh: 0.3
min_size: 8
}
}

################################################################################
# RCNN
################################################################################
layer {
name: "roi_pool_conv5"
type: "ROIPooling"
bottom: "upsample-2x"
bottom: "rois"
top: "roi_pool_conv5"
roi_pooling_param {
pooled_w: 6 pooled_h: 6
spatial_scale: 0.125  # 1/8
}
}
layer {
name: "fc6_L"
type: "InnerProduct"
bottom: "roi_pool_conv5"
top: "fc6_L"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 512
weight_filler { type: "xavier" std: 0.005 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "fc6_U"
type: "InnerProduct"
bottom: "fc6_L"
top: "fc6_U"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 4096
weight_filler { type: "xavier" std: 0.005 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "relu6"
type: "ReLU"
bottom: "fc6_U"
top: "fc6_U"
}

################################################################################
# fc 7
################################################################################
layer {
name: "fc7_L"
type: "InnerProduct"
bottom: "fc6_U"
top: "fc7_L"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 128
weight_filler { type: "xavier" std: 0.005 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "fc7_U"
type: "InnerProduct"
bottom: "fc7_L"
top: "fc7_U"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 4096
weight_filler { type: "xavier" std: 0.005 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "relu7"
type: "ReLU"
bottom: "fc7_U"
top: "fc7_U"
}

################################################################################
# output
################################################################################
layer {
name: "cls_score"
type: "InnerProduct"
bottom: "fc7_U"
top: "cls_score"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 21
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "bbox_pred"
type: "InnerProduct"
bottom: "fc7_U"
top: "bbox_pred"
param { lr_mult: 1 decay_mult: 1 }
param { lr_mult: 2 decay_mult: 0 }
inner_product_param {
num_output: 84
weight_filler { type: "gaussian" std: 0.001 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "cls_prob"
type: "Softmax"
bottom: "cls_score"
top: "cls_prob"
loss_param {
ignore_label: -1
normalize: true
}
}
What's the problem with my prototxt?

Mirrors of the models

Hi!
Thank you for your great paper! I find it very interesting that you pick up all the latest ideas (at all levels of the deep architecture) and implement them.

I want to run the trained model, but there is a problem with the Dropbox links: "This account's links are generating too much traffic and have been temporarily disabled!"

Do you have any mirrors, e.g. on Google Drive?

Fine-tuning from a model trained with the MS COCO dataset

Hi,

I'm trying to train models based on the ImageNet pretrained model. To start with, I first trained a model using the settings from example_train_384 (with adjustments for the 81 classes of COCO) on coco_2014_train. The model was trained for 500k iterations and achieves 0.415 mAP@0.5 IoU on coco_2014_minival, which is reasonable for the limited number of iterations.

However, when I try to fine-tune the model on voc_2007_trainval with cls_score and bbox_pred re-initialized, the result is bad (less than 0.20 mAP). I have tried base_lr set to 0.001 or 0.0001, with 100k or 200k iterations.

Could you please share how the fine-tuning is done after the model is trained with coco_2014_train (or coco_2014_trainval+voc_2007_trainval+voc_2012_trainval, as in the paper)? I understand the model is different; I just want to know how the fine-tuning is done correctly.

@sanghoon @kyehyeon
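(For reference: Caffe copies weights by layer name, so "re-initializing" cls_score and bbox_pred is usually done by renaming those layers in the fine-tuning prototxt; Caffe's own shape-mismatch error message suggests exactly this. A minimal pycaffe sketch, with hypothetical file and layer names, to check that the renamed heads really start from fresh weights:)

import caffe

# 'train_voc.prototxt' is assumed to be a copy of the training prototxt in
# which cls_score/bbox_pred were renamed (e.g. cls_score_voc, bbox_pred_voc)
# so that weight copying from the COCO model is skipped for them.
net = caffe.Net('train_voc.prototxt', 'pvanet_coco.caffemodel', caffe.TRAIN)
for name in ('cls_score_voc', 'bbox_pred_voc'):
    w = net.params[name][0].data
    print(name, w.shape, 'std=%.4f' % w.std())  # should match the filler std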

A question about deconvolution

layer {
name: "upsample"
type: "Deconvolution"
bottom: "conv5_4"
top: "upsample"
param { lr_mult: 0 decay_mult: 0}
convolution_param {
num_output: 384 kernel_size: 4 pad: 1 stride: 2 group: 384
weight_filler: {type: "bilinear" }
bias_term: false
}
}

My question is: why is this layer's lr_mult set to 0?

And if lr_mult is 0, why not use a "Reshape" layer to obtain the upsampled map instead?

Thanks!
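(For reference, a minimal numpy sketch of what the standard FCN-style "bilinear" filler computes, assuming that is what Caffe's filler does here. With group == num_output, every channel gets this fixed bilinear interpolation kernel, so there is nothing to learn and lr_mult can stay 0. A plain "Reshape" cannot replace it: reshaping only rearranges existing values, while upsampling has to interpolate new ones.)

import numpy as np

def bilinear_kernel(kernel_size):
    # One-channel weights of the 'bilinear' filler; with kernel_size=4,
    # stride=2, pad=1 this performs exact 2x bilinear upsampling.
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    og = np.ogrid[:kernel_size, :kernel_size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

print(bilinear_kernel(4))  # rows/cols weighted 0.25, 0.75, 0.75, 0.25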

The small-object detection issue

Dear All,

When I train my own dataset (229 logo categories, 30k+ images in total) with PVANet, I get mAP = 0.9672, which is indeed very high. PVANet is really good; the original R-FCN gets 0.953 (with a modification of mine it reaches 0.9705). They use different backbones, though, so this is just a reference. However, I find that the small-object detection results are not as good as those of Faster R-CNN and R-FCN.

For small objects, Faster R-CNN gets the best result, but its false-alarm rate is also the highest. R-FCN gets a middle result and the lowest false-alarm rate. PVANet gets the worst small-object detection result and a middle false-alarm rate.

So I am confused by this result: since PVANet uses the idea of HyperNet, it is supposed to perform well on small objects, but it does not on my dataset...

Can anyone give me some tips on this issue? How can I fix it and get a better result?
@sanghoon
@kyehyeon

About two proposal layers

I find that the performance of pva-net is not good on small objects, so I want to add a second proposal layer on conv3 or another layer in order to detect small objects. Does Faster R-CNN support two proposal layers?

Finetune

I am trying to fine-tune pvanet on my data using the solver under example_finetune. However, I get the following error:
I1201 22:09:59.937469 32013 net.cpp:270] This network produces output rpn_cls_loss
I1201 22:09:59.937480 32013 net.cpp:270] This network produces output rpn_loss_bbox
I1201 22:09:59.937691 32013 net.cpp:283] Network initialization done.
I1201 22:09:59.938531 32013 solver.cpp:60] Solver scaffolding done.
Loading pretrained model weights from data/ImagenetTrainedpvanetComppressed.model
I1201 22:10:00.516289 32013 net.cpp:761] Ignoring source layer data
I1201 22:10:00.516345 32013 net.cpp:761] Ignoring source layer label_data_1_split
I1201 22:10:00.519294 32013 net.cpp:761] Ignoring source layer pool5
F1201 22:10:00.775359 32013 net.cpp:774] Cannot copy param 0 weights from layer 'fc6'; shape mismatch. Source param shape is 4096 13824 (56623104); target param shape is 4096 18432 (75497472). To learn this layer's parameters from scratch rather than copying from a saved net, rename the layer.

Has anyone seen this error before? Can someone help me with it? I am using the ImageNet pretrained model for the initial weights.

Thanks
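
(For reference, the arithmetic behind that message: fc6's input dimension is channels x pooled_w x pooled_h = C x 6 x 6 for the usual 6x6 RoI pooling. 56623104 / 4096 = 13824, and 13824 / 36 = 384 channels in the pretrained model, while 75497472 / 4096 = 18432, and 18432 / 36 = 512 channels expected by the prototxt. So the hyper feature feeding RoI pooling has a different width in the two networks, which is exactly the mismatch being reported; as the error itself says, renaming the layer makes Caffe train it from scratch instead of copying.)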

Visualization training

Hello,
I get strange results when trying to visualize the training process (in fast_rcnn/train.py):

import numpy as np
from fast_rcnn.bbox_transform import bbox_transform_inv, clip_boxes

# net, im and labels come from the surrounding training loop
rois = net.blobs['rois'].data
bbox_targets = net.blobs['bbox_targets'].data
boxes = rois[:, 1:5]

tr_boxes = bbox_transform_inv(boxes, bbox_targets)
tr_boxes = clip_boxes(tr_boxes, im.shape)
labels = labels.astype(dtype=int)
dets_tr = []
for k, cls_ind in enumerate(labels):
    if cls_ind != 0:
        cls_boxes = tr_boxes[k, 4 * cls_ind:4 * (cls_ind + 1)]
        # cls_boxes = boxes[k]
        dets_tr.append([cls_boxes[0], cls_boxes[1], cls_boxes[2], cls_boxes[3], cls_ind])

dets_tr = np.array(dets_tr, dtype=int)

I am trying to visualize the Fast R-CNN targets.
Red is the target (bbox_targets); I expect the target boxes to restore the GT boxes from the RoI boxes.
Blue is the prediction (bbox_pred) of pvanet/comp/test.model.

The second image is from a model trained from ImageNet (finetune).

[two visualization screenshots attached]
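
(Not an official answer, but one thing worth checking: in py-faster-rcnn, when cfg.TRAIN.BBOX_NORMALIZE_TARGETS is enabled, the bbox_targets blob holds mean/std-normalized deltas, so they have to be de-normalized before bbox_transform_inv. A sketch under that assumption, continuing from the snippet above:)

import numpy as np
from fast_rcnn.config import cfg

# bbox_targets as produced by the roi-data layer may be normalized;
# undo the normalization before decoding the deltas into boxes.
if cfg.TRAIN.BBOX_NORMALIZE_TARGETS:
    reps = bbox_targets.shape[1] // 4
    means = np.tile(cfg.TRAIN.BBOX_NORMALIZE_MEANS, reps)
    stds = np.tile(cfg.TRAIN.BBOX_NORMALIZE_STDS, reps)
    bbox_targets = bbox_targets * stds + means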

PVANet Lite version

Thanks for the update. I notice you have removed the PVANet Lite version. Could you provide the PVA-lite classification model pretrained on ImageNet? The high efficiency of the Lite version is very appealing.

Fail to convert full/original.model to full/test.model

Hi

I am trying to use the method in #5 to merge the conv, bn, and scale layers in original.model into single conv layers, as in test.model.

But I failed: the file I obtained is different from full/test.model.

When I check full/original.model, I find that it contains many zeros, which seems to cause the failure.

So could anyone tell me whether such a conversion is possible with that method? Is there another trick to the conversion?
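
(For reference, the usual conv + BatchNorm + Scale folding math, as a minimal numpy sketch — not the exact script from #5. Note that Caffe's BatchNorm stores the moving averages multiplied by a scale factor kept in blob 2, which must be divided out first; zeros in the BN blobs would indeed break this.)

import numpy as np

def fold_bn_scale(W, b, mean, var, scale_factor, gamma, beta, eps=1e-5):
    # Fold a BatchNorm (+ Scale) pair into the preceding convolution.
    # W has shape (out_channels, in_channels, kh, kw); the rest are
    # per-output-channel vectors taken from the BN/Scale blobs.
    sf = 1.0 / scale_factor if scale_factor != 0 else 0.0
    mean, var = mean * sf, var * sf
    alpha = gamma / np.sqrt(var + eps)      # per-output-channel scale
    W_new = W * alpha.reshape(-1, 1, 1, 1)
    b_new = (b - mean) * alpha + beta
    return W_new, b_new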

Which pretrained model is better if I fine-tune on a new dataset?

I fine-tuned pvanet/full/test.model using the train.prototxt in pvanet/example_finetune on KITTI, but the result is not good and is even worse than pvanet/full/test.model itself. Should I use imagenet/full/test.model as the pretrained model if I want to train on a dataset whose classes differ from VOC's?
Could you please give me some advice? Thank you very much.
#27

How should I train a compressed or lite model on my own data?

I want to train a compressed or lite model on my own data, whose annotation format is the same as the VOC dataset but with different classes. I just use the command: "./tools/train_net.py --gpu 0 --solver models/pvanet/example_finetune/solver.prototxt --weights models/pvanet/comp/test.model --iters 100000 --cfg models/pvanet/cfgs/train.yml --imdb voc_2007_trainval". Is this the correct way? I found the detection performance of this model to be quite bad. A model fine-tuned from models/pvanet/full/test.model has almost the same bad performance, with many false detections.

Please give me some detailed advice. Thanks so much! Looking forward to your reply!

Proposal layer issue

In the new version of the PVANet model, why did you go back to the Python version of the proposal layer? Are there any concerns about the C++ one? Thanks!

Does it need a fixed image size of 640x1056?

I tested it with my own data at 640x1000, and it gives the following error:

I1020 22:34:11.216954 7221 net.cpp:91] Creating Layer conv4_1/incep
I1020 22:34:11.216960 7221 net.cpp:425] conv4_1/incep <- conv4_1/incep/0
I1020 22:34:11.216967 7221 net.cpp:425] conv4_1/incep <- conv4_1/incep/1_0
I1020 22:34:11.216974 7221 net.cpp:425] conv4_1/incep <- conv4_1/incep/2_1
I1020 22:34:11.216979 7221 net.cpp:425] conv4_1/incep <- conv4_1/incep/poolproj
I1020 22:34:11.216986 7221 net.cpp:399] conv4_1/incep -> conv4_1/incep
F1020 22:34:11.217005 7221 concat_layer.cpp:42] Check failed: top_shape[j] == bottom[i]->shape(j) (63 vs. 62) All inputs must have the same shape, except at concat_axis.
*** Check failure stack trace: ***
Aborted

The conv4_1/incep/poolproj branch uses a max pooling layer whose output size is rounded down (floor), giving a 40x62 feature map, while the other convolution branches round up (ceil) and get 40x63.

So must the image be resized to 640x1056?
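
(For reference, the rounding arithmetic: the mismatch happens at feature stride 16, and 1000 / 16 = 62.5, so floor gives 62 while ceil gives 63 and the Concat fails. With 1056, 1056 / 16 = 66 and 640 / 16 = 40 exactly, so all branches agree. Any input whose sides divide evenly by the network's total stride — multiples of 32 here, since the deepest branch sits at 1/32 — should avoid the mismatch; it does not have to be exactly 640x1056.)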

Training pvanet on multiple GPUs?

Is it possible to train pvanet on a single machine with multiple GPUs?

According to rbgirshick#107, the Python layers make it impossible to use multiple GPUs.

However, the paper "Feature Pyramid Networks for Object Detection" (https://arxiv.org/abs/1612.03144) suggests that an RPN framework can be trained with synchronized SGD on 8 GPUs (using caffe2).

I am currently working on the COCO dataset for object detection with pvanet. Any hints for speeding up training on a multi-GPU machine? Or would training on a GPU cluster help?

Repeating the experiment for "example_train_384"

I want to verify whether my setup is correct. I intend to follow:
https://github.com/sanghoon/pva-faster-rcnn/tree/master/models/pvanet/example_train_384,

Training for 100k iterations (toy)
tools/train_net.py
--gpu 0
--solver models/pvanet/example_train_384/solver.prototxt
--weights models/pvanet/imagenet/original.model
--iters 100000
--cfg models/pvanet/cfgs/train.yml
--imdb voc_2007_trainval

For the 100k-iteration (toy) run above, what results should I expect if I test on imdb voc_2007_test?

It is mentioned that this is not the same as the paper, so I do not have reference results for checking.
If anyone can share their results or training log files, that would be nice. Thanks!

Why below "**post_nms_topn**" is set to 200?

Dear All:

When I read models/pvanet/example_train_384_logo229/train.prototxt, I find that "post_nms_topn" below is set to 200. As far as I know, during the training phase the RPN usually generates 2000 proposals per image, and in the inference phase it generates only about 200 or 300 proposals to save time.

So I am confused: why is "post_nms_topn" set to 200 during the training phase?

C++ implementation of the proposal layer

layer {
name: 'proposal'
type: 'Proposal'
bottom: 'rpn_cls_prob_reshape'
bottom: 'rpn_bbox_pred'
bottom: 'im_info'
top: 'rpn_rois'
top: 'rpn_scores'
proposal_param {
ratio: 0.5 ratio: 0.667 ratio: 1.0 ratio: 1.5 ratio: 2.0
scale: 3 scale: 6 scale: 9 scale: 16 scale: 32
base_size: 16
feat_stride: 16
pre_nms_topn: 12000
post_nms_topn: 200  # <-- Why is post_nms_topn 200 here?
nms_thresh: 0.7
min_size: 16
}
}

The loss is NaN when fine-tuning the lite model with test_690K.model

I used test_690K.model and original.pt from the lite model for fine-tuning. There are only two categories for training. The result is:
I1128 09:50:39.461268 20811 net.cpp:608] [Forward] Layer input-data, top blob data data: 52.5027
I1128 09:50:39.461340 20811 net.cpp:608] [Forward] Layer input-data, top blob im_info data: 321.144
I1128 09:50:39.461350 20811 net.cpp:608] [Forward] Layer input-data, top blob gt_boxes data: 401.691
I1128 09:50:39.462455 20811 net.cpp:608] [Forward] Layer data_input-data_0_split, top blob data_input-data_0_split_0 data: 52.5027
I1128 09:50:39.463603 20811 net.cpp:608] [Forward] Layer data_input-data_0_split, top blob data_input-data_0_split_1 data: 52.5027
I1128 09:50:39.463634 20811 net.cpp:608] [Forward] Layer im_info_input-data_1_split, top blob im_info_input-data_1_split_0 data: 321.144
I1128 09:50:39.463642 20811 net.cpp:608] [Forward] Layer im_info_input-data_1_split, top blob im_info_input-data_1_split_1 data: 321.144
I1128 09:50:39.463652 20811 net.cpp:608] [Forward] Layer gt_boxes_input-data_2_split, top blob gt_boxes_input-data_2_split_0 data: 401.691
I1128 09:50:39.463660 20811 net.cpp:608] [Forward] Layer gt_boxes_input-data_2_split, top blob gt_boxes_input-data_2_split_1 data: 401.691
I1128 09:50:39.469156 20811 net.cpp:608] [Forward] Layer conv1, top blob conv1 data: 1.59595
I1128 09:50:39.469223 20811 net.cpp:620] [Forward] Layer conv1, param blob 0 data: 0.00404756
I1128 09:50:39.469260 20811 net.cpp:620] [Forward] Layer conv1, param blob 1 data: 1.39765
I1128 09:50:39.472007 20811 net.cpp:608] [Forward] Layer conv1/bn, top blob conv1 data: 504.685
I1128 09:50:39.472070 20811 net.cpp:620] [Forward] Layer conv1/bn, param blob 0 data: 0
I1128 09:50:39.472110 20811 net.cpp:620] [Forward] Layer conv1/bn, param blob 1 data: 0
I1128 09:50:39.472121 20811 net.cpp:620] [Forward] Layer conv1/bn, param blob 2 data: 0
I1128 09:50:39.473464 20811 net.cpp:608] [Forward] Layer conv1/scale, top blob conv1 data: 504.685
I1128 09:50:39.473526 20811 net.cpp:620] [Forward] Layer conv1/scale, param blob 0 data: 1
I1128 09:50:39.473559 20811 net.cpp:620] [Forward] Layer conv1/scale, param blob 1 data: 0
I1128 09:50:39.474113 20811 net.cpp:608] [Forward] Layer relu1, top blob conv1 data: 233.725
I1128 09:50:39.476871 20811 net.cpp:608] [Forward] Layer conv2, top blob conv2 data: 531.794
I1128 09:50:39.476934 20811 net.cpp:620] [Forward] Layer conv2, param blob 0 data: 0.0443382
I1128 09:50:39.476970 20811 net.cpp:620] [Forward] Layer conv2, param blob 1 data: 1.43271
I1128 09:50:39.478186 20811 net.cpp:608] [Forward] Layer conv2/bn, top blob conv2 data: 168168
I1128 09:50:39.478248 20811 net.cpp:620] [Forward] Layer conv2/bn, param blob 0 data: 0
I1128 09:50:39.478283 20811 net.cpp:620] [Forward] Layer conv2/bn, param blob 1 data: 0
I1128 09:50:39.478294 20811 net.cpp:620] [Forward] Layer conv2/bn, param blob 2 data: 0
I1128 09:50:39.479002 20811 net.cpp:608] [Forward] Layer conv2/scale, top blob conv2 data: 168168
I1128 09:50:39.479048 20811 net.cpp:620] [Forward] Layer conv2/scale, param blob 0 data: 1
I1128 09:50:39.479082 20811 net.cpp:620] [Forward] Layer conv2/scale, param blob 1 data: 0
I1128 09:50:39.479312 20811 net.cpp:608] [Forward] Layer relu2, top blob conv2 data: 43591.3
I1128 09:50:39.480777 20811 net.cpp:608] [Forward] Layer conv3, top blob conv3 data: 87148.2
I1128 09:50:39.480840 20811 net.cpp:620] [Forward] Layer conv3, param blob 0 data: 0.0234285
I1128 09:50:39.480875 20811 net.cpp:620] [Forward] Layer conv3, param blob 1 data: 0.90888
I1128 09:50:39.481591 20811 net.cpp:608] [Forward] Layer conv3/bn, top blob conv3 data: 2.75587e+07
I1128 09:50:39.481639 20811 net.cpp:620] [Forward] Layer conv3/bn, param blob 0 data: 0
I1128 09:50:39.481676 20811 net.cpp:620] [Forward] Layer conv3/bn, param blob 1 data: 0
I1128 09:50:39.481688 20811 net.cpp:620] [Forward] Layer conv3/bn, param blob 2 data: 0
I1128 09:50:39.482126 20811 net.cpp:608] [Forward] Layer conv3/scale, top blob conv3 data: 2.75587e+07
I1128 09:50:39.482187 20811 net.cpp:620] [Forward] Layer conv3/scale, param blob 0 data: 1
I1128 09:50:39.482223 20811 net.cpp:620] [Forward] Layer conv3/scale, param blob 1 data: 0
I1128 09:50:39.482357 20811 net.cpp:608] [Forward] Layer relu3, top blob conv3 data: 5.31553e+06
I1128 09:50:39.482434 20811 net.cpp:608] [Forward] Layer conv3_relu3_0_split, top blob conv3_relu3_0_split_0 data: 5.31553e+06
I1128 09:50:39.482506 20811 net.cpp:608] [Forward] Layer conv3_relu3_0_split, top blob conv3_relu3_0_split_1 data: 5.31553e+06
I1128 09:50:39.482578 20811 net.cpp:608] [Forward] Layer conv3_relu3_0_split, top blob conv3_relu3_0_split_2 data: 5.31553e+06
I1128 09:50:39.482647 20811 net.cpp:608] [Forward] Layer conv3_relu3_0_split, top blob conv3_relu3_0_split_3 data: 5.31553e+06
I1128 09:50:39.483050 20811 net.cpp:608] [Forward] Layer inc3a/pool1, top blob inc3a/pool1 data: 1.16056e+07
I1128 09:50:39.483391 20811 net.cpp:608] [Forward] Layer inc3a/conv1, top blob inc3a/conv1 data: 2.59158e+07
I1128 09:50:39.483438 20811 net.cpp:620] [Forward] Layer inc3a/conv1, param blob 0 data: 0.0797974
I1128 09:50:39.483474 20811 net.cpp:620] [Forward] Layer inc3a/conv1, param blob 1 data: 1.80226
I1128 09:50:39.484019 20811 net.cpp:608] [Forward] Layer inc3a/conv1/bn, top blob inc3a/conv1 data: 8.19529e+09
I1128 09:50:39.484066 20811 net.cpp:620] [Forward] Layer inc3a/conv1/bn, param blob 0 data: 0
I1128 09:50:39.484105 20811 net.cpp:620] [Forward] Layer inc3a/conv1/bn, param blob 1 data: 0
I1128 09:50:39.484134 20811 net.cpp:620] [Forward] Layer inc3a/conv1/bn, param blob 2 data: 0
I1128 09:50:39.484411 20811 net.cpp:608] [Forward] Layer inc3a/conv1/scale, top blob inc3a/conv1 data: 8.19529e+09
I1128 09:50:39.484516 20811 net.cpp:620] [Forward] Layer inc3a/conv1/scale, param blob 0 data: 1
I1128 09:50:39.484551 20811 net.cpp:620] [Forward] Layer inc3a/conv1/scale, param blob 1 data: 0
I1128 09:50:39.484599 20811 net.cpp:608] [Forward] Layer inc3a/relu1, top blob inc3a/conv1 data: 2.12662e+09
I1128 09:50:39.484992 20811 net.cpp:608] [Forward] Layer inc3a/conv3_1, top blob inc3a/conv3_1 data: 1.63556e+07
I1128 09:50:39.485038 20811 net.cpp:620] [Forward] Layer inc3a/conv3_1, param blob 0 data: 0.0775309
I1128 09:50:39.485074 20811 net.cpp:620] [Forward] Layer inc3a/conv3_1, param blob 1 data: 1.72792
I1128 09:50:39.485589 20811 net.cpp:608] [Forward] Layer inc3a/conv3_1/bn, top blob inc3a/conv3_1 data: 5.17211e+09
I1128 09:50:39.485651 20811 net.cpp:620] [Forward] Layer inc3a/conv3_1/bn, param blob 0 data: 0
I1128 09:50:39.485685 20811 net.cpp:620] [Forward] Layer inc3a/conv3_1/bn, param blob 1 data: 0
I1128 09:50:39.485697 20811 net.cpp:620] [Forward] Layer inc3a/conv3_1/bn, param blob 2 data: 0
I1128 09:50:39.485935 20811 net.cpp:608] [Forward] Layer inc3a/conv3_1/scale, top blob inc3a/conv3_1 data: 5.17211e+09
I1128 09:50:39.485980 20811 net.cpp:620] [Forward] Layer inc3a/conv3_1/scale, param blob 0 data: 1
I1128 09:50:39.486016 20811 net.cpp:620] [Forward] Layer inc3a/conv3_1/scale, param blob 1 data: 0
I1128 09:50:39.486063 20811 net.cpp:608] [Forward] Layer inc3a/relu3_1, top blob inc3a/conv3_1 data: 3.01208e+09
I1128 09:50:39.486531 20811 net.cpp:608] [Forward] Layer inc3a/conv3_2, top blob inc3a/conv3_2 data: 4.763e+09
I1128 09:50:39.486577 20811 net.cpp:620] [Forward] Layer inc3a/conv3_2, param blob 0 data: 0.0346724
I1128 09:50:39.486614 20811 net.cpp:620] [Forward] Layer inc3a/conv3_2, param blob 1 data: 1.1992
I1128 09:50:39.487105 20811 net.cpp:608] [Forward] Layer inc3a/conv3_2/bn, top blob inc3a/conv3_2 data: 1.50619e+12
I1128 09:50:39.487152 20811 net.cpp:620] [Forward] Layer inc3a/conv3_2/bn, param blob 0 data: 0
I1128 09:50:39.487187 20811 net.cpp:620] [Forward] Layer inc3a/conv3_2/bn, param blob 1 data: 0
I1128 09:50:39.487200 20811 net.cpp:620] [Forward] Layer inc3a/conv3_2/bn, param blob 2 data: 0
I1128 09:50:39.487448 20811 net.cpp:608] [Forward] Layer inc3a/conv3_2/scale, top blob inc3a/conv3_2 data: 1.50619e+12
I1128 09:50:39.487493 20811 net.cpp:620] [Forward] Layer inc3a/conv3_2/scale, param blob 0 data: 1
I1128 09:50:39.487529 20811 net.cpp:620] [Forward] Layer inc3a/conv3_2/scale, param blob 1 data: 0
I1128 09:50:39.487576 20811 net.cpp:608] [Forward] Layer inc3a/relu3_2, top blob inc3a/conv3_2 data: 9.85161e+11
I1128 09:50:39.487929 20811 net.cpp:608] [Forward] Layer inc3a/conv5_1, top blob inc3a/conv5_1 data: 1.93234e+07
I1128 09:50:39.487977 20811 net.cpp:620] [Forward] Layer inc3a/conv5_1, param blob 0 data: 0.133948
I1128 09:50:39.488013 20811 net.cpp:620] [Forward] Layer inc3a/conv5_1, param blob 1 data: 1.39471
I1128 09:50:39.488543 20811 net.cpp:608] [Forward] Layer inc3a/conv5_1/bn, top blob inc3a/conv5_1 data: 6.11059e+09
I1128 09:50:39.488591 20811 net.cpp:620] [Forward] Layer inc3a/conv5_1/bn, param blob 0 data: 0
I1128 09:50:39.488627 20811 net.cpp:620] [Forward] Layer inc3a/conv5_1/bn, param blob 1 data: 0
I1128 09:50:39.488639 20811 net.cpp:620] [Forward] Layer inc3a/conv5_1/bn, param blob 2 data: 0
I1128 09:50:39.488883 20811 net.cpp:608] [Forward] Layer inc3a/conv5_1/scale, top blob inc3a/conv5_1 data: 6.11059e+09
I1128 09:50:39.488927 20811 net.cpp:620] [Forward] Layer inc3a/conv5_1/scale, param blob 0 data: 1
I1128 09:50:39.488962 20811 net.cpp:620] [Forward] Layer inc3a/conv5_1/scale, param blob 1 data: 0
I1128 09:50:39.489009 20811 net.cpp:608] [Forward] Layer inc3a/relu5_1, top blob inc3a/conv5_1 data: 2.17753e+09
I1128 09:50:39.489634 20811 net.cpp:608] [Forward] Layer inc3a/conv5_2, top blob inc3a/conv5_2 data: 4.70098e+09
I1128 09:50:39.489682 20811 net.cpp:620] [Forward] Layer inc3a/conv5_2, param blob 0 data: 0.0970994
I1128 09:50:39.489717 20811 net.cpp:620] [Forward] Layer inc3a/conv5_2, param blob 1 data: 0.49631
I1128 09:50:39.490288 20811 net.cpp:608] [Forward] Layer inc3a/conv5_2/bn, top blob inc3a/conv5_2 data: 1.48658e+12
I1128 09:50:39.490334 20811 net.cpp:620] [Forward] Layer inc3a/conv5_2/bn, param blob 0 data: 0
I1128 09:50:39.490370 20811 net.cpp:620] [Forward] Layer inc3a/conv5_2/bn, param blob 1 data: 0
I1128 09:50:39.490381 20811 net.cpp:620] [Forward] Layer inc3a/conv5_2/bn, param blob 2 data: 0
I1128 09:50:39.490669 20811 net.cpp:608] [Forward] Layer inc3a/conv5_2/scale, top blob inc3a/conv5_2 data: 1.48658e+12
I1128 09:50:39.490743 20811 net.cpp:620] [Forward] Layer inc3a/conv5_2/scale, param blob 0 data: 1
I1128 09:50:39.490777 20811 net.cpp:620] [Forward] Layer inc3a/conv5_2/scale, param blob 1 data: 0
I1128 09:50:39.490828 20811 net.cpp:608] [Forward] Layer inc3a/relu5_2, top blob inc3a/conv5_2 data: 1.82903e+11
I1128 09:50:39.491261 20811 net.cpp:608] [Forward] Layer inc3a/conv5_3, top blob inc3a/conv5_3 data: 7.66628e+11
I1128 09:50:39.491307 20811 net.cpp:620] [Forward] Layer inc3a/conv5_3, param blob 0 data: 0.0626571
I1128 09:50:39.491343 20811 net.cpp:620] [Forward] Layer inc3a/conv5_3, param blob 1 data: 0.60695
I1128 09:50:39.491828 20811 net.cpp:608] [Forward] Layer inc3a/conv5_3/bn, top blob inc3a/conv5_3 data: 2.42429e+14
I1128 09:50:39.491875 20811 net.cpp:620] [Forward] Layer inc3a/conv5_3/bn, param blob 0 data: 0
I1128 09:50:39.491911 20811 net.cpp:620] [Forward] Layer inc3a/conv5_3/bn, param blob 1 data: 0
I1128 09:50:39.491924 20811 net.cpp:620] [Forward] Layer inc3a/conv5_3/bn, param blob 2 data: 0
I1128 09:50:39.492168 20811 net.cpp:608] [Forward] Layer inc3a/conv5_3/scale, top blob inc3a/conv5_3 data: 2.42429e+14
I1128 09:50:39.492213 20811 net.cpp:620] [Forward] Layer inc3a/conv5_3/scale, param blob 0 data: 1
I1128 09:50:39.492249 20811 net.cpp:620] [Forward] Layer inc3a/conv5_3/scale, param blob 1 data: 0
I1128 09:50:39.492292 20811 net.cpp:608] [Forward] Layer inc3a/relu5_3, top blob inc3a/conv5_3 data: 1.12693e+14
I1128 09:50:39.492569 20811 net.cpp:608] [Forward] Layer inc3a, top blob inc3a data: 1.91116e+13
I1128 09:50:39.492635 20811 net.cpp:608] [Forward] Layer inc3a_inc3a_0_split, top blob inc3a_inc3a_0_split_0 data: 1.91116e+13
I1128 09:50:39.492688 20811 net.cpp:608] [Forward] Layer inc3a_inc3a_0_split, top blob inc3a_inc3a_0_split_1 data: 1.91116e+13
I1128 09:50:39.492743 20811 net.cpp:608] [Forward] Layer inc3a_inc3a_0_split, top blob inc3a_inc3a_0_split_2 data: 1.91116e+13
I1128 09:50:39.493124 20811 net.cpp:608] [Forward] Layer inc3b/conv1, top blob inc3b/conv1 data: 1.63891e+14
I1128 09:50:39.493171 20811 net.cpp:620] [Forward] Layer inc3b/conv1, param blob 0 data: 0.0742018
I1128 09:50:39.493208 20811 net.cpp:620] [Forward] Layer inc3b/conv1, param blob 1 data: 1.27397
I1128 09:50:39.493736 20811 net.cpp:608] [Forward] Layer inc3b/conv1/bn, top blob inc3b/conv1 data: 5.1827e+16
I1128 09:50:39.493782 20811 net.cpp:620] [Forward] Layer inc3b/conv1/bn, param blob 0 data: 0
I1128 09:50:39.493818 20811 net.cpp:620] [Forward] Layer inc3b/conv1/bn, param blob 1 data: 0
I1128 09:50:39.493831 20811 net.cpp:620] [Forward] Layer inc3b/conv1/bn, param blob 2 data: 0
I1128 09:50:39.494108 20811 net.cpp:608] [Forward] Layer inc3b/conv1/scale, top blob inc3b/conv1 data: 5.1827e+16
I1128 09:50:39.494182 20811 net.cpp:620] [Forward] Layer inc3b/conv1/scale, param blob 0 data: 1
I1128 09:50:39.494216 20811 net.cpp:620] [Forward] Layer inc3b/conv1/scale, param blob 1 data: 0
I1128 09:50:39.494264 20811 net.cpp:608] [Forward] Layer inc3b/relu1, top blob inc3b/conv1 data: 2.39785e+16
I1128 09:50:39.494468 20811 net.cpp:608] [Forward] Layer inc3b/conv3_1, top blob inc3b/conv3_1 data: 2.14699e+14
I1128 09:50:39.494511 20811 net.cpp:620] [Forward] Layer inc3b/conv3_1, param blob 0 data: 0.0790749
I1128 09:50:39.494546 20811 net.cpp:620] [Forward] Layer inc3b/conv3_1, param blob 1 data: 1.53656
I1128 09:50:39.494806 20811 net.cpp:608] [Forward] Layer inc3b/conv3_1/bn, top blob inc3b/conv3_1 data: 6.78939e+16
I1128 09:50:39.494845 20811 net.cpp:620] [Forward] Layer inc3b/conv3_1/bn, param blob 0 data: 0
I1128 09:50:39.494880 20811 net.cpp:620] [Forward] Layer inc3b/conv3_1/bn, param blob 1 data: 0
I1128 09:50:39.494892 20811 net.cpp:620] [Forward] Layer inc3b/conv3_1/bn, param blob 2 data: 0
I1128 09:50:39.495128 20811 net.cpp:608] [Forward] Layer inc3b/conv3_1/scale, top blob inc3b/conv3_1 data: 6.78939e+16
I1128 09:50:39.495174 20811 net.cpp:620] [Forward] Layer inc3b/conv3_1/scale, param blob 0 data: 1
I1128 09:50:39.495208 20811 net.cpp:620] [Forward] Layer inc3b/conv3_1/scale, param blob 1 data: 0
I1128 09:50:39.495250 20811 net.cpp:608] [Forward] Layer inc3b/relu3_1, top blob inc3b/conv3_1 data: 3.83192e+16
I1128 09:50:39.495718 20811 net.cpp:608] [Forward] Layer inc3b/conv3_2, top blob inc3b/conv3_2 data: 5.82418e+16
I1128 09:50:39.495765 20811 net.cpp:620] [Forward] Layer inc3b/conv3_2, param blob 0 data: 0.0763123
I1128 09:50:39.495802 20811 net.cpp:620] [Forward] Layer inc3b/conv3_2, param blob 1 data: 0.679592
I1128 09:50:39.496307 20811 net.cpp:608] [Forward] Layer inc3b/conv3_2/bn, top blob inc3b/conv3_2 data: 1.84177e+19
I1128 09:50:39.496353 20811 net.cpp:620] [Forward] Layer inc3b/conv3_2/bn, param blob 0 data: 0
I1128 09:50:39.496392 20811 net.cpp:620] [Forward] Layer inc3b/conv3_2/bn, param blob 1 data: 0
I1128 09:50:39.496403 20811 net.cpp:620] [Forward] Layer inc3b/conv3_2/bn, param blob 2 data: 0
I1128 09:50:39.496657 20811 net.cpp:608] [Forward] Layer inc3b/conv3_2/scale, top blob inc3b/conv3_2 data: 1.84177e+19
I1128 09:50:39.496703 20811 net.cpp:620] [Forward] Layer inc3b/conv3_2/scale, param blob 0 data: 1
I1128 09:50:39.496738 20811 net.cpp:620] [Forward] Layer inc3b/conv3_2/scale, param blob 1 data: 0
I1128 09:50:39.496785 20811 net.cpp:608] [Forward] Layer inc3b/relu3_2, top blob inc3b/conv3_2 data: 7.18377e+18
I1128 09:50:39.496997 20811 net.cpp:608] [Forward] Layer inc3b/conv5_1, top blob inc3b/conv5_1 data: 1.04543e+14
I1128 09:50:39.497040 20811 net.cpp:620] [Forward] Layer inc3b/conv5_1, param blob 0 data: 0.0664149
I1128 09:50:39.497076 20811 net.cpp:620] [Forward] Layer inc3b/conv5_1, param blob 1 data: 1.1648
I1128 09:50:39.497462 20811 net.cpp:608] [Forward] Layer inc3b/conv5_1/bn, top blob inc3b/conv5_1 data: 3.30594e+16
I1128 09:50:39.497509 20811 net.cpp:620] [Forward] Layer inc3b/conv5_1/bn, param blob 0 data: 0
I1128 09:50:39.497545 20811 net.cpp:620] [Forward] Layer inc3b/conv5_1/bn, param blob 1 data: 0
I1128 09:50:39.497556 20811 net.cpp:620] [Forward] Layer inc3b/conv5_1/bn, param blob 2 data: 0
I1128 09:50:39.497669 20811 net.cpp:608] [Forward] Layer inc3b/conv5_1/scale, top blob inc3b/conv5_1 data: 3.30594e+16
I1128 09:50:39.497709 20811 net.cpp:620] [Forward] Layer inc3b/conv5_1/scale, param blob 0 data: 1
I1128 09:50:39.497743 20811 net.cpp:620] [Forward] Layer inc3b/conv5_1/scale, param blob 1 data: 0
I1128 09:50:39.497786 20811 net.cpp:608] [Forward] Layer inc3b/relu5_1, top blob inc3b/conv5_1 data: 1.83535e+16
I1128 09:50:39.498244 20811 net.cpp:608] [Forward] Layer inc3b/conv5_2, top blob inc3b/conv5_2 data: 3.01564e+16
I1128 09:50:39.498292 20811 net.cpp:620] [Forward] Layer inc3b/conv5_2, param blob 0 data: 0.0859597
I1128 09:50:39.498327 20811 net.cpp:620] [Forward] Layer inc3b/conv5_2, param blob 1 data: 1.0229
I1128 09:50:39.498819 20811 net.cpp:608] [Forward] Layer inc3b/conv5_2/bn, top blob inc3b/conv5_2 data: 9.53629e+18
I1128 09:50:39.498865 20811 net.cpp:620] [Forward] Layer inc3b/conv5_2/bn, param blob 0 data: 0
I1128 09:50:39.498900 20811 net.cpp:620] [Forward] Layer inc3b/conv5_2/bn, param blob 1 data: 0
I1128 09:50:39.498914 20811 net.cpp:620] [Forward] Layer inc3b/conv5_2/bn, param blob 2 data: 0
I1128 09:50:39.499155 20811 net.cpp:608] [Forward] Layer inc3b/conv5_2/scale, top blob inc3b/conv5_2 data: 9.53629e+18
I1128 09:50:39.499198 20811 net.cpp:620] [Forward] Layer inc3b/conv5_2/scale, param blob 0 data: 1
I1128 09:50:39.499233 20811 net.cpp:620] [Forward] Layer inc3b/conv5_2/scale, param blob 1 data: 0
I1128 09:50:39.499276 20811 net.cpp:608] [Forward] Layer inc3b/relu5_2, top blob inc3b/conv5_2 data: 5.75048e+18
I1128 09:50:39.499785 20811 net.cpp:608] [Forward] Layer inc3b/conv5_3, top blob inc3b/conv5_3 data: 1.45809e+19
I1128 09:50:39.499832 20811 net.cpp:620] [Forward] Layer inc3b/conv5_3, param blob 0 data: 0.0530213
I1128 09:50:39.499867 20811 net.cpp:620] [Forward] Layer inc3b/conv5_3, param blob 1 data: 0.812041
I1128 09:50:39.500371 20811 net.cpp:608] [Forward] Layer inc3b/conv5_3/bn, top blob inc3b/conv5_3 data: 4.61089e+21
I1128 09:50:39.500418 20811 net.cpp:620] [Forward] Layer inc3b/conv5_3/bn, param blob 0 data: 0
I1128 09:50:39.500453 20811 net.cpp:620] [Forward] Layer inc3b/conv5_3/bn, param blob 1 data: 0
I1128 09:50:39.500465 20811 net.cpp:620] [Forward] Layer inc3b/conv5_3/bn, param blob 2 data: 0
I1128 09:50:39.500705 20811 net.cpp:608] [Forward] Layer inc3b/conv5_3/scale, top blob inc3b/conv5_3 data: 4.61089e+21
I1128 09:50:39.500749 20811 net.cpp:620] [Forward] Layer inc3b/conv5_3/scale, param blob 0 data: 1
I1128 09:50:39.500784 20811 net.cpp:620] [Forward] Layer inc3b/conv5_3/scale, param blob 1 data: 0
I1128 09:50:39.500828 20811 net.cpp:608] [Forward] Layer inc3b/relu5_3, top blob inc3b/conv5_3 data: 1.38851e+21
I1128 09:50:39.501124 20811 net.cpp:608] [Forward] Layer inc3b, top blob inc3b data: 2.33825e+20
I1128 09:50:39.501190 20811 net.cpp:608] [Forward] Layer inc3b_inc3b_0_split, top blob inc3b_inc3b_0_split_0 data: 2.33825e+20
I1128 09:50:39.501245 20811 net.cpp:608] [Forward] Layer inc3b_inc3b_0_split, top blob inc3b_inc3b_0_split_1 data: 2.33825e+20
I1128 09:50:39.501302 20811 net.cpp:608] [Forward] Layer inc3b_inc3b_0_split, top blob inc3b_inc3b_0_split_2 data: 2.33825e+20
I1128 09:50:39.501682 20811 net.cpp:608] [Forward] Layer inc3c/conv1, top blob inc3c/conv1 data: 1.02943e+21
I1128 09:50:39.501729 20811 net.cpp:620] [Forward] Layer inc3c/conv1, param blob 0 data: 0.0705286
I1128 09:50:39.501765 20811 net.cpp:620] [Forward] Layer inc3c/conv1, param blob 1 data: 0.92137
I1128 09:50:39.502323 20811 net.cpp:608] [Forward] Layer inc3c/conv1/bn, top blob inc3c/conv1 data: 3.25534e+23
I1128 09:50:39.502385 20811 net.cpp:620] [Forward] Layer inc3c/conv1/bn, param blob 0 data: 0
I1128 09:50:39.502420 20811 net.cpp:620] [Forward] Layer inc3c/conv1/bn, param blob 1 data: 0
I1128 09:50:39.502435 20811 net.cpp:620] [Forward] Layer inc3c/conv1/bn, param blob 2 data: 0
I1128 09:50:39.502715 20811 net.cpp:608] [Forward] Layer inc3c/conv1/scale, top blob inc3c/conv1 data: 3.25534e+23
I1128 09:50:39.502760 20811 net.cpp:620] [Forward] Layer inc3c/conv1/scale, param blob 0 data: 1
I1128 09:50:39.502796 20811 net.cpp:620] [Forward] Layer inc3c/conv1/scale, param blob 1 data: 0
I1128 09:50:39.502846 20811 net.cpp:608] [Forward] Layer inc3c/relu1, top blob inc3c/conv1 data: 1.26367e+23
I1128 09:50:39.503051 20811 net.cpp:608] [Forward] Layer inc3c/conv3_1, top blob inc3c/conv3_1 data: 1.01542e+21
I1128 09:50:39.503093 20811 net.cpp:620] [Forward] Layer inc3c/conv3_1, param blob 0 data: 0.071089
I1128 09:50:39.503128 20811 net.cpp:620] [Forward] Layer inc3c/conv3_1, param blob 1 data: 1.13364
I1128 09:50:39.503392 20811 net.cpp:608] [Forward] Layer inc3c/conv3_1/bn, top blob inc3c/conv3_1 data: 3.21104e+23
I1128 09:50:39.503434 20811 net.cpp:620] [Forward] Layer inc3c/conv3_1/bn, param blob 0 data: 0
I1128 09:50:39.503468 20811 net.cpp:620] [Forward] Layer inc3c/conv3_1/bn, param blob 1 data: 0
I1128 09:50:39.503481 20811 net.cpp:620] [Forward] Layer inc3c/conv3_1/bn, param blob 2 data: 0
I1128 09:50:39.503595 20811 net.cpp:608] [Forward] Layer inc3c/conv3_1/scale, top blob inc3c/conv3_1 data: 3.21104e+23
I1128 09:50:39.503635 20811 net.cpp:620] [Forward] Layer inc3c/conv3_1/scale, param blob 0 data: 1
I1128 09:50:39.503669 20811 net.cpp:620] [Forward] Layer inc3c/conv3_1/scale, param blob 1 data: 0
I1128 09:50:39.503711 20811 net.cpp:608] [Forward] Layer inc3c/relu3_1, top blob inc3c/conv3_1 data: 2.5443e+23
I1128 09:50:39.504195 20811 net.cpp:608] [Forward] Layer inc3c/conv3_2, top blob inc3c/conv3_2 data: 3.76861e+23
I1128 09:50:39.504243 20811 net.cpp:620] [Forward] Layer inc3c/conv3_2, param blob 0 data: 0.0923302
I1128 09:50:39.504279 20811 net.cpp:620] [Forward] Layer inc3c/conv3_2, param blob 1 data: 1.14581
I1128 09:50:39.504827 20811 net.cpp:608] [Forward] Layer inc3c/conv3_2/bn, top blob inc3c/conv3_2 data: 1.19174e+26
I1128 09:50:39.504875 20811 net.cpp:620] [Forward] Layer inc3c/conv3_2/bn, param blob 0 data: 0
I1128 09:50:39.504910 20811 net.cpp:620] [Forward] Layer inc3c/conv3_2/bn, param blob 1 data: 0
I1128 09:50:39.504923 20811 net.cpp:620] [Forward] Layer inc3c/conv3_2/bn, param blob 2 data: 0
I1128 09:50:39.505183 20811 net.cpp:608] [Forward] Layer inc3c/conv3_2/scale, top blob inc3c/conv3_2 data: 1.19174e+26
I1128 09:50:39.505228 20811 net.cpp:620] [Forward] Layer inc3c/conv3_2/scale, param blob 0 data: 1
I1128 09:50:39.505264 20811 net.cpp:620] [Forward] Layer inc3c/conv3_2/scale, param blob 1 data: 0
I1128 09:50:39.505316 20811 net.cpp:608] [Forward] Layer inc3c/relu3_2, top blob inc3c/conv3_2 data: 4.53115e+25
I1128 09:50:39.505525 20811 net.cpp:608] [Forward] Layer inc3c/conv5_1, top blob inc3c/conv5_1 data: 1.02652e+21
I1128 09:50:39.505568 20811 net.cpp:620] [Forward] Layer inc3c/conv5_1, param blob 0 data: 0.0611247
I1128 09:50:39.505604 20811 net.cpp:620] [Forward] Layer inc3c/conv5_1, param blob 1 data: 1.13423
I1128 09:50:39.505878 20811 net.cpp:608] [Forward] Layer inc3c/conv5_1/bn, top blob inc3c/conv5_1 data: 3.24615e+23
I1128 09:50:39.505920 20811 net.cpp:620] [Forward] Layer inc3c/conv5_1/bn, param blob 0 data: 0
I1128 09:50:39.505954 20811 net.cpp:620] [Forward] Layer inc3c/conv5_1/bn, param blob 1 data: 0
I1128 09:50:39.505966 20811 net.cpp:620] [Forward] Layer inc3c/conv5_1/bn, param blob 2 data: 0
I1128 09:50:39.506083 20811 net.cpp:608] [Forward] Layer inc3c/conv5_1/scale, top blob inc3c/conv5_1 data: 3.24615e+23
I1128 09:50:39.506124 20811 net.cpp:620] [Forward] Layer inc3c/conv5_1/scale, param blob 0 data: 1
I1128 09:50:39.506158 20811 net.cpp:620] [Forward] Layer inc3c/conv5_1/scale, param blob 1 data: 0
I1128 09:50:39.506199 20811 net.cpp:608] [Forward] Layer inc3c/relu5_1, top blob inc3c/conv5_1 data: 2.08847e+23
I1128 09:50:39.506670 20811 net.cpp:608] [Forward] Layer inc3c/conv5_2, top blob inc3c/conv5_2 data: 3.31494e+23
I1128 09:50:39.506717 20811 net.cpp:620] [Forward] Layer inc3c/conv5_2, param blob 0 data: 0.086303
I1128 09:50:39.506754 20811 net.cpp:620] [Forward] Layer inc3c/conv5_2, param blob 1 data: 0.590733
I1128 09:50:39.507264 20811 net.cpp:608] [Forward] Layer inc3c/conv5_2/bn, top blob inc3c/conv5_2 data: 1.04828e+26
I1128 09:50:39.507311 20811 net.cpp:620] [Forward] Layer inc3c/conv5_2/bn, param blob 0 data: 0
I1128 09:50:39.507347 20811 net.cpp:620] [Forward] Layer inc3c/conv5_2/bn, param blob 1 data: 0
I1128 09:50:39.507359 20811 net.cpp:620] [Forward] Layer inc3c/conv5_2/bn, param blob 2 data: 0
I1128 09:50:39.507603 20811 net.cpp:608] [Forward] Layer inc3c/conv5_2/scale, top blob inc3c/conv5_2 data: 1.04828e+26
I1128 09:50:39.507649 20811 net.cpp:620] [Forward] Layer inc3c/conv5_2/scale, param blob 0 data: 1
I1128 09:50:39.507684 20811 net.cpp:620] [Forward] Layer inc3c/conv5_2/scale, param blob 1 data: 0
I1128 09:50:39.507727 20811 net.cpp:608] [Forward] Layer inc3c/relu5_2, top blob inc3c/conv5_2 data: 1.65754e+25
I1128 09:50:39.508246 20811 net.cpp:608] [Forward] Layer inc3c/conv5_3, top blob inc3c/conv5_3 data: 4.54204e+25
I1128 09:50:39.508293 20811 net.cpp:620] [Forward] Layer inc3c/conv5_3, param blob 0 data: 0.0580071
I1128 09:50:39.508329 20811 net.cpp:620] [Forward] Layer inc3c/conv5_3, param blob 1 data: 0.639826
I1128 09:50:39.508842 20811 net.cpp:608] [Forward] Layer inc3c/conv5_3/bn, top blob inc3c/conv5_3 data: 1.43632e+28
I1128 09:50:39.508889 20811 net.cpp:620] [Forward] Layer inc3c/conv5_3/bn, param blob 0 data: 0
I1128 09:50:39.508924 20811 net.cpp:620] [Forward] Layer inc3c/conv5_3/bn, param blob 1 data: 0
I1128 09:50:39.508937 20811 net.cpp:620] [Forward] Layer inc3c/conv5_3/bn, param blob 2 data: 0
I1128 09:50:39.509176 20811 net.cpp:608] [Forward] Layer inc3c/conv5_3/scale, top blob inc3c/conv5_3 data: 1.43632e+28
I1128 09:50:39.509220 20811 net.cpp:620] [Forward] Layer inc3c/conv5_3/scale, param blob 0 data: 1
I1128 09:50:39.509255 20811 net.cpp:620] [Forward] Layer inc3c/conv5_3/scale, param blob 1 data: 0
I1128 09:50:39.509300 20811 net.cpp:608] [Forward] Layer inc3c/relu5_3, top blob inc3c/conv5_3 data: 6.84261e+27
I1128 09:50:39.509591 20811 net.cpp:608] [Forward] Layer inc3c, top blob inc3c data: 1.1556e+27
I1128 09:50:39.509654 20811 net.cpp:608] [Forward] Layer inc3c_inc3c_0_split, top blob inc3c_inc3c_0_split_0 data: 1.1556e+27
I1128 09:50:39.509708 20811 net.cpp:608] [Forward] Layer inc3c_inc3c_0_split, top blob inc3c_inc3c_0_split_1 data: 1.1556e+27
I1128 09:50:39.509763 20811 net.cpp:608] [Forward] Layer inc3c_inc3c_0_split, top blob inc3c_inc3c_0_split_2 data: 1.1556e+27
I1128 09:50:39.510236 20811 net.cpp:608] [Forward] Layer inc3d/conv1, top blob inc3d/conv1 data: 4.49719e+27
I1128 09:50:39.510283 20811 net.cpp:620] [Forward] Layer inc3d/conv1, param blob 0 data: 0.0753045
I1128 09:50:39.510318 20811 net.cpp:620] [Forward] Layer inc3d/conv1, param blob 1 data: 0.961067
I1128 09:50:39.510859 20811 net.cpp:608] [Forward] Layer inc3d/conv1/bn, top blob inc3d/conv1 data: 1.42214e+30
I1128 09:50:39.510905 20811 net.cpp:620] [Forward] Layer inc3d/conv1/bn, param blob 0 data: 0
I1128 09:50:39.510941 20811 net.cpp:620] [Forward] Layer inc3d/conv1/bn, param blob 1 data: 0
I1128 09:50:39.510953 20811 net.cpp:620] [Forward] Layer inc3d/conv1/bn, param blob 2 data: 0
I1128 09:50:39.511229 20811 net.cpp:608] [Forward] Layer inc3d/conv1/scale, top blob inc3d/conv1 data: 1.42214e+30
I1128 09:50:39.511296 20811 net.cpp:620] [Forward] Layer inc3d/conv1/scale, param blob 0 data: 1
I1128 09:50:39.511329 20811 net.cpp:620] [Forward] Layer inc3d/conv1/scale, param blob 1 data: 0
I1128 09:50:39.511382 20811 net.cpp:608] [Forward] Layer inc3d/relu1, top blob inc3d/conv1 data: 9.0921e+29
I1128 09:50:39.511591 20811 net.cpp:608] [Forward] Layer inc3d/conv3_1, top blob inc3d/conv3_1 data: 2.92356e+27
I1128 09:50:39.511634 20811 net.cpp:620] [Forward] Layer inc3d/conv3_1, param blob 0 data: 0.0799943
I1128 09:50:39.511669 20811 net.cpp:620] [Forward] Layer inc3d/conv3_1, param blob 1 data: 1.37066
I1128 09:50:39.512053 20811 net.cpp:608] [Forward] Layer inc3d/conv3_1/bn, top blob inc3d/conv3_1 data: 9.24511e+29
I1128 09:50:39.512099 20811 net.cpp:620] [Forward] Layer inc3d/conv3_1/bn, param blob 0 data: 0
I1128 09:50:39.512157 20811 net.cpp:620] [Forward] Layer inc3d/conv3_1/bn, param blob 1 data: 0
I1128 09:50:39.512177 20811 net.cpp:620] [Forward] Layer inc3d/conv3_1/bn, param blob 2 data: 0
I1128 09:50:39.512300 20811 net.cpp:608] [Forward] Layer inc3d/conv3_1/scale, top blob inc3d/conv3_1 data: 9.24511e+29
I1128 09:50:39.512339 20811 net.cpp:620] [Forward] Layer inc3d/conv3_1/scale, param blob 0 data: 1
I1128 09:50:39.512375 20811 net.cpp:620] [Forward] Layer inc3d/conv3_1/scale, param blob 1 data: 0
I1128 09:50:39.512418 20811 net.cpp:608] [Forward] Layer inc3d/relu3_1, top blob inc3d/conv3_1 data: 3.12071e+29
I1128 09:50:39.512902 20811 net.cpp:608] [Forward] Layer inc3d/conv3_2, top blob inc3d/conv3_2 data: 6.25826e+29
I1128 09:50:39.512962 20811 net.cpp:620] [Forward] Layer inc3d/conv3_2, param blob 0 data: 0.0937155
I1128 09:50:39.512996 20811 net.cpp:620] [Forward] Layer inc3d/conv3_2, param blob 1 data: 1.09642
I1128 09:50:39.513511 20811 net.cpp:608] [Forward] Layer inc3d/conv3_2/bn, top blob inc3d/conv3_2 data: 1.97904e+32
I1128 09:50:39.513567 20811 net.cpp:620] [Forward] Layer inc3d/conv3_2/bn, param blob 0 data: 0
I1128 09:50:39.513602 20811 net.cpp:620] [Forward] Layer inc3d/conv3_2/bn, param blob 1 data: 0
I1128 09:50:39.513615 20811 net.cpp:620] [Forward] Layer inc3d/conv3_2/bn, param blob 2 data: 0
I1128 09:50:39.513870 20811 net.cpp:608] [Forward] Layer inc3d/conv3_2/scale, top blob inc3d/conv3_2 data: 1.97904e+32
I1128 09:50:39.513916 20811 net.cpp:620] [Forward] Layer inc3d/conv3_2/scale, param blob 0 data: 1
I1128 09:50:39.513950 20811 net.cpp:620] [Forward] Layer inc3d/conv3_2/scale, param blob 1 data: 0
I1128 09:50:39.513995 20811 net.cpp:608] [Forward] Layer inc3d/relu3_2, top blob inc3d/conv3_2 data: 8.32386e+31
I1128 09:50:39.514199 20811 net.cpp:608] [Forward] Layer inc3d/conv5_1, top blob inc3d/conv5_1 data: 3.89908e+27
I1128 09:50:39.514242 20811 net.cpp:620] [Forward] Layer inc3d/conv5_1, param blob 0 data: 0.0578877
I1128 09:50:39.514276 20811 net.cpp:620] [Forward] Layer inc3d/conv5_1, param blob 1 data: 1.23464
I1128 09:50:39.514545 20811 net.cpp:608] [Forward] Layer inc3d/conv5_1/bn, top blob inc3d/conv5_1 data: 1.233e+30
I1128 09:50:39.514586 20811 net.cpp:620] [Forward] Layer inc3d/conv5_1/bn, param blob 0 data: 0
I1128 09:50:39.514621 20811 net.cpp:620] [Forward] Layer inc3d/conv5_1/bn, param blob 1 data: 0
I1128 09:50:39.514632 20811 net.cpp:620] [Forward] Layer inc3d/conv5_1/bn, param blob 2 data: 0
I1128 09:50:39.514750 20811 net.cpp:608] [Forward] Layer inc3d/conv5_1/scale, top blob inc3d/conv5_1 data: 1.233e+30
I1128 09:50:39.514789 20811 net.cpp:620] [Forward] Layer inc3d/conv5_1/scale, param blob 0 data: 1
I1128 09:50:39.514825 20811 net.cpp:620] [Forward] Layer inc3d/conv5_1/scale, param blob 1 data: 0
I1128 09:50:39.514869 20811 net.cpp:608] [Forward] Layer inc3d/relu5_1, top blob inc3d/conv5_1 data: 5.8319e+29
I1128 09:50:39.515324 20811 net.cpp:608] [Forward] Layer inc3d/conv5_2, top blob inc3d/conv5_2 data: 1.78984e+30
I1128 09:50:39.515370 20811 net.cpp:620] [Forward] Layer inc3d/conv5_2, param blob 0 data: 0.0884693
I1128 09:50:39.515406 20811 net.cpp:620] [Forward] Layer inc3d/conv5_2, param blob 1 data: 0.890799
I1128 09:50:39.515913 20811 net.cpp:608] [Forward] Layer inc3d/conv5_2/bn, top blob inc3d/conv5_2 data: 5.65996e+32
I1128 09:50:39.515959 20811 net.cpp:620] [Forward] Layer inc3d/conv5_2/bn, param blob 0 data: 0
I1128 09:50:39.515995 20811 net.cpp:620] [Forward] Layer inc3d/conv5_2/bn, param blob 1 data: 0
I1128 09:50:39.516007 20811 net.cpp:620] [Forward] Layer inc3d/conv5_2/bn, param blob 2 data: 0
I1128 09:50:39.516263 20811 net.cpp:608] [Forward] Layer inc3d/conv5_2/scale, top blob inc3d/conv5_2 data: 5.65996e+32
I1128 09:50:39.516310 20811 net.cpp:620] [Forward] Layer inc3d/conv5_2/scale, param blob 0 data: 1
I1128 09:50:39.516345 20811 net.cpp:620] [Forward] Layer inc3d/conv5_2/scale, param blob 1 data: 0
I1128 09:50:39.516387 20811 net.cpp:608] [Forward] Layer inc3d/relu5_2, top blob inc3d/conv5_2 data: 3.69757e+32
I1128 09:50:39.516911 20811 net.cpp:608] [Forward] Layer inc3d/conv5_3, top blob inc3d/conv5_3 data: 7.0097e+32
I1128 09:50:39.516958 20811 net.cpp:620] [Forward] Layer inc3d/conv5_3, param blob 0 data: 0.0551769
I1128 09:50:39.516994 20811 net.cpp:620] [Forward] Layer inc3d/conv5_3, param blob 1 data: 0.728467
I1128 09:50:39.517516 20811 net.cpp:608] [Forward] Layer inc3d/conv5_3/bn, top blob inc3d/conv5_3 data: inf
I1128 09:50:39.517560 20811 net.cpp:620] [Forward] Layer inc3d/conv5_3/bn, param blob 0 data: 0
I1128 09:50:39.517596 20811 net.cpp:620] [Forward] Layer inc3d/conv5_3/bn, param blob 1 data: 0
I1128 09:50:39.517607 20811 net.cpp:620] [Forward] Layer inc3d/conv5_3/bn, param blob 2 data: 0
I1128 09:50:39.517849 20811 net.cpp:608] [Forward] Layer inc3d/conv5_3/scale, top blob inc3d/conv5_3 data: inf
I1128 09:50:39.517891 20811 net.cpp:620] [Forward] Layer inc3d/conv5_3/scale, param blob 0 data: 1
I1128 09:50:39.517925 20811 net.cpp:620] [Forward] Layer inc3d/conv5_3/scale, param blob 1 data: 0
I1128 09:50:39.517967 20811 net.cpp:608] [Forward] Layer inc3d/relu5_3, top blob inc3d/conv5_3 data: inf
I1128 09:50:39.518239 20811 net.cpp:608] [Forward] Layer inc3d, top blob inc3d data: inf
I1128 09:50:39.518301 20811 net.cpp:608] [Forward] Layer inc3d_inc3d_0_split, top blob inc3d_inc3d_0_split_0 data: inf
I1128 09:50:39.518355 20811 net.cpp:608] [Forward] Layer inc3d_inc3d_0_split, top blob inc3d_inc3d_0_split_1 data: inf
I1128 09:50:39.518409 20811 net.cpp:608] [Forward] Layer inc3d_inc3d_0_split, top blob inc3d_inc3d_0_split_2 data: inf
I1128 09:50:39.518776 20811 net.cpp:608] [Forward] Layer inc3e/conv1, top blob inc3e/conv1 data: inf
I1128 09:50:39.518821 20811 net.cpp:620] [Forward] Layer inc3e/conv1, param blob 0 data: 0.0915854
I1128 09:50:39.518857 20811 net.cpp:620] [Forward] Layer inc3e/conv1, param blob 1 data: 0.677549
I1128 09:50:39.519408 20811 net.cpp:608] [Forward] Layer inc3e/conv1/bn, top blob inc3e/conv1 data: inf
I1128 09:50:39.519453 20811 net.cpp:620] [Forward] Layer inc3e/conv1/bn, param blob 0 data: 0
I1128 09:50:39.519487 20811 net.cpp:620] [Forward] Layer inc3e/conv1/bn, param blob 1 data: 0
I1128 09:50:39.519498 20811 net.cpp:620] [Forward] Layer inc3e/conv1/bn, param blob 2 data: 0
I1128 09:50:39.519773 20811 net.cpp:608] [Forward] Layer inc3e/conv1/scale, top blob inc3e/conv1 data: inf
I1128 09:50:39.519815 20811 net.cpp:620] [Forward] Layer inc3e/conv1/scale, param blob 0 data: 1
I1128 09:50:39.519850 20811 net.cpp:620] [Forward] Layer inc3e/conv1/scale, param blob 1 data: 0
I1128 09:50:39.519899 20811 net.cpp:608] [Forward] Layer inc3e/relu1, top blob inc3e/conv1 data: nan
I1128 09:50:39.520105 20811 net.cpp:608] [Forward] Layer inc3e/conv3_1, top blob inc3e/conv3_1 data: inf
I1128 09:50:39.520148 20811 net.cpp:620] [Forward] Layer inc3e/conv3_1, param blob 0 data: 0.0778675
I1128 09:50:39.520181 20811 net.cpp:620] [Forward] Layer inc3e/conv3_1, param blob 1 data: 1.07346
I1128 09:50:39.520457 20811 net.cpp:608] [Forward] Layer inc3e/conv3_1/bn, top blob inc3e/conv3_1 data: inf
..............................
..............................
I1128 09:50:41.903122 20811 net.cpp:636] [Backward] Layer inc3a/pool1, bottom blob conv3_relu3_0_split_0 diff: nan
I1128 09:50:41.903331 20811 net.cpp:636] [Backward] Layer conv3_relu3_0_split, bottom blob conv3 diff: nan
I1128 09:50:41.903434 20811 net.cpp:636] [Backward] Layer relu3, bottom blob conv3 diff: nan
I1128 09:50:41.903702 20811 net.cpp:636] [Backward] Layer conv3/scale, bottom blob conv3 diff: nan
I1128 09:50:41.903734 20811 net.cpp:647] [Backward] Layer conv3/scale, param blob 0 diff: nan
I1128 09:50:41.903765 20811 net.cpp:647] [Backward] Layer conv3/scale, param blob 1 diff: nan
I1128 09:50:41.903908 20811 net.cpp:636] [Backward] Layer conv3/bn, bottom blob conv3 diff: nan
I1128 09:50:41.905504 20811 net.cpp:636] [Backward] Layer conv3, bottom blob conv2 diff: nan
I1128 09:50:41.905541 20811 net.cpp:647] [Backward] Layer conv3, param blob 0 diff: nan
I1128 09:50:41.905571 20811 net.cpp:647] [Backward] Layer conv3, param blob 1 diff: nan
I1128 09:50:41.905740 20811 net.cpp:636] [Backward] Layer relu2, bottom blob conv2 diff: nan
I1128 09:50:41.906466 20811 net.cpp:636] [Backward] Layer conv2/scale, bottom blob conv2 diff: nan
I1128 09:50:41.906498 20811 net.cpp:647] [Backward] Layer conv2/scale, param blob 0 diff: nan
I1128 09:50:41.906527 20811 net.cpp:647] [Backward] Layer conv2/scale, param blob 1 diff: nan
I1128 09:50:41.906772 20811 net.cpp:636] [Backward] Layer conv2/bn, bottom blob conv2 diff: nan
I1128 09:50:41.909994 20811 net.cpp:636] [Backward] Layer conv2, bottom blob conv1 diff: nan
I1128 09:50:41.910030 20811 net.cpp:647] [Backward] Layer conv2, param blob 0 diff: nan
I1128 09:50:41.910060 20811 net.cpp:647] [Backward] Layer conv2, param blob 1 diff: nan
I1128 09:50:41.910457 20811 net.cpp:636] [Backward] Layer relu1, bottom blob conv1 diff: nan
I1128 09:50:41.913013 20811 net.cpp:636] [Backward] Layer conv1/scale, bottom blob conv1 diff: nan
I1128 09:50:41.913045 20811 net.cpp:647] [Backward] Layer conv1/scale, param blob 0 diff: nan
I1128 09:50:41.913074 20811 net.cpp:647] [Backward] Layer conv1/scale, param blob 1 diff: nan
I1128 09:50:41.913661 20811 net.cpp:636] [Backward] Layer conv1/bn, bottom blob conv1 diff: nan
I1128 09:50:41.915452 20811 net.cpp:647] [Backward] Layer conv1, param blob 0 diff: nan
I1128 09:50:41.915484 20811 net.cpp:647] [Backward] Layer conv1, param blob 1 diff: nan
E1128 09:50:41.947321 20811 net.cpp:736] [Backward] All net params (data, diff): L1 norm = (139471, nan); L2 norm = (144.612, nan)
I1128 09:50:41.947340 20811 solver.cpp:228] Iteration 0, loss = nan
I1128 09:50:41.947350 20811 solver.cpp:244] Train net output #0: cls_loss = 65.615 (* 1 = 65.615 loss)
I1128 09:50:41.947358 20811 solver.cpp:244] Train net output #1: loss_bbox = nan (* 1 = nan loss)
I1128 09:50:41.947365 20811 solver.cpp:244] Train net output #2: rpn_loss_bbox = nan (* 1 = nan loss)
I1128 09:50:41.947373 20811 solver.cpp:244] Train net output #3: rpn_loss_cls = 87.3365 (* 1 = 87.3365 loss)

Moreover, fine-tuning from original_690K.model in lite mode runs normally.
How can I get a normal (finite) loss when fine-tuning from test_690K.model?
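
One detail worth checking in the log above: every "/bn" layer's param blobs read 0, including param blob 2, which in Caffe's BatchNorm layer is the moving-average scale factor. With use_global_stats: true and a zero scale factor, BatchNorm normalizes with zero mean and zero variance, i.e. it multiplies activations by roughly 1/sqrt(eps) ≈ 316 per BN layer (compare inc3b/conv1: 1.64e+14 before BN, 5.18e+16 after), which compounds to inf within a few blocks. Below is a minimal pycaffe sketch to verify whether the BN statistics were actually loaded from the weights file; the file paths are placeholders.

import caffe

# Placeholder paths -- substitute your own prototxt and caffemodel.
net = caffe.Net('train.prototxt', 'test_690K.model', caffe.TRAIN)

for name, blobs in net.params.items():
    if name.endswith('/bn'):
        # blobs[0] = running mean, blobs[1] = running variance,
        # blobs[2] = moving-average scale factor
        print(name, 'scale factor =', float(blobs[2].data.ravel()[0]))

# A scale factor of 0 means the statistics were never loaded (e.g. a
# layer-name mismatch between the prototxt and the caffemodel), so
# BatchNorm divides by sqrt(0 + eps) and inflates activations ~316x per layer.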

How to pretrain an ImageNet network

Hi, I read in the paper that PVANET is pre-trained on ILSVRC2012, which is an image classification task. How do you pre-train without bbox annotations?

Do "C_Relu" really speed up the network?

I tested C_Relu (with a scale after the concat) in a network. The original conv layer is 528×7×7; I changed this layer to the C_Relu structure, which means the conv layer is now 264×7×7. But the training time increased unexpectedly (a sketch of the structure appears below).
I use CUDA 8.0 and cuDNN 5.1.
I am confused. Can you give me some advice?
Thank you very much.
@sanghoon
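
For reference, a minimal numpy sketch of the C_Relu idea (assuming "scale after concat" means a per-channel scale/shift, as in the Power + Concat + Scale blocks used in this repo's prototxts): the conv produces half the target channels, and a negated copy is concatenated before scaling and ReLU, so the conv FLOPs are halved while the activation width is preserved.

import numpy as np

def c_relu(conv_out, gamma, beta):
    # C.ReLU: concat(x, -x), per-channel scale/shift, then ReLU.
    # conv_out: (N, C, H, W) output of a conv with HALF the target channels.
    # gamma, beta: per-channel scale/shift of length 2*C (the Scale layer).
    x = np.concatenate([conv_out, -conv_out], axis=1)  # (N, 2C, H, W)
    x = x * gamma[None, :, None, None] + beta[None, :, None, None]
    return np.maximum(x, 0.0)

# Toy usage: 264 conv channels become 528 activation channels.
out = c_relu(np.random.randn(1, 264, 7, 7), np.ones(528), np.zeros(528))
print(out.shape)  # (1, 528, 7, 7)

Whether this is faster end-to-end depends on the halved convolution actually dominating the runtime; the extra Power/Concat/Scale layers add kernel-launch and memory-copy overhead, which can outweigh the saving on small layers.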

about cudnn

You said "Do NOT uncomment USE_CUDNN := 1 (for running PVANET, cuDNN is slower than Caffe native implementation)". I want to know: does the slowdown apply to both training and testing? I train my net without cuDNN and it is very slow.

ProposalLayer2 layer in latest release?

In the prototxt file, I see "ProposalLayer" and "ProposalLayer2". However, in "lib/rpn/proposal_layer.py" I only find a class definition for "ProposalLayer".
Is "ProposalLayer2" missing?

https://github.com/sanghoon/pva-faster-rcnn/blob/master/models/pvanet/pva9.1/faster_rcnn_train_test_21cls.pt

layer {
name: 'proposal'
type: 'Python'
bottom: 'rpn_cls_prob_reshape'
bottom: 'rpn_bbox_pred'
bottom: 'im_info'
bottom: 'gt_boxes'
top: 'rois'
top: 'labels'
top: 'bbox_targets'
top: 'bbox_inside_weights'
top: 'bbox_outside_weights'
include { phase: TRAIN }
python_param {
module: 'rpn.proposal_layer'
layer: 'ProposalLayer2'
param_str: "{'feat_stride': 16, 'num_classes': 21, 'ratios': [0.333, 0.5, 0.667, 1, 1.5, 2, 3], 'scales': [2, 3, 5, 9, 16, 32]}"
}
}
layer {
name: 'proposal'
type: 'Python'
bottom: 'rpn_cls_prob_reshape'
bottom: 'rpn_bbox_pred'
bottom: 'im_info'
top: 'rois'
top: 'scores'
include { phase: TEST }
python_param {
module: 'rpn.proposal_layer'
layer: 'ProposalLayer'
param_str: "{'feat_stride': 16, 'ratios': [0.333, 0.5, 0.667, 1, 1.5, 2, 3], 'scales': [2, 3, 5, 9, 16, 32]}"
}
}
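
Caffe resolves a Python layer at net-construction time by importing the "module" and looking up "layer" as an attribute, so a quick way to confirm whether the class is really absent from a checkout is the minimal sketch below (run it with lib/ on PYTHONPATH so "rpn" is importable):

import importlib

mod = importlib.import_module('rpn.proposal_layer')
print('ProposalLayer: ', hasattr(mod, 'ProposalLayer'))
print('ProposalLayer2:', hasattr(mod, 'ProposalLayer2'))
# If the second line prints False, the TRAIN phase of this prototxt cannot
# be instantiated with this version of lib/rpn/proposal_layer.py.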

Loss plateau detection

Hello!
Sorry if my question is off-topic.
I thought about many interesting things while reading your paper; one of them is how to train one's own architecture from scratch.
The idea of detecting a plateau is good, and you write that it gives a significant improvement. I analyzed my accuracy graphs and want to try to build such a "plateau detector".
I know that you use an in-house library, but maybe you can give some advice on how to create one in Caffe (a sketch follows the list below):

  1. What interval (period, number of iterations) is good to analyze?
  2. Should one analyze the training loss, or a fixed held-out validation set?
  3. Which step policy is preferable -- a 0.1 step, 0.2 step, or 0.5 step?
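
A minimal sketch of such a plateau detector that could be driven from a pycaffe solver loop (the window size, threshold, patience, and step factor are illustrative, not the values used in the paper):

from collections import deque

class PlateauDetector(object):
    # Drop the learning rate when the smoothed loss stops improving.
    def __init__(self, window=1000, threshold=0.01, patience=3, factor=0.1):
        self.losses = deque(maxlen=window)   # recent losses to average over
        self.best = float('inf')             # best smoothed loss so far
        self.stalls = 0                      # windows without improvement
        self.threshold = threshold           # minimum relative improvement
        self.patience = patience
        self.factor = factor                 # e.g. the 0.1 step

    def step(self, loss, lr):
        self.losses.append(loss)
        if len(self.losses) < self.losses.maxlen:
            return lr                        # not enough history yet
        avg = sum(self.losses) / len(self.losses)
        if avg < self.best * (1.0 - self.threshold):
            self.best, self.stalls = avg, 0  # still making progress
        else:
            self.stalls += 1
            if self.stalls >= self.patience:
                lr *= self.factor            # plateau detected: decay the LR
                self.best, self.stalls = avg, 0
        return lr

In a pycaffe loop one would call solver.step(1), feed the training loss into step(), and apply the returned LR by restarting from a snapshot with an updated base_lr, since Caffe's built-in step policies cannot be changed mid-run.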

train error: Maybe the train file has a bug

I0102 17:09:50.620123 24353 net.cpp:150] Setting up conv4_1/incep/poolproj/relu
I0102 17:09:50.620136 24353 net.cpp:157] Top shape: 1 128 37 62 (293632)
I0102 17:09:50.620141 24353 net.cpp:165] Memory required for data: 824862108
I0102 17:09:50.620146 24353 layer_factory.hpp:77] Creating layer conv4_1/incep
I0102 17:09:50.620153 24353 net.cpp:100] Creating Layer conv4_1/incep
I0102 17:09:50.620158 24353 net.cpp:434] conv4_1/incep <- conv4_1/incep/0
I0102 17:09:50.620164 24353 net.cpp:434] conv4_1/incep <- conv4_1/incep/1_0
I0102 17:09:50.620174 24353 net.cpp:434] conv4_1/incep <- conv4_1/incep/2_1
I0102 17:09:50.620179 24353 net.cpp:434] conv4_1/incep <- conv4_1/incep/poolproj
I0102 17:09:50.620185 24353 net.cpp:408] conv4_1/incep -> conv4_1/incep
F0102 17:09:50.620199 24353 concat_layer.cpp:42] Check failed: top_shape[j] == bottom[i]->shape(j) (38 vs. 37) All inputs must have the same shape, except at concat_axis.
*** Check failure stack trace: ***
Aborted (core dumped)
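
The 38-vs-37 mismatch is consistent with Caffe's rounding rules: Pooling rounds its output size up (ceil) while Convolution rounds down (floor). In conv4_1/incep, the stride-2 1x1 conv branches and the kernel-3/stride-2/pad-0 max-pool branch therefore disagree whenever the blob entering the block has an odd height or width. A small sketch of the arithmetic (h = 75 reproduces the failure above; the pad value in the last line is illustrative):

import math

def conv_out(h, k, p, s):   # Caffe Convolution: floor rounding
    return (h + 2 * p - k) // s + 1

def pool_out(h, k, p, s):   # Caffe Pooling: ceil rounding
    return int(math.ceil((h + 2.0 * p - k) / s)) + 1

h = 75                                # odd input height to conv4_1/incep
print(conv_out(h, k=1, p=0, s=2))     # 38 (1x1 stride-2 conv branches)
print(pool_out(h, k=3, p=0, s=2))     # 37 (3x3 stride-2 max-pool branch)
print(pool_out(h, k=3, p=1, s=2))     # 38 (pad: 1 restores agreement for odd h)

So one fix is to pad or rescale the input image so that every blob reaching an Inception block has even spatial dimensions; alternatively the pooling pad can be adjusted, though a fixed pad only repairs one parity.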

The training file is like this:

name: "PVANET"

################################################################################
# Input
################################################################################

layer {
name: 'input-data'
type: 'Python'
top: 'data'
top: 'im_info'
top: 'gt_boxes'
include { phase: TRAIN }
python_param {
module: 'roi_data_layer.layer'
layer: 'RoIDataLayer'
param_str: "'num_classes': 21"
}
}

layer {
name: "input-data"
type: "DummyData"
top: "data"
top: "im_info"
include { phase: TEST }
dummy_data_param {
shape { dim: 1 dim: 3 dim: 224 dim: 224 }
shape { dim: 1 dim: 3 }
}
}

################################################################################
# Convolution
################################################################################

layer {
name: "conv1_1/conv"
type: "Convolution"
bottom: "data"
top: "conv1_1/conv"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 16
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 3
pad_w: 3
kernel_h: 7
kernel_w: 7
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv1_1/bn"
type: "BatchNorm"
bottom: "conv1_1/conv"
top: "conv1_1/conv"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv1_1/neg"
type: "Power"
bottom: "conv1_1/conv"
top: "conv1_1/neg"
power_param {
power: 1
scale: -1.0
shift: 0
}
}
layer {
name: "conv1_1/concat"
type: "Concat"
bottom: "conv1_1/conv"
bottom: "conv1_1/neg"
top: "conv1_1"
}
layer {
name: "conv1_1/scale"
type: "Scale"
bottom: "conv1_1"
top: "conv1_1"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 2.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv1_1/relu"
type: "ReLU"
bottom: "conv1_1"
top: "conv1_1"
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1_1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
pad: 0
}
}
layer {
name: "conv2_1/1/conv"
type: "Convolution"
bottom: "pool1"
top: "conv2_1/1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 24
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_1/2/bn"
type: "BatchNorm"
bottom: "conv2_1/1"
top: "conv2_1/2/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv2_1/2/bn_scale"
type: "Scale"
bottom: "conv2_1/2/pre"
top: "conv2_1/2/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv2_1/2/relu"
type: "ReLU"
bottom: "conv2_1/2/pre"
top: "conv2_1/2/pre"
}
layer {
name: "conv2_1/2/conv"
type: "Convolution"
bottom: "conv2_1/2/pre"
top: "conv2_1/2"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 24
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_1/3/bn"
type: "BatchNorm"
bottom: "conv2_1/2"
top: "conv2_1/3/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv2_1/3/neg"
type: "Power"
bottom: "conv2_1/3/pre"
top: "conv2_1/3/neg"
power_param {
power: 1
scale: -1.0
shift: 0
}
}
layer {
name: "conv2_1/3/concat"
type: "Concat"
bottom: "conv2_1/3/pre"
bottom: "conv2_1/3/neg"
top: "conv2_1/3/preAct"
}
layer {
name: "conv2_1/3/scale"
type: "Scale"
bottom: "conv2_1/3/preAct"
top: "conv2_1/3/preAct"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 2.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv2_1/3/relu"
type: "ReLU"
bottom: "conv2_1/3/preAct"
top: "conv2_1/3/preAct"
}
layer {
name: "conv2_1/3/conv"
type: "Convolution"
bottom: "conv2_1/3/preAct"
top: "conv2_1/3"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 64
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_1/proj"
type: "Convolution"
bottom: "pool1"
top: "conv2_1/proj"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 64
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_1"
type: "Eltwise"
bottom: "conv2_1/3"
bottom: "conv2_1/proj"
top: "conv2_1"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv2_2/1/bn"
type: "BatchNorm"
bottom: "conv2_1"
top: "conv2_2/1/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv2_2/1/bn_scale"
type: "Scale"
bottom: "conv2_2/1/pre"
top: "conv2_2/1/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv2_2/1/relu"
type: "ReLU"
bottom: "conv2_2/1/pre"
top: "conv2_2/1/pre"
}
layer {
name: "conv2_2/1/conv"
type: "Convolution"
bottom: "conv2_2/1/pre"
top: "conv2_2/1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 24
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_2/2/bn"
type: "BatchNorm"
bottom: "conv2_2/1"
top: "conv2_2/2/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv2_2/2/bn_scale"
type: "Scale"
bottom: "conv2_2/2/pre"
top: "conv2_2/2/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv2_2/2/relu"
type: "ReLU"
bottom: "conv2_2/2/pre"
top: "conv2_2/2/pre"
}
layer {
name: "conv2_2/2/conv"
type: "Convolution"
bottom: "conv2_2/2/pre"
top: "conv2_2/2"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 24
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_2/3/bn"
type: "BatchNorm"
bottom: "conv2_2/2"
top: "conv2_2/3/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv2_2/3/neg"
type: "Power"
bottom: "conv2_2/3/pre"
top: "conv2_2/3/neg"
power_param {
power: 1
scale: -1.0
shift: 0
}
}
layer {
name: "conv2_2/3/concat"
type: "Concat"
bottom: "conv2_2/3/pre"
bottom: "conv2_2/3/neg"
top: "conv2_2/3/preAct"
}
layer {
name: "conv2_2/3/scale"
type: "Scale"
bottom: "conv2_2/3/preAct"
top: "conv2_2/3/preAct"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 2.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv2_2/3/relu"
type: "ReLU"
bottom: "conv2_2/3/preAct"
top: "conv2_2/3/preAct"
}
layer {
name: "conv2_2/3/conv"
type: "Convolution"
bottom: "conv2_2/3/preAct"
top: "conv2_2/3"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 64
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_2/input"
type: "Power"
bottom: "conv2_1"
top: "conv2_2/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv2_2"
type: "Eltwise"
bottom: "conv2_2/3"
bottom: "conv2_2/input"
top: "conv2_2"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv2_3/1/bn"
type: "BatchNorm"
bottom: "conv2_2"
top: "conv2_3/1/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv2_3/1/bn_scale"
type: "Scale"
bottom: "conv2_3/1/pre"
top: "conv2_3/1/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv2_3/1/relu"
type: "ReLU"
bottom: "conv2_3/1/pre"
top: "conv2_3/1/pre"
}
layer {
name: "conv2_3/1/conv"
type: "Convolution"
bottom: "conv2_3/1/pre"
top: "conv2_3/1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 24
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_3/2/bn"
type: "BatchNorm"
bottom: "conv2_3/1"
top: "conv2_3/2/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv2_3/2/bn_scale"
type: "Scale"
bottom: "conv2_3/2/pre"
top: "conv2_3/2/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv2_3/2/relu"
type: "ReLU"
bottom: "conv2_3/2/pre"
top: "conv2_3/2/pre"
}
layer {
name: "conv2_3/2/conv"
type: "Convolution"
bottom: "conv2_3/2/pre"
top: "conv2_3/2"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 24
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_3/3/bn"
type: "BatchNorm"
bottom: "conv2_3/2"
top: "conv2_3/3/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv2_3/3/neg"
type: "Power"
bottom: "conv2_3/3/pre"
top: "conv2_3/3/neg"
power_param {
power: 1
scale: -1.0
shift: 0
}
}
layer {
name: "conv2_3/3/concat"
type: "Concat"
bottom: "conv2_3/3/pre"
bottom: "conv2_3/3/neg"
top: "conv2_3/3/preAct"
}
layer {
name: "conv2_3/3/scale"
type: "Scale"
bottom: "conv2_3/3/preAct"
top: "conv2_3/3/preAct"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 2.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv2_3/3/relu"
type: "ReLU"
bottom: "conv2_3/3/preAct"
top: "conv2_3/3/preAct"
}
layer {
name: "conv2_3/3/conv"
type: "Convolution"
bottom: "conv2_3/3/preAct"
top: "conv2_3/3"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 64
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv2_3/input"
type: "Power"
bottom: "conv2_2"
top: "conv2_3/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv2_3"
type: "Eltwise"
bottom: "conv2_3/3"
bottom: "conv2_3/input"
top: "conv2_3"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv3_1/1/bn"
type: "BatchNorm"
bottom: "conv2_3"
top: "conv3_1/1/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_1/1/bn_scale"
type: "Scale"
bottom: "conv3_1/1/pre"
top: "conv3_1/1/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_1/1/relu"
type: "ReLU"
bottom: "conv3_1/1/pre"
top: "conv3_1/1/pre"
}
layer {
name: "conv3_1/1/conv"
type: "Convolution"
bottom: "conv3_1/1/pre"
top: "conv3_1/1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 48
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv3_1/2/bn"
type: "BatchNorm"
bottom: "conv3_1/1"
top: "conv3_1/2/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_1/2/bn_scale"
type: "Scale"
bottom: "conv3_1/2/pre"
top: "conv3_1/2/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_1/2/relu"
type: "ReLU"
bottom: "conv3_1/2/pre"
top: "conv3_1/2/pre"
}
layer {
name: "conv3_1/2/conv"
type: "Convolution"
bottom: "conv3_1/2/pre"
top: "conv3_1/2"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 48
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_1/3/bn"
type: "BatchNorm"
bottom: "conv3_1/2"
top: "conv3_1/3/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_1/3/neg"
type: "Power"
bottom: "conv3_1/3/pre"
top: "conv3_1/3/neg"
power_param {
power: 1
scale: -1.0
shift: 0
}
}
layer {
name: "conv3_1/3/concat"
type: "Concat"
bottom: "conv3_1/3/pre"
bottom: "conv3_1/3/neg"
top: "conv3_1/3/preAct"
}
layer {
name: "conv3_1/3/scale"
type: "Scale"
bottom: "conv3_1/3/preAct"
top: "conv3_1/3/preAct"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 2.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_1/3/relu"
type: "ReLU"
bottom: "conv3_1/3/preAct"
top: "conv3_1/3/preAct"
}
layer {
name: "conv3_1/3/conv"
type: "Convolution"
bottom: "conv3_1/3/preAct"
top: "conv3_1/3"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 128
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_1/proj"
type: "Convolution"
bottom: "conv3_1/1/pre"
top: "conv3_1/proj"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 128
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv3_1"
type: "Eltwise"
bottom: "conv3_1/3"
bottom: "conv3_1/proj"
top: "conv3_1"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv3_2/1/bn"
type: "BatchNorm"
bottom: "conv3_1"
top: "conv3_2/1/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_2/1/bn_scale"
type: "Scale"
bottom: "conv3_2/1/pre"
top: "conv3_2/1/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_2/1/relu"
type: "ReLU"
bottom: "conv3_2/1/pre"
top: "conv3_2/1/pre"
}
layer {
name: "conv3_2/1/conv"
type: "Convolution"
bottom: "conv3_2/1/pre"
top: "conv3_2/1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 48
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_2/2/bn"
type: "BatchNorm"
bottom: "conv3_2/1"
top: "conv3_2/2/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_2/2/bn_scale"
type: "Scale"
bottom: "conv3_2/2/pre"
top: "conv3_2/2/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_2/2/relu"
type: "ReLU"
bottom: "conv3_2/2/pre"
top: "conv3_2/2/pre"
}
layer {
name: "conv3_2/2/conv"
type: "Convolution"
bottom: "conv3_2/2/pre"
top: "conv3_2/2"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 48
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_2/3/bn"
type: "BatchNorm"
bottom: "conv3_2/2"
top: "conv3_2/3/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_2/3/neg"
type: "Power"
bottom: "conv3_2/3/pre"
top: "conv3_2/3/neg"
power_param {
power: 1
scale: -1.0
shift: 0
}
}
layer {
name: "conv3_2/3/concat"
type: "Concat"
bottom: "conv3_2/3/pre"
bottom: "conv3_2/3/neg"
top: "conv3_2/3/preAct"
}
layer {
name: "conv3_2/3/scale"
type: "Scale"
bottom: "conv3_2/3/preAct"
top: "conv3_2/3/preAct"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 2.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_2/3/relu"
type: "ReLU"
bottom: "conv3_2/3/preAct"
top: "conv3_2/3/preAct"
}
layer {
name: "conv3_2/3/conv"
type: "Convolution"
bottom: "conv3_2/3/preAct"
top: "conv3_2/3"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 128
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_2/input"
type: "Power"
bottom: "conv3_1"
top: "conv3_2/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv3_2"
type: "Eltwise"
bottom: "conv3_2/3"
bottom: "conv3_2/input"
top: "conv3_2"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv3_3/1/bn"
type: "BatchNorm"
bottom: "conv3_2"
top: "conv3_3/1/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_3/1/bn_scale"
type: "Scale"
bottom: "conv3_3/1/pre"
top: "conv3_3/1/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_3/1/relu"
type: "ReLU"
bottom: "conv3_3/1/pre"
top: "conv3_3/1/pre"
}
layer {
name: "conv3_3/1/conv"
type: "Convolution"
bottom: "conv3_3/1/pre"
top: "conv3_3/1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 48
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_3/2/bn"
type: "BatchNorm"
bottom: "conv3_3/1"
top: "conv3_3/2/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_3/2/bn_scale"
type: "Scale"
bottom: "conv3_3/2/pre"
top: "conv3_3/2/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_3/2/relu"
type: "ReLU"
bottom: "conv3_3/2/pre"
top: "conv3_3/2/pre"
}
layer {
name: "conv3_3/2/conv"
type: "Convolution"
bottom: "conv3_3/2/pre"
top: "conv3_3/2"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 48
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_3/3/bn"
type: "BatchNorm"
bottom: "conv3_3/2"
top: "conv3_3/3/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_3/3/neg"
type: "Power"
bottom: "conv3_3/3/pre"
top: "conv3_3/3/neg"
power_param {
power: 1
scale: -1.0
shift: 0
}
}
layer {
name: "conv3_3/3/concat"
type: "Concat"
bottom: "conv3_3/3/pre"
bottom: "conv3_3/3/neg"
top: "conv3_3/3/preAct"
}
layer {
name: "conv3_3/3/scale"
type: "Scale"
bottom: "conv3_3/3/preAct"
top: "conv3_3/3/preAct"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 2.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_3/3/relu"
type: "ReLU"
bottom: "conv3_3/3/preAct"
top: "conv3_3/3/preAct"
}
layer {
name: "conv3_3/3/conv"
type: "Convolution"
bottom: "conv3_3/3/preAct"
top: "conv3_3/3"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 128
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_3/input"
type: "Power"
bottom: "conv3_2"
top: "conv3_3/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv3_3"
type: "Eltwise"
bottom: "conv3_3/3"
bottom: "conv3_3/input"
top: "conv3_3"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv3_4/1/bn"
type: "BatchNorm"
bottom: "conv3_3"
top: "conv3_4/1/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_4/1/bn_scale"
type: "Scale"
bottom: "conv3_4/1/pre"
top: "conv3_4/1/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_4/1/relu"
type: "ReLU"
bottom: "conv3_4/1/pre"
top: "conv3_4/1/pre"
}
layer {
name: "conv3_4/1/conv"
type: "Convolution"
bottom: "conv3_4/1/pre"
top: "conv3_4/1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 48
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_4/2/bn"
type: "BatchNorm"
bottom: "conv3_4/1"
top: "conv3_4/2/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_4/2/bn_scale"
type: "Scale"
bottom: "conv3_4/2/pre"
top: "conv3_4/2/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_4/2/relu"
type: "ReLU"
bottom: "conv3_4/2/pre"
top: "conv3_4/2/pre"
}
layer {
name: "conv3_4/2/conv"
type: "Convolution"
bottom: "conv3_4/2/pre"
top: "conv3_4/2"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 48
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_4/3/bn"
type: "BatchNorm"
bottom: "conv3_4/2"
top: "conv3_4/3/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv3_4/3/neg"
type: "Power"
bottom: "conv3_4/3/pre"
top: "conv3_4/3/neg"
power_param {
power: 1
scale: -1.0
shift: 0
}
}
layer {
name: "conv3_4/3/concat"
type: "Concat"
bottom: "conv3_4/3/pre"
bottom: "conv3_4/3/neg"
top: "conv3_4/3/preAct"
}
layer {
name: "conv3_4/3/scale"
type: "Scale"
bottom: "conv3_4/3/preAct"
top: "conv3_4/3/preAct"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 2.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv3_4/3/relu"
type: "ReLU"
bottom: "conv3_4/3/preAct"
top: "conv3_4/3/preAct"
}
layer {
name: "conv3_4/3/conv"
type: "Convolution"
bottom: "conv3_4/3/preAct"
top: "conv3_4/3"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 128
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv3_4/input"
type: "Power"
bottom: "conv3_3"
top: "conv3_4/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv3_4"
type: "Eltwise"
bottom: "conv3_4/3"
bottom: "conv3_4/input"
top: "conv3_4"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv4_1/incep/bn"
type: "BatchNorm"
bottom: "conv3_4"
top: "conv4_1/incep/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_1/incep/bn_scale"
type: "Scale"
bottom: "conv4_1/incep/pre"
top: "conv4_1/incep/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_1/incep/relu"
type: "ReLU"
bottom: "conv4_1/incep/pre"
top: "conv4_1/incep/pre"
}
layer {
name: "conv4_1/incep/0/conv"
type: "Convolution"
bottom: "conv4_1/incep/pre"
top: "conv4_1/incep/0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv4_1/incep/0/bn"
type: "BatchNorm"
bottom: "conv4_1/incep/0"
top: "conv4_1/incep/0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_1/incep/0/bn_scale"
type: "Scale"
bottom: "conv4_1/incep/0"
top: "conv4_1/incep/0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_1/incep/0/relu"
type: "ReLU"
bottom: "conv4_1/incep/0"
top: "conv4_1/incep/0"
}
layer {
name: "conv4_1/incep/1_reduce/conv"
type: "Convolution"
bottom: "conv4_1/incep/pre"
top: "conv4_1/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 48
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv4_1/incep/1_reduce/bn"
type: "BatchNorm"
bottom: "conv4_1/incep/1_reduce"
top: "conv4_1/incep/1_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_1/incep/1_reduce/bn_scale"
type: "Scale"
bottom: "conv4_1/incep/1_reduce"
top: "conv4_1/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_1/incep/1_reduce/relu"
type: "ReLU"
bottom: "conv4_1/incep/1_reduce"
top: "conv4_1/incep/1_reduce"
}
layer {
name: "conv4_1/incep/1_0/conv"
type: "Convolution"
bottom: "conv4_1/incep/1_reduce"
top: "conv4_1/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 128
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_1/incep/1_0/bn"
type: "BatchNorm"
bottom: "conv4_1/incep/1_0"
top: "conv4_1/incep/1_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_1/incep/1_0/bn_scale"
type: "Scale"
bottom: "conv4_1/incep/1_0"
top: "conv4_1/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_1/incep/1_0/relu"
type: "ReLU"
bottom: "conv4_1/incep/1_0"
top: "conv4_1/incep/1_0"
}
layer {
name: "conv4_1/incep/2_reduce/conv"
type: "Convolution"
bottom: "conv4_1/incep/pre"
top: "conv4_1/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 24
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv4_1/incep/2_reduce/bn"
type: "BatchNorm"
bottom: "conv4_1/incep/2_reduce"
top: "conv4_1/incep/2_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_1/incep/2_reduce/bn_scale"
type: "Scale"
bottom: "conv4_1/incep/2_reduce"
top: "conv4_1/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_1/incep/2_reduce/relu"
type: "ReLU"
bottom: "conv4_1/incep/2_reduce"
top: "conv4_1/incep/2_reduce"
}
layer {
name: "conv4_1/incep/2_0/conv"
type: "Convolution"
bottom: "conv4_1/incep/2_reduce"
top: "conv4_1/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 48
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_1/incep/2_0/bn"
type: "BatchNorm"
bottom: "conv4_1/incep/2_0"
top: "conv4_1/incep/2_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_1/incep/2_0/bn_scale"
type: "Scale"
bottom: "conv4_1/incep/2_0"
top: "conv4_1/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_1/incep/2_0/relu"
type: "ReLU"
bottom: "conv4_1/incep/2_0"
top: "conv4_1/incep/2_0"
}
layer {
name: "conv4_1/incep/2_1/conv"
type: "Convolution"
bottom: "conv4_1/incep/2_0"
top: "conv4_1/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 48
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_1/incep/2_1/bn"
type: "BatchNorm"
bottom: "conv4_1/incep/2_1"
top: "conv4_1/incep/2_1"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_1/incep/2_1/bn_scale"
type: "Scale"
bottom: "conv4_1/incep/2_1"
top: "conv4_1/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_1/incep/2_1/relu"
type: "ReLU"
bottom: "conv4_1/incep/2_1"
top: "conv4_1/incep/2_1"
}
layer {
name: "conv4_1/incep/pool"
type: "Pooling"
bottom: "conv4_1/incep/pre"
top: "conv4_1/incep/pool"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
pad: 0
}
}
layer {
name: "conv4_1/incep/poolproj/conv"
type: "Convolution"
bottom: "conv4_1/incep/pool"
top: "conv4_1/incep/poolproj"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 128
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_1/incep/poolproj/bn"
type: "BatchNorm"
bottom: "conv4_1/incep/poolproj"
top: "conv4_1/incep/poolproj"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_1/incep/poolproj/bn_scale"
type: "Scale"
bottom: "conv4_1/incep/poolproj"
top: "conv4_1/incep/poolproj"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_1/incep/poolproj/relu"
type: "ReLU"
bottom: "conv4_1/incep/poolproj"
top: "conv4_1/incep/poolproj"
}
layer {
name: "conv4_1/incep"
type: "Concat"
bottom: "conv4_1/incep/0"
bottom: "conv4_1/incep/1_0"
bottom: "conv4_1/incep/2_1"
bottom: "conv4_1/incep/poolproj"
top: "conv4_1/incep"
}
layer {
name: "conv4_1/out/conv"
type: "Convolution"
bottom: "conv4_1/incep"
top: "conv4_1/out"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 256
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_1/proj"
type: "Convolution"
bottom: "conv3_4"
top: "conv4_1/proj"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 256
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv4_1"
type: "Eltwise"
bottom: "conv4_1/out"
bottom: "conv4_1/proj"
top: "conv4_1"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
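# conv4_1 above is a pre-activated residual Inception block: a shared
# BN/Scale/ReLU feeds parallel 1x1, 1x1->3x3, and 1x1->3x3->3x3 branches
# (plus a max-pool branch in this stride-2 block), whose outputs are
# concatenated and fused by a 1x1 "out" convolution before the elementwise
# sum. Because conv4_1 downsamples, its shortcut is a 1x1 stride-2 projection
# of conv3_4; the stride-1 blocks that follow (conv4_2..conv4_4) reuse the
# same layout with an identity shortcut.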
layer {
name: "conv4_2/incep/bn"
type: "BatchNorm"
bottom: "conv4_1"
top: "conv4_2/incep/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_2/incep/bn_scale"
type: "Scale"
bottom: "conv4_2/incep/pre"
top: "conv4_2/incep/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_2/incep/relu"
type: "ReLU"
bottom: "conv4_2/incep/pre"
top: "conv4_2/incep/pre"
}
layer {
name: "conv4_2/incep/0/conv"
type: "Convolution"
bottom: "conv4_2/incep/pre"
top: "conv4_2/incep/0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_2/incep/0/bn"
type: "BatchNorm"
bottom: "conv4_2/incep/0"
top: "conv4_2/incep/0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_2/incep/0/bn_scale"
type: "Scale"
bottom: "conv4_2/incep/0"
top: "conv4_2/incep/0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_2/incep/0/relu"
type: "ReLU"
bottom: "conv4_2/incep/0"
top: "conv4_2/incep/0"
}
layer {
name: "conv4_2/incep/1_reduce/conv"
type: "Convolution"
bottom: "conv4_2/incep/pre"
top: "conv4_2/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_2/incep/1_reduce/bn"
type: "BatchNorm"
bottom: "conv4_2/incep/1_reduce"
top: "conv4_2/incep/1_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_2/incep/1_reduce/bn_scale"
type: "Scale"
bottom: "conv4_2/incep/1_reduce"
top: "conv4_2/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_2/incep/1_reduce/relu"
type: "ReLU"
bottom: "conv4_2/incep/1_reduce"
top: "conv4_2/incep/1_reduce"
}
layer {
name: "conv4_2/incep/1_0/conv"
type: "Convolution"
bottom: "conv4_2/incep/1_reduce"
top: "conv4_2/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 128
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_2/incep/1_0/bn"
type: "BatchNorm"
bottom: "conv4_2/incep/1_0"
top: "conv4_2/incep/1_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_2/incep/1_0/bn_scale"
type: "Scale"
bottom: "conv4_2/incep/1_0"
top: "conv4_2/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_2/incep/1_0/relu"
type: "ReLU"
bottom: "conv4_2/incep/1_0"
top: "conv4_2/incep/1_0"
}
layer {
name: "conv4_2/incep/2_reduce/conv"
type: "Convolution"
bottom: "conv4_2/incep/pre"
top: "conv4_2/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 24
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_2/incep/2_reduce/bn"
type: "BatchNorm"
bottom: "conv4_2/incep/2_reduce"
top: "conv4_2/incep/2_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_2/incep/2_reduce/bn_scale"
type: "Scale"
bottom: "conv4_2/incep/2_reduce"
top: "conv4_2/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_2/incep/2_reduce/relu"
type: "ReLU"
bottom: "conv4_2/incep/2_reduce"
top: "conv4_2/incep/2_reduce"
}
layer {
name: "conv4_2/incep/2_0/conv"
type: "Convolution"
bottom: "conv4_2/incep/2_reduce"
top: "conv4_2/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 48
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_2/incep/2_0/bn"
type: "BatchNorm"
bottom: "conv4_2/incep/2_0"
top: "conv4_2/incep/2_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_2/incep/2_0/bn_scale"
type: "Scale"
bottom: "conv4_2/incep/2_0"
top: "conv4_2/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_2/incep/2_0/relu"
type: "ReLU"
bottom: "conv4_2/incep/2_0"
top: "conv4_2/incep/2_0"
}
layer {
name: "conv4_2/incep/2_1/conv"
type: "Convolution"
bottom: "conv4_2/incep/2_0"
top: "conv4_2/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 48
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_2/incep/2_1/bn"
type: "BatchNorm"
bottom: "conv4_2/incep/2_1"
top: "conv4_2/incep/2_1"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_2/incep/2_1/bn_scale"
type: "Scale"
bottom: "conv4_2/incep/2_1"
top: "conv4_2/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_2/incep/2_1/relu"
type: "ReLU"
bottom: "conv4_2/incep/2_1"
top: "conv4_2/incep/2_1"
}
layer {
name: "conv4_2/incep"
type: "Concat"
bottom: "conv4_2/incep/0"
bottom: "conv4_2/incep/1_0"
bottom: "conv4_2/incep/2_1"
top: "conv4_2/incep"
}
layer {
name: "conv4_2/out/conv"
type: "Convolution"
bottom: "conv4_2/incep"
top: "conv4_2/out"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 256
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
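# The Power layer below is a pure identity mapping: Caffe computes
# y = (shift + scale * x)^power, so power=1, scale=1, shift=0 yields y = x.
# It merely re-exposes conv4_1 under a separate top so the Eltwise SUM has a
# distinct bottom blob; no projection is needed since the shapes already
# match.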
layer {
name: "conv4_2/input"
type: "Power"
bottom: "conv4_1"
top: "conv4_2/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv4_2"
type: "Eltwise"
bottom: "conv4_2/out"
bottom: "conv4_2/input"
top: "conv4_2"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv4_3/incep/bn"
type: "BatchNorm"
bottom: "conv4_2"
top: "conv4_3/incep/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_3/incep/bn_scale"
type: "Scale"
bottom: "conv4_3/incep/pre"
top: "conv4_3/incep/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_3/incep/relu"
type: "ReLU"
bottom: "conv4_3/incep/pre"
top: "conv4_3/incep/pre"
}
layer {
name: "conv4_3/incep/0/conv"
type: "Convolution"
bottom: "conv4_3/incep/pre"
top: "conv4_3/incep/0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_3/incep/0/bn"
type: "BatchNorm"
bottom: "conv4_3/incep/0"
top: "conv4_3/incep/0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_3/incep/0/bn_scale"
type: "Scale"
bottom: "conv4_3/incep/0"
top: "conv4_3/incep/0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_3/incep/0/relu"
type: "ReLU"
bottom: "conv4_3/incep/0"
top: "conv4_3/incep/0"
}
layer {
name: "conv4_3/incep/1_reduce/conv"
type: "Convolution"
bottom: "conv4_3/incep/pre"
top: "conv4_3/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_3/incep/1_reduce/bn"
type: "BatchNorm"
bottom: "conv4_3/incep/1_reduce"
top: "conv4_3/incep/1_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_3/incep/1_reduce/bn_scale"
type: "Scale"
bottom: "conv4_3/incep/1_reduce"
top: "conv4_3/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_3/incep/1_reduce/relu"
type: "ReLU"
bottom: "conv4_3/incep/1_reduce"
top: "conv4_3/incep/1_reduce"
}
layer {
name: "conv4_3/incep/1_0/conv"
type: "Convolution"
bottom: "conv4_3/incep/1_reduce"
top: "conv4_3/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 128
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_3/incep/1_0/bn"
type: "BatchNorm"
bottom: "conv4_3/incep/1_0"
top: "conv4_3/incep/1_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_3/incep/1_0/bn_scale"
type: "Scale"
bottom: "conv4_3/incep/1_0"
top: "conv4_3/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_3/incep/1_0/relu"
type: "ReLU"
bottom: "conv4_3/incep/1_0"
top: "conv4_3/incep/1_0"
}
layer {
name: "conv4_3/incep/2_reduce/conv"
type: "Convolution"
bottom: "conv4_3/incep/pre"
top: "conv4_3/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 24
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_3/incep/2_reduce/bn"
type: "BatchNorm"
bottom: "conv4_3/incep/2_reduce"
top: "conv4_3/incep/2_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_3/incep/2_reduce/bn_scale"
type: "Scale"
bottom: "conv4_3/incep/2_reduce"
top: "conv4_3/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_3/incep/2_reduce/relu"
type: "ReLU"
bottom: "conv4_3/incep/2_reduce"
top: "conv4_3/incep/2_reduce"
}
layer {
name: "conv4_3/incep/2_0/conv"
type: "Convolution"
bottom: "conv4_3/incep/2_reduce"
top: "conv4_3/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 48
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_3/incep/2_0/bn"
type: "BatchNorm"
bottom: "conv4_3/incep/2_0"
top: "conv4_3/incep/2_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_3/incep/2_0/bn_scale"
type: "Scale"
bottom: "conv4_3/incep/2_0"
top: "conv4_3/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_3/incep/2_0/relu"
type: "ReLU"
bottom: "conv4_3/incep/2_0"
top: "conv4_3/incep/2_0"
}
layer {
name: "conv4_3/incep/2_1/conv"
type: "Convolution"
bottom: "conv4_3/incep/2_0"
top: "conv4_3/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 48
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_3/incep/2_1/bn"
type: "BatchNorm"
bottom: "conv4_3/incep/2_1"
top: "conv4_3/incep/2_1"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_3/incep/2_1/bn_scale"
type: "Scale"
bottom: "conv4_3/incep/2_1"
top: "conv4_3/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_3/incep/2_1/relu"
type: "ReLU"
bottom: "conv4_3/incep/2_1"
top: "conv4_3/incep/2_1"
}
layer {
name: "conv4_3/incep"
type: "Concat"
bottom: "conv4_3/incep/0"
bottom: "conv4_3/incep/1_0"
bottom: "conv4_3/incep/2_1"
top: "conv4_3/incep"
}
layer {
name: "conv4_3/out/conv"
type: "Convolution"
bottom: "conv4_3/incep"
top: "conv4_3/out"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 256
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_3/input"
type: "Power"
bottom: "conv4_2"
top: "conv4_3/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv4_3"
type: "Eltwise"
bottom: "conv4_3/out"
bottom: "conv4_3/input"
top: "conv4_3"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv4_4/incep/bn"
type: "BatchNorm"
bottom: "conv4_3"
top: "conv4_4/incep/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_4/incep/bn_scale"
type: "Scale"
bottom: "conv4_4/incep/pre"
top: "conv4_4/incep/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_4/incep/relu"
type: "ReLU"
bottom: "conv4_4/incep/pre"
top: "conv4_4/incep/pre"
}
layer {
name: "conv4_4/incep/0/conv"
type: "Convolution"
bottom: "conv4_4/incep/pre"
top: "conv4_4/incep/0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_4/incep/0/bn"
type: "BatchNorm"
bottom: "conv4_4/incep/0"
top: "conv4_4/incep/0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_4/incep/0/bn_scale"
type: "Scale"
bottom: "conv4_4/incep/0"
top: "conv4_4/incep/0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_4/incep/0/relu"
type: "ReLU"
bottom: "conv4_4/incep/0"
top: "conv4_4/incep/0"
}
layer {
name: "conv4_4/incep/1_reduce/conv"
type: "Convolution"
bottom: "conv4_4/incep/pre"
top: "conv4_4/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_4/incep/1_reduce/bn"
type: "BatchNorm"
bottom: "conv4_4/incep/1_reduce"
top: "conv4_4/incep/1_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_4/incep/1_reduce/bn_scale"
type: "Scale"
bottom: "conv4_4/incep/1_reduce"
top: "conv4_4/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_4/incep/1_reduce/relu"
type: "ReLU"
bottom: "conv4_4/incep/1_reduce"
top: "conv4_4/incep/1_reduce"
}
layer {
name: "conv4_4/incep/1_0/conv"
type: "Convolution"
bottom: "conv4_4/incep/1_reduce"
top: "conv4_4/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 128
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_4/incep/1_0/bn"
type: "BatchNorm"
bottom: "conv4_4/incep/1_0"
top: "conv4_4/incep/1_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_4/incep/1_0/bn_scale"
type: "Scale"
bottom: "conv4_4/incep/1_0"
top: "conv4_4/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_4/incep/1_0/relu"
type: "ReLU"
bottom: "conv4_4/incep/1_0"
top: "conv4_4/incep/1_0"
}
layer {
name: "conv4_4/incep/2_reduce/conv"
type: "Convolution"
bottom: "conv4_4/incep/pre"
top: "conv4_4/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 24
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_4/incep/2_reduce/bn"
type: "BatchNorm"
bottom: "conv4_4/incep/2_reduce"
top: "conv4_4/incep/2_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_4/incep/2_reduce/bn_scale"
type: "Scale"
bottom: "conv4_4/incep/2_reduce"
top: "conv4_4/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_4/incep/2_reduce/relu"
type: "ReLU"
bottom: "conv4_4/incep/2_reduce"
top: "conv4_4/incep/2_reduce"
}
layer {
name: "conv4_4/incep/2_0/conv"
type: "Convolution"
bottom: "conv4_4/incep/2_reduce"
top: "conv4_4/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 48
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_4/incep/2_0/bn"
type: "BatchNorm"
bottom: "conv4_4/incep/2_0"
top: "conv4_4/incep/2_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_4/incep/2_0/bn_scale"
type: "Scale"
bottom: "conv4_4/incep/2_0"
top: "conv4_4/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_4/incep/2_0/relu"
type: "ReLU"
bottom: "conv4_4/incep/2_0"
top: "conv4_4/incep/2_0"
}
layer {
name: "conv4_4/incep/2_1/conv"
type: "Convolution"
bottom: "conv4_4/incep/2_0"
top: "conv4_4/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 48
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_4/incep/2_1/bn"
type: "BatchNorm"
bottom: "conv4_4/incep/2_1"
top: "conv4_4/incep/2_1"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv4_4/incep/2_1/bn_scale"
type: "Scale"
bottom: "conv4_4/incep/2_1"
top: "conv4_4/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv4_4/incep/2_1/relu"
type: "ReLU"
bottom: "conv4_4/incep/2_1"
top: "conv4_4/incep/2_1"
}
layer {
name: "conv4_4/incep"
type: "Concat"
bottom: "conv4_4/incep/0"
bottom: "conv4_4/incep/1_0"
bottom: "conv4_4/incep/2_1"
top: "conv4_4/incep"
}
layer {
name: "conv4_4/out/conv"
type: "Convolution"
bottom: "conv4_4/incep"
top: "conv4_4/out"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 256
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv4_4/input"
type: "Power"
bottom: "conv4_3"
top: "conv4_4/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv4_4"
type: "Eltwise"
bottom: "conv4_4/out"
bottom: "conv4_4/input"
top: "conv4_4"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
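# conv5_1 starts the next stage: every branch (and the max-pool) runs at
# stride 2, the block width grows to 384 output channels, and the shortcut is
# a 1x1 stride-2 projection of conv4_4. conv5_2 through conv5_4 are stride-1
# residual blocks of the same shape.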
layer {
name: "conv5_1/incep/bn"
type: "BatchNorm"
bottom: "conv4_4"
top: "conv5_1/incep/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_1/incep/bn_scale"
type: "Scale"
bottom: "conv5_1/incep/pre"
top: "conv5_1/incep/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_1/incep/relu"
type: "ReLU"
bottom: "conv5_1/incep/pre"
top: "conv5_1/incep/pre"
}
layer {
name: "conv5_1/incep/0/conv"
type: "Convolution"
bottom: "conv5_1/incep/pre"
top: "conv5_1/incep/0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv5_1/incep/0/bn"
type: "BatchNorm"
bottom: "conv5_1/incep/0"
top: "conv5_1/incep/0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_1/incep/0/bn_scale"
type: "Scale"
bottom: "conv5_1/incep/0"
top: "conv5_1/incep/0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_1/incep/0/relu"
type: "ReLU"
bottom: "conv5_1/incep/0"
top: "conv5_1/incep/0"
}
layer {
name: "conv5_1/incep/1_reduce/conv"
type: "Convolution"
bottom: "conv5_1/incep/pre"
top: "conv5_1/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 96
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv5_1/incep/1_reduce/bn"
type: "BatchNorm"
bottom: "conv5_1/incep/1_reduce"
top: "conv5_1/incep/1_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_1/incep/1_reduce/bn_scale"
type: "Scale"
bottom: "conv5_1/incep/1_reduce"
top: "conv5_1/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_1/incep/1_reduce/relu"
type: "ReLU"
bottom: "conv5_1/incep/1_reduce"
top: "conv5_1/incep/1_reduce"
}
layer {
name: "conv5_1/incep/1_0/conv"
type: "Convolution"
bottom: "conv5_1/incep/1_reduce"
top: "conv5_1/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 192
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_1/incep/1_0/bn"
type: "BatchNorm"
bottom: "conv5_1/incep/1_0"
top: "conv5_1/incep/1_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_1/incep/1_0/bn_scale"
type: "Scale"
bottom: "conv5_1/incep/1_0"
top: "conv5_1/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_1/incep/1_0/relu"
type: "ReLU"
bottom: "conv5_1/incep/1_0"
top: "conv5_1/incep/1_0"
}
layer {
name: "conv5_1/incep/2_reduce/conv"
type: "Convolution"
bottom: "conv5_1/incep/pre"
top: "conv5_1/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 32
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv5_1/incep/2_reduce/bn"
type: "BatchNorm"
bottom: "conv5_1/incep/2_reduce"
top: "conv5_1/incep/2_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_1/incep/2_reduce/bn_scale"
type: "Scale"
bottom: "conv5_1/incep/2_reduce"
top: "conv5_1/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_1/incep/2_reduce/relu"
type: "ReLU"
bottom: "conv5_1/incep/2_reduce"
top: "conv5_1/incep/2_reduce"
}
layer {
name: "conv5_1/incep/2_0/conv"
type: "Convolution"
bottom: "conv5_1/incep/2_reduce"
top: "conv5_1/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_1/incep/2_0/bn"
type: "BatchNorm"
bottom: "conv5_1/incep/2_0"
top: "conv5_1/incep/2_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_1/incep/2_0/bn_scale"
type: "Scale"
bottom: "conv5_1/incep/2_0"
top: "conv5_1/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_1/incep/2_0/relu"
type: "ReLU"
bottom: "conv5_1/incep/2_0"
top: "conv5_1/incep/2_0"
}
layer {
name: "conv5_1/incep/2_1/conv"
type: "Convolution"
bottom: "conv5_1/incep/2_0"
top: "conv5_1/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_1/incep/2_1/bn"
type: "BatchNorm"
bottom: "conv5_1/incep/2_1"
top: "conv5_1/incep/2_1"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_1/incep/2_1/bn_scale"
type: "Scale"
bottom: "conv5_1/incep/2_1"
top: "conv5_1/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_1/incep/2_1/relu"
type: "ReLU"
bottom: "conv5_1/incep/2_1"
top: "conv5_1/incep/2_1"
}
layer {
name: "conv5_1/incep/pool"
type: "Pooling"
bottom: "conv5_1/incep/pre"
top: "conv5_1/incep/pool"
pooling_param {
pool: MAX
kernel_size: 3
stride: 2
pad: 0
}
}
layer {
name: "conv5_1/incep/poolproj/conv"
type: "Convolution"
bottom: "conv5_1/incep/pool"
top: "conv5_1/incep/poolproj"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 128
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_1/incep/poolproj/bn"
type: "BatchNorm"
bottom: "conv5_1/incep/poolproj"
top: "conv5_1/incep/poolproj"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_1/incep/poolproj/bn_scale"
type: "Scale"
bottom: "conv5_1/incep/poolproj"
top: "conv5_1/incep/poolproj"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_1/incep/poolproj/relu"
type: "ReLU"
bottom: "conv5_1/incep/poolproj"
top: "conv5_1/incep/poolproj"
}
layer {
name: "conv5_1/incep"
type: "Concat"
bottom: "conv5_1/incep/0"
bottom: "conv5_1/incep/1_0"
bottom: "conv5_1/incep/2_1"
bottom: "conv5_1/incep/poolproj"
top: "conv5_1/incep"
}
layer {
name: "conv5_1/out/conv"
type: "Convolution"
bottom: "conv5_1/incep"
top: "conv5_1/out"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 384
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_1/proj"
type: "Convolution"
bottom: "conv4_4"
top: "conv5_1/proj"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 384
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 2
stride_w: 2
}
}
layer {
name: "conv5_1"
type: "Eltwise"
bottom: "conv5_1/out"
bottom: "conv5_1/proj"
top: "conv5_1"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv5_2/incep/bn"
type: "BatchNorm"
bottom: "conv5_1"
top: "conv5_2/incep/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_2/incep/bn_scale"
type: "Scale"
bottom: "conv5_2/incep/pre"
top: "conv5_2/incep/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_2/incep/relu"
type: "ReLU"
bottom: "conv5_2/incep/pre"
top: "conv5_2/incep/pre"
}
layer {
name: "conv5_2/incep/0/conv"
type: "Convolution"
bottom: "conv5_2/incep/pre"
top: "conv5_2/incep/0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_2/incep/0/bn"
type: "BatchNorm"
bottom: "conv5_2/incep/0"
top: "conv5_2/incep/0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_2/incep/0/bn_scale"
type: "Scale"
bottom: "conv5_2/incep/0"
top: "conv5_2/incep/0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_2/incep/0/relu"
type: "ReLU"
bottom: "conv5_2/incep/0"
top: "conv5_2/incep/0"
}
layer {
name: "conv5_2/incep/1_reduce/conv"
type: "Convolution"
bottom: "conv5_2/incep/pre"
top: "conv5_2/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 96
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_2/incep/1_reduce/bn"
type: "BatchNorm"
bottom: "conv5_2/incep/1_reduce"
top: "conv5_2/incep/1_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_2/incep/1_reduce/bn_scale"
type: "Scale"
bottom: "conv5_2/incep/1_reduce"
top: "conv5_2/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_2/incep/1_reduce/relu"
type: "ReLU"
bottom: "conv5_2/incep/1_reduce"
top: "conv5_2/incep/1_reduce"
}
layer {
name: "conv5_2/incep/1_0/conv"
type: "Convolution"
bottom: "conv5_2/incep/1_reduce"
top: "conv5_2/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 192
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_2/incep/1_0/bn"
type: "BatchNorm"
bottom: "conv5_2/incep/1_0"
top: "conv5_2/incep/1_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_2/incep/1_0/bn_scale"
type: "Scale"
bottom: "conv5_2/incep/1_0"
top: "conv5_2/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_2/incep/1_0/relu"
type: "ReLU"
bottom: "conv5_2/incep/1_0"
top: "conv5_2/incep/1_0"
}
layer {
name: "conv5_2/incep/2_reduce/conv"
type: "Convolution"
bottom: "conv5_2/incep/pre"
top: "conv5_2/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 32
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_2/incep/2_reduce/bn"
type: "BatchNorm"
bottom: "conv5_2/incep/2_reduce"
top: "conv5_2/incep/2_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_2/incep/2_reduce/bn_scale"
type: "Scale"
bottom: "conv5_2/incep/2_reduce"
top: "conv5_2/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_2/incep/2_reduce/relu"
type: "ReLU"
bottom: "conv5_2/incep/2_reduce"
top: "conv5_2/incep/2_reduce"
}
layer {
name: "conv5_2/incep/2_0/conv"
type: "Convolution"
bottom: "conv5_2/incep/2_reduce"
top: "conv5_2/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_2/incep/2_0/bn"
type: "BatchNorm"
bottom: "conv5_2/incep/2_0"
top: "conv5_2/incep/2_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_2/incep/2_0/bn_scale"
type: "Scale"
bottom: "conv5_2/incep/2_0"
top: "conv5_2/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_2/incep/2_0/relu"
type: "ReLU"
bottom: "conv5_2/incep/2_0"
top: "conv5_2/incep/2_0"
}
layer {
name: "conv5_2/incep/2_1/conv"
type: "Convolution"
bottom: "conv5_2/incep/2_0"
top: "conv5_2/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_2/incep/2_1/bn"
type: "BatchNorm"
bottom: "conv5_2/incep/2_1"
top: "conv5_2/incep/2_1"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_2/incep/2_1/bn_scale"
type: "Scale"
bottom: "conv5_2/incep/2_1"
top: "conv5_2/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_2/incep/2_1/relu"
type: "ReLU"
bottom: "conv5_2/incep/2_1"
top: "conv5_2/incep/2_1"
}
layer {
name: "conv5_2/incep"
type: "Concat"
bottom: "conv5_2/incep/0"
bottom: "conv5_2/incep/1_0"
bottom: "conv5_2/incep/2_1"
top: "conv5_2/incep"
}
layer {
name: "conv5_2/out/conv"
type: "Convolution"
bottom: "conv5_2/incep"
top: "conv5_2/out"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 384
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_2/input"
type: "Power"
bottom: "conv5_1"
top: "conv5_2/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv5_2"
type: "Eltwise"
bottom: "conv5_2/out"
bottom: "conv5_2/input"
top: "conv5_2"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv5_3/incep/bn"
type: "BatchNorm"
bottom: "conv5_2"
top: "conv5_3/incep/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_3/incep/bn_scale"
type: "Scale"
bottom: "conv5_3/incep/pre"
top: "conv5_3/incep/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_3/incep/relu"
type: "ReLU"
bottom: "conv5_3/incep/pre"
top: "conv5_3/incep/pre"
}
layer {
name: "conv5_3/incep/0/conv"
type: "Convolution"
bottom: "conv5_3/incep/pre"
top: "conv5_3/incep/0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_3/incep/0/bn"
type: "BatchNorm"
bottom: "conv5_3/incep/0"
top: "conv5_3/incep/0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_3/incep/0/bn_scale"
type: "Scale"
bottom: "conv5_3/incep/0"
top: "conv5_3/incep/0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_3/incep/0/relu"
type: "ReLU"
bottom: "conv5_3/incep/0"
top: "conv5_3/incep/0"
}
layer {
name: "conv5_3/incep/1_reduce/conv"
type: "Convolution"
bottom: "conv5_3/incep/pre"
top: "conv5_3/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 96
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_3/incep/1_reduce/bn"
type: "BatchNorm"
bottom: "conv5_3/incep/1_reduce"
top: "conv5_3/incep/1_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_3/incep/1_reduce/bn_scale"
type: "Scale"
bottom: "conv5_3/incep/1_reduce"
top: "conv5_3/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_3/incep/1_reduce/relu"
type: "ReLU"
bottom: "conv5_3/incep/1_reduce"
top: "conv5_3/incep/1_reduce"
}
layer {
name: "conv5_3/incep/1_0/conv"
type: "Convolution"
bottom: "conv5_3/incep/1_reduce"
top: "conv5_3/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 192
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_3/incep/1_0/bn"
type: "BatchNorm"
bottom: "conv5_3/incep/1_0"
top: "conv5_3/incep/1_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_3/incep/1_0/bn_scale"
type: "Scale"
bottom: "conv5_3/incep/1_0"
top: "conv5_3/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_3/incep/1_0/relu"
type: "ReLU"
bottom: "conv5_3/incep/1_0"
top: "conv5_3/incep/1_0"
}
layer {
name: "conv5_3/incep/2_reduce/conv"
type: "Convolution"
bottom: "conv5_3/incep/pre"
top: "conv5_3/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 32
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_3/incep/2_reduce/bn"
type: "BatchNorm"
bottom: "conv5_3/incep/2_reduce"
top: "conv5_3/incep/2_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_3/incep/2_reduce/bn_scale"
type: "Scale"
bottom: "conv5_3/incep/2_reduce"
top: "conv5_3/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_3/incep/2_reduce/relu"
type: "ReLU"
bottom: "conv5_3/incep/2_reduce"
top: "conv5_3/incep/2_reduce"
}
layer {
name: "conv5_3/incep/2_0/conv"
type: "Convolution"
bottom: "conv5_3/incep/2_reduce"
top: "conv5_3/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_3/incep/2_0/bn"
type: "BatchNorm"
bottom: "conv5_3/incep/2_0"
top: "conv5_3/incep/2_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_3/incep/2_0/bn_scale"
type: "Scale"
bottom: "conv5_3/incep/2_0"
top: "conv5_3/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_3/incep/2_0/relu"
type: "ReLU"
bottom: "conv5_3/incep/2_0"
top: "conv5_3/incep/2_0"
}
layer {
name: "conv5_3/incep/2_1/conv"
type: "Convolution"
bottom: "conv5_3/incep/2_0"
top: "conv5_3/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_3/incep/2_1/bn"
type: "BatchNorm"
bottom: "conv5_3/incep/2_1"
top: "conv5_3/incep/2_1"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_3/incep/2_1/bn_scale"
type: "Scale"
bottom: "conv5_3/incep/2_1"
top: "conv5_3/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_3/incep/2_1/relu"
type: "ReLU"
bottom: "conv5_3/incep/2_1"
top: "conv5_3/incep/2_1"
}
layer {
name: "conv5_3/incep"
type: "Concat"
bottom: "conv5_3/incep/0"
bottom: "conv5_3/incep/1_0"
bottom: "conv5_3/incep/2_1"
top: "conv5_3/incep"
}
layer {
name: "conv5_3/out/conv"
type: "Convolution"
bottom: "conv5_3/incep"
top: "conv5_3/out"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
convolution_param {
num_output: 384
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_3/input"
type: "Power"
bottom: "conv5_2"
top: "conv5_3/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv5_3"
type: "Eltwise"
bottom: "conv5_3/out"
bottom: "conv5_3/input"
top: "conv5_3"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv5_4/incep/bn"
type: "BatchNorm"
bottom: "conv5_3"
top: "conv5_4/incep/pre"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_4/incep/bn_scale"
type: "Scale"
bottom: "conv5_4/incep/pre"
top: "conv5_4/incep/pre"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_4/incep/relu"
type: "ReLU"
bottom: "conv5_4/incep/pre"
top: "conv5_4/incep/pre"
}
layer {
name: "conv5_4/incep/0/conv"
type: "Convolution"
bottom: "conv5_4/incep/pre"
top: "conv5_4/incep/0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_4/incep/0/bn"
type: "BatchNorm"
bottom: "conv5_4/incep/0"
top: "conv5_4/incep/0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_4/incep/0/bn_scale"
type: "Scale"
bottom: "conv5_4/incep/0"
top: "conv5_4/incep/0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_4/incep/0/relu"
type: "ReLU"
bottom: "conv5_4/incep/0"
top: "conv5_4/incep/0"
}
layer {
name: "conv5_4/incep/1_reduce/conv"
type: "Convolution"
bottom: "conv5_4/incep/pre"
top: "conv5_4/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 96
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_4/incep/1_reduce/bn"
type: "BatchNorm"
bottom: "conv5_4/incep/1_reduce"
top: "conv5_4/incep/1_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_4/incep/1_reduce/bn_scale"
type: "Scale"
bottom: "conv5_4/incep/1_reduce"
top: "conv5_4/incep/1_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_4/incep/1_reduce/relu"
type: "ReLU"
bottom: "conv5_4/incep/1_reduce"
top: "conv5_4/incep/1_reduce"
}
layer {
name: "conv5_4/incep/1_0/conv"
type: "Convolution"
bottom: "conv5_4/incep/1_reduce"
top: "conv5_4/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 192
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_4/incep/1_0/bn"
type: "BatchNorm"
bottom: "conv5_4/incep/1_0"
top: "conv5_4/incep/1_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_4/incep/1_0/bn_scale"
type: "Scale"
bottom: "conv5_4/incep/1_0"
top: "conv5_4/incep/1_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_4/incep/1_0/relu"
type: "ReLU"
bottom: "conv5_4/incep/1_0"
top: "conv5_4/incep/1_0"
}
layer {
name: "conv5_4/incep/2_reduce/conv"
type: "Convolution"
bottom: "conv5_4/incep/pre"
top: "conv5_4/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 32
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_4/incep/2_reduce/bn"
type: "BatchNorm"
bottom: "conv5_4/incep/2_reduce"
top: "conv5_4/incep/2_reduce"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_4/incep/2_reduce/bn_scale"
type: "Scale"
bottom: "conv5_4/incep/2_reduce"
top: "conv5_4/incep/2_reduce"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_4/incep/2_reduce/relu"
type: "ReLU"
bottom: "conv5_4/incep/2_reduce"
top: "conv5_4/incep/2_reduce"
}
layer {
name: "conv5_4/incep/2_0/conv"
type: "Convolution"
bottom: "conv5_4/incep/2_reduce"
top: "conv5_4/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_4/incep/2_0/bn"
type: "BatchNorm"
bottom: "conv5_4/incep/2_0"
top: "conv5_4/incep/2_0"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_4/incep/2_0/bn_scale"
type: "Scale"
bottom: "conv5_4/incep/2_0"
top: "conv5_4/incep/2_0"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_4/incep/2_0/relu"
type: "ReLU"
bottom: "conv5_4/incep/2_0"
top: "conv5_4/incep/2_0"
}
layer {
name: "conv5_4/incep/2_1/conv"
type: "Convolution"
bottom: "conv5_4/incep/2_0"
top: "conv5_4/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 64
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 1
pad_w: 1
kernel_h: 3
kernel_w: 3
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_4/incep/2_1/bn"
type: "BatchNorm"
bottom: "conv5_4/incep/2_1"
top: "conv5_4/incep/2_1"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_4/incep/2_1/bn_scale"
type: "Scale"
bottom: "conv5_4/incep/2_1"
top: "conv5_4/incep/2_1"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_4/incep/2_1/relu"
type: "ReLU"
bottom: "conv5_4/incep/2_1"
top: "conv5_4/incep/2_1"
}
layer {
name: "conv5_4/incep"
type: "Concat"
bottom: "conv5_4/incep/0"
bottom: "conv5_4/incep/1_0"
bottom: "conv5_4/incep/2_1"
top: "conv5_4/incep"
}
layer {
name: "conv5_4/out/conv"
type: "Convolution"
bottom: "conv5_4/incep"
top: "conv5_4/out"
param {
lr_mult: 1.0
decay_mult: 1.0
}
convolution_param {
num_output: 384
bias_term: false
weight_filler {
type: "xavier"
}
pad_h: 0
pad_w: 0
kernel_h: 1
kernel_w: 1
stride_h: 1
stride_w: 1
}
}
layer {
name: "conv5_4/out/bn"
type: "BatchNorm"
bottom: "conv5_4/out"
top: "conv5_4/out"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_4/out/bn_scale"
type: "Scale"
bottom: "conv5_4/out"
top: "conv5_4/out"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_4/input"
type: "Power"
bottom: "conv5_3"
top: "conv5_4/input"
power_param {
power: 1
scale: 1
shift: 0
}
}
layer {
name: "conv5_4"
type: "Eltwise"
bottom: "conv5_4/out"
bottom: "conv5_4/input"
top: "conv5_4"
eltwise_param {
operation: SUM
coeff: 1
coeff: 1
}
}
layer {
name: "conv5_4/last_bn"
type: "BatchNorm"
bottom: "conv5_4"
top: "conv5_4"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "conv5_4/last_bn_scale"
type: "Scale"
bottom: "conv5_4"
top: "conv5_4"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "conv5_4/last_relu"
type: "ReLU"
bottom: "conv5_4"
top: "conv5_4"
}
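# Unlike the earlier blocks, conv5_4/out is bias-free and carries its own
# BN/Scale before the residual sum, and the summed conv5_4 is then normalized
# and activated by last_bn / last_bn_scale / last_relu -- there is no
# following block whose pre-activation would otherwise do this.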

# hyper feature
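# The three backbone scales are merged here at the common 1/16 resolution:
# conv3_4 (stride 8) is max-pooled down, conv4_4 (stride 16) passes through,
# and conv5_4 (stride 32) is upsampled 2x, then all three are concatenated
# channel-wise into the "concat" hyper-feature map.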

layer {
name: "downsample"
type: "Pooling"
bottom: "conv3_4"
top: "downsample"
pooling_param { kernel_size: 3 stride: 2 pad: 0 pool: MAX }
}
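# The Deconvolution below implements fixed 2x bilinear upsampling: group=384
# with num_output=384 makes it depthwise, kernel 4 / stride 2 / pad 1 exactly
# doubles the spatial size, and the "bilinear" filler with lr_mult 0 keeps
# the interpolation weights frozen.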
layer {
name: "upsample"
type: "Deconvolution"
bottom: "conv5_4"
top: "upsample"
param { lr_mult: 0 decay_mult: 0 }
convolution_param {
num_output: 384 kernel_size: 4 pad: 1 stride: 2 group: 384
weight_filler { type: "bilinear" }
bias_term: false
}
}
layer {
name: "concat"
bottom: "downsample"
bottom: "conv4_4"
bottom: "upsample"
top: "concat"
type: "Concat"
concat_param { axis: 1 }
}
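# Two 1x1 reductions split the hyper feature: convf_rpn (128 channels) feeds
# the RPN, while convf_2 (384 channels) is concatenated with convf_rpn into
# the 512-channel "convf" blob consumed by RoI pooling, so the RPN features
# are shared with the detection head.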

layer {
name: "convf_rpn"
type: "Convolution"
bottom: "concat"
top: "convf_rpn"
param { lr_mult: 1.0 decay_mult: 1.0 }
param { lr_mult: 2.0 decay_mult: 0 }
convolution_param {
num_output: 128 kernel_size: 1 pad: 0 stride: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "reluf_rpn"
type: "ReLU"
bottom: "convf_rpn"
top: "convf_rpn"
}

layer {
name: "convf_2"
type: "Convolution"
bottom: "concat"
top: "convf_2"
param { lr_mult: 1.0 decay_mult: 1.0 }
param { lr_mult: 2.0 decay_mult: 0 }
convolution_param {
num_output: 384 kernel_size: 1 pad: 0 stride: 1
weight_filler { type: "xavier" std: 0.1 }
bias_filler { type: "constant" value: 0.1 }
}
}
layer {
name: "reluf_2"
type: "ReLU"
bottom: "convf_2"
top: "convf_2"
}

layer {
name: "concat_convf"
bottom: "convf_rpn"
bottom: "convf_2"
top: "convf"
type: "Concat"
concat_param { axis: 1 }
}

################################################################################

# RPN

################################################################################

# RPN conv
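# A 3x3 conv followed by two sibling 1x1 convs predicts, at every position,
# an objectness score pair and four box deltas for each of the 42 anchors
# (7 aspect ratios x 6 scales, matching the param_str of rpn-data below):
# 2 * 42 = 84 and 4 * 42 = 168 outputs.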

layer {
name: "rpn_conv1"
type: "Convolution"
bottom: "convf_rpn"
top: "rpn_conv1"
param { lr_mult: 1.0 decay_mult: 1.0 }
param { lr_mult: 2.0 decay_mult: 0 }
convolution_param {
num_output: 384 kernel_size: 3 pad: 1 stride: 1
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "rpn_relu1"
type: "ReLU"
bottom: "rpn_conv1"
top: "rpn_conv1"
}
layer {
name: "rpn_cls_score"
type: "Convolution"
bottom: "rpn_conv1"
top: "rpn_cls_score"
param { lr_mult: 1.0 decay_mult: 1.0 }
param { lr_mult: 2.0 decay_mult: 0 }
convolution_param {
num_output: 84 # 2(bg/fg) * 42(anchors)
kernel_size: 1 pad: 0 stride: 1
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "rpn_bbox_pred"
type: "Convolution"
bottom: "rpn_conv1"
top: "rpn_bbox_pred"
param { lr_mult: 1.0 decay_mult: 1.0 }
param { lr_mult: 2.0 decay_mult: 0 }
convolution_param {
num_output: 168 # 4 * 42(anchors)
kernel_size: 1 pad: 0 stride: 1
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
bottom: "rpn_cls_score"
top: "rpn_cls_score_reshape"
name: "rpn_cls_score_reshape"
type: "Reshape"
reshape_param { shape { dim: 0 dim: 2 dim: -1 dim: 0 } }
}
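# The reshape turns the (N, 2*42, H, W) score map into (N, 2, 42*H, W) --
# dim: 0 copies a dimension, dim: -1 is inferred -- so SoftmaxWithLoss
# normalizes over the two bg/fg channels of each individual anchor.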
layer {
name: 'rpn-data'
type: 'Python'
bottom: 'rpn_cls_score'
bottom: 'gt_boxes'
bottom: 'im_info'
bottom: 'data'
top: 'rpn_labels'
top: 'rpn_bbox_targets'
top: 'rpn_bbox_inside_weights'
top: 'rpn_bbox_outside_weights'
include { phase: TRAIN }
python_param {
module: 'rpn.anchor_target_layer'
layer: 'AnchorTargetLayer'
param_str: "{'feat_stride': 16, 'ratios': [0.333, 0.5, 0.667, 1, 1.5, 2, 3], 'scales': [2, 3, 5, 9, 16, 32]}"
}
}
layer {
name: "rpn_loss_cls"
type: "SoftmaxWithLoss"
bottom: "rpn_cls_score_reshape"
bottom: "rpn_labels"
propagate_down: 1
propagate_down: 0
top: "rpn_loss_cls"
include { phase: TRAIN }
loss_weight: 1
loss_param { ignore_label: -1 normalize: true }
}
layer {
name: "rpn_loss_bbox"
type: "SmoothL1Loss"
bottom: "rpn_bbox_pred"
bottom: "rpn_bbox_targets"
bottom: "rpn_bbox_inside_weights"
bottom: "rpn_bbox_outside_weights"
top: "rpn_loss_bbox"
include { phase: TRAIN }
loss_weight: 1
smooth_l1_loss_param { sigma: 3.0 }
}

################################################################################

## Proposal

################################################################################
layer {
name: "rpn_cls_prob"
type: "Softmax"
bottom: "rpn_cls_score_reshape"
top: "rpn_cls_prob"
}
layer {
name: 'rpn_cls_prob_reshape'
type: 'Reshape'
bottom: 'rpn_cls_prob'
top: 'rpn_cls_prob_reshape'
reshape_param { shape { dim: 0 dim: 84 dim: -1 dim: 0 } }
}
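# Back to (N, 84, H, W): 2 probabilities x 42 anchors per position, the
# layout the proposal layers below expect.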
layer {
name: 'proposal'
type: 'Python'
bottom: 'rpn_cls_prob_reshape'
bottom: 'rpn_bbox_pred'
bottom: 'im_info'
bottom: 'gt_boxes'
top: 'rois'
top: 'labels'
top: 'bbox_targets'
top: 'bbox_inside_weights'
top: 'bbox_outside_weights'
include { phase: TRAIN }
python_param {
module: 'rpn.proposal_layer'
layer: 'ProposalLayer2'
param_str: "{'feat_stride': 16, 'num_classes': 21, 'ratios': [0.333, 0.5, 0.667, 1, 1.5, 2, 3], 'scales': [2, 3, 5, 9, 16, 32]}"
}
}
layer {
name: 'proposal'
type: 'Python'
bottom: 'rpn_cls_prob_reshape'
bottom: 'rpn_bbox_pred'
bottom: 'im_info'
top: 'rois'
top: 'scores'
include { phase: TEST }
python_param {
module: 'rpn.proposal_layer'
layer: 'ProposalLayer'
param_str: "{'feat_stride': 16, 'ratios': [0.333, 0.5, 0.667, 1, 1.5, 2, 3], 'scales': [2, 3, 5, 9, 16, 32]}"
}
}

################################################################################

## RCNN

################################################################################
layer {
name: "roi_pool_conv5"
type: "ROIPooling"
bottom: "convf"
bottom: "rois"
top: "roi_pool_conv5"
roi_pooling_param {
pooled_w: 6
pooled_h: 6
spatial_scale: 0.0625 # 1/16
}
}
layer {
name: "fc6"
type: "InnerProduct"
bottom: "roi_pool_conv5"
top: "fc6"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
inner_product_param {
num_output: 4096
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "fc6/bn"
type: "BatchNorm"
bottom: "fc6"
top: "fc6"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "fc6/scale"
type: "Scale"
bottom: "fc6"
top: "fc6"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "fc6/dropout"
type: "Dropout"
bottom: "fc6"
top: "fc6"
dropout_param {
dropout_ratio: 0.25
}
}
layer {
name: "fc6/relu"
type: "ReLU"
bottom: "fc6"
top: "fc6"
}
layer {
name: "fc7"
type: "InnerProduct"
bottom: "fc6"
top: "fc7"
param {
lr_mult: 1.0
decay_mult: 1.0
}
param {
lr_mult: 2.0
decay_mult: 0.0
}
inner_product_param {
num_output: 4096
weight_filler {
type: "xavier"
}
bias_filler {
type: "constant"
value: 0.1
}
}
}
layer {
name: "fc7/bn"
type: "BatchNorm"
bottom: "fc7"
top: "fc7"
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
param {
lr_mult: 0
decay_mult: 0
}
batch_norm_param {
use_global_stats: true
}
}
layer {
name: "fc7/scale"
type: "Scale"
bottom: "fc7"
top: "fc7"
param {
lr_mult: 1.0
decay_mult: 0
}
param {
lr_mult: 1.0
decay_mult: 0
}
scale_param {
bias_term: true
}
}
layer {
name: "fc7/dropout"
type: "Dropout"
bottom: "fc7"
top: "fc7"
dropout_param {
dropout_ratio: 0.25
}
}
layer {
name: "fc7/relu"
type: "ReLU"
bottom: "fc7"
top: "fc7"
}
layer {
name: "cls_score"
type: "InnerProduct"
bottom: "fc7"
top: "cls_score"
param { lr_mult: 1.0 }
param { lr_mult: 2.0 }
inner_product_param {
num_output: 21
weight_filler { type: "gaussian" std: 0.01 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "bbox_pred"
type: "InnerProduct"
bottom: "fc7"
top: "bbox_pred"
param { lr_mult: 1.0 }
param { lr_mult: 2.0 }
inner_product_param {
num_output: 84
weight_filler { type: "gaussian" std: 0.001 }
bias_filler { type: "constant" value: 0 }
}
}
layer {
name: "loss_cls"
type: "SoftmaxWithLoss"
bottom: "cls_score"
bottom: "labels"
propagate_down: 1
propagate_down: 0
top: "loss_cls"
include { phase: TRAIN }
loss_weight: 1
loss_param { ignore_label: -1 normalize: true }
}
layer {
name: "loss_bbox"
type: "SmoothL1Loss"
bottom: "bbox_pred"
bottom: "bbox_targets"
bottom: "bbox_inside_weights"
bottom: "bbox_outside_weights"
top: "loss_bbox"
include { phase: TRAIN }
loss_weight: 1
}
layer {
name: "cls_prob"
type: "Softmax"
bottom: "cls_score"
top: "cls_prob"
include { phase: TEST }
loss_param {
ignore_label: -1
normalize: true
}
}
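
As a quick sanity check, the class- and anchor-dependent output sizes in the prototxt above all follow from two numbers. A minimal sketch in Python, with values taken straight from the layer definitions:

ratios = [0.333, 0.5, 0.667, 1, 1.5, 2, 3]
scales = [2, 3, 5, 9, 16, 32]
num_anchors = len(ratios) * len(scales)   # 42 anchors per position

assert 2 * num_anchors == 84     # rpn_cls_score num_output (bg/fg per anchor)
assert 4 * num_anchors == 168    # rpn_bbox_pred num_output (4 coords per anchor)

num_classes = 21                 # cls_score num_output: 20 VOC classes + background
assert 4 * num_classes == 84     # bbox_pred num_output (one 4-vector per class)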

IndexError: too many indices for array

I followed the original steps in https://github.com/sanghoon/pva-faster-rcnn and ran the demo (PVANET+ on PASCAL VOC 2007), then got this error:
Traceback (most recent call last):
File "./tools/test_net.py", line 90, in
test_net(net, imdb, max_per_image=args.max_per_image, vis=args.vis)
File "../lib/fast_rcnn/test.py", line 319, in test_net
imdb.evaluate_detections(all_boxes, output_dir)
File "../lib/datasets/pascal_voc.py", line 322, in evaluate_detections
self._do_python_eval(output_dir)
File "../lib/datasets/pascal_voc.py", line 285, in _do_python_eval
use_07_metric=use_07_metric)
File "../lib/datasets/voc_eval.py", line 148, in voc_eval
BB = BB[sorted_ind, :]
IndexError: too many indices for array
I have tried the solution in #8, but it does not work. Has anyone faced this error? Could you please give me a solution?
Thank you,
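
A common workaround (an assumption about the failure mode, not a confirmed fix for every case): this error usually appears when some class has zero detections, so BB parses to an empty 1-D array and the 2-D indexing BB[sorted_ind, :] in lib/datasets/voc_eval.py fails. A minimal sketch of the usual guard:

import numpy as np

def sort_detections(BB, confidence, image_ids):
    # Skip the sort when a class produced no detections; an empty BB
    # has shape (0,) and cannot be indexed with [sorted_ind, :].
    if BB.shape[0] == 0:
        return BB.reshape(0, 4), confidence, image_ids
    sorted_ind = np.argsort(-confidence)
    return (BB[sorted_ind, :],
            confidence[sorted_ind],
            [image_ids[i] for i in sorted_ind])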

I found a bug (it seems) in the image scale factors

Dear sanghoon: ^_^
@sanghoon
I found a (possible) bug, described below:

in file test.py:
im_scale_factors.append(np.array([im_scale_x, im_scale_y, im_scale_x, im_scale_y]))

blobs['im_info'] = np.array(
[np.hstack((im_blob.shape[2], im_blob.shape[3], im_scales[0]))],
dtype=np.float32)

which means that blobs['im_info'] = [height(0), width(1), im_scale_x(2), im_scale_y(3), im_scale_x(4), im_scale_y(5)], where "(number)" is the zero-based array index

in proposal_layer.cpp's Forward_cpu (Forward_gpu may have the same issue):
// input image height & width
const Dtype img_H = p_img_info_cpu[0];
const Dtype img_W = p_img_info_cpu[1];
// scale factor for height & width
const Dtype scale_H = p_img_info_cpu[2]; <---- it seems this should be p_img_info_cpu[3]
const Dtype scale_W = p_img_info_cpu[3]; <---- it seems this should be p_img_info_cpu[2]

Even if this is a bug, it seems to have little effect, because scale_H and scale_W are always similar in value, and they only affect the minimum bounding-box size (min_box_H, min_box_W). I wonder, is it a bug?
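
To make the index layout concrete, here is a small illustrative sketch (the numeric values are hypothetical; only the ordering matters):

import numpy as np

im_blob_shape = (1, 3, 608, 1056)        # (N, C, H, W) after resizing
im_scale_x, im_scale_y = 1.65, 1.60      # hypothetical per-axis resize factors

im_scales = [np.array([im_scale_x, im_scale_y, im_scale_x, im_scale_y])]
im_info = np.hstack((im_blob_shape[2], im_blob_shape[3], im_scales[0]))

# im_info = [H, W, scale_x, scale_y, scale_x, scale_y]
#  index:    0  1     2        3        4        5
# So p_img_info_cpu[2] holds the x (width) factor and p_img_info_cpu[3]
# the y (height) factor; reading [2] as scale_H swaps them, as reported.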

train my data with pva(2 categories)

@sanghoon Dear sanghoon, I have a problem training my own data with your pva-frcnn. When I use example_train_384, my modifications are:
train.prototxt line 11: num_classes: 21 to 2
line 6511: num_output: 84 to 8. When I change this, train.py can't run: Check failed: bottom[0]->channels() == bottom[1]->channels() (8 vs. 84)
If I just keep the 84, then when the iteration reaches 9980:
File "./tools/train_net.py", line 112, in
max_iters=args.max_iters)
File "/home/amax/JS/20161123/pva-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 160, in train_net
model_paths = sw.train_model(max_iters)
File "/home/amax/JS/20161123/pva-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 111, in train_model
model_paths.append(self.snapshot())
File "/home/amax/JS/20161123/pva-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 73, in snapshot
self.bbox_stds[:, np.newaxis])
ValueError: operands could not be broadcast together with shapes (84,4096) (8,1)
I have trained Faster R-CNN (VGG, R-FCN) before. What should I do?
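
A likely explanation (a sketch based on the standard py-faster-rcnn recipe; please verify against this repo): the class-dependent sizes must be changed together. For 2 classes (background + 1 object), set num_classes: 2 both at line 11 and in the proposal layer's param_str, cls_score num_output to 2, and bbox_pred num_output to 8 (= 4 x 2). The 8-vs-84 check fails when bbox_pred is changed but the proposal layer still emits 84-channel bbox_targets. The final ValueError is the same mismatch at snapshot time, where the bbox_pred weights are un-normalized with per-class stds:

import numpy as np

num_classes = 2                                  # background + 1 object class
W = np.zeros((4 * num_classes, 4096))            # "bbox_pred" weights: (8, 4096)
# Per-class target stds; (0.1, 0.1, 0.2, 0.2) are the py-faster-rcnn defaults.
bbox_stds = np.tile([0.1, 0.1, 0.2, 0.2], num_classes)   # shape (8,)

# What SolverWrapper.snapshot() effectively does; the broadcast only works
# when the prototxt's bbox_pred num_output equals 4 * num_classes:
W_unnorm = W * bbox_stds[:, np.newaxis]          # (8, 4096) * (8, 1) -> OK
# With num_output left at 84, W is (84, 4096) and the (8, 1) stds cannot
# broadcast -> exactly the ValueError above.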
