Comments (13)
In principle, it is. Cuda-convnet uses one config file for each model.
diff imagenet.prototxt imagenet_deploy.prototxt
< name: "CaffeNet"
< layers {
< layer {
< name: "data"
< type: "data"
< source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
< meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto"
< batchsize: 256
< cropsize: 227
< mirror: true
< }
< top: "data"
< top: "label"
< }
---
> input: "data"
> input_dim: 10
> input_dim: 3
> input_dim: 227
> input_dim: 227
359,360c350,351
< name: "loss"
< type: "softmax_loss"
---
> name: "prob"
> type: "softmax"
363c354
< bottom: "label"
---
> top: "prob"
softmax_loss may need to be split into softmax plus an independent loss function.
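For illustration, a minimal sketch of what that split could look like (the layer names and the separate multinomial_logistic_loss type are assumptions here, not taken from the diffs):
layers {
  layer {
    name: "prob"
    type: "softmax"
  }
  bottom: "fc8"
  top: "prob"
}
layers {
  layer {
    name: "loss"
    type: "multinomial_logistic_loss"
  }
  bottom: "prob"
  bottom: "label"
}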
diff imagenet.prototxt imagenet_val.prototxt
6c6
< source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
---
> source: "/home/jiayq/Data/ILSVRC12/val-leveldb"
8c8
< batchsize: 256
---
> batchsize: 50
10c10
< mirror: true
---
> mirror: false
22,33d21
< weight_filler {
< type: "gaussian"
< std: 0.01
< }
< bias_filler {
< type: "constant"
< value: 0.
< }
< blobs_lr: 1.
< blobs_lr: 2.
< weight_decay: 1.
< weight_decay: 0.
84,95d71
< weight_filler {
< type: "gaussian"
< std: 0.01
< }
< bias_filler {
< type: "constant"
< value: 1.
< }
< blobs_lr: 1.
< blobs_lr: 2.
< weight_decay: 1.
< weight_decay: 0.
145,156d120
< weight_filler {
< type: "gaussian"
< std: 0.01
< }
< bias_filler {
< type: "constant"
< value: 0.
< }
< blobs_lr: 1.
< blobs_lr: 2.
< weight_decay: 1.
< weight_decay: 0.
185,196d148
< weight_filler {
< type: "gaussian"
< std: 0.01
< }
< bias_filler {
< type: "constant"
< value: 1.
< }
< blobs_lr: 1.
< blobs_lr: 2.
< weight_decay: 1.
< weight_decay: 0.
225,236d176
< weight_filler {
< type: "gaussian"
< std: 0.01
< }
< bias_filler {
< type: "constant"
< value: 1.
< }
< blobs_lr: 1.
< blobs_lr: 2.
< weight_decay: 1.
< weight_decay: 0.
265,276d204
< weight_filler {
< type: "gaussian"
< std: 0.005
< }
< bias_filler {
< type: "constant"
< value: 1.
< }
< blobs_lr: 1.
< blobs_lr: 2.
< weight_decay: 1.
< weight_decay: 0.
303,314d230
< weight_filler {
< type: "gaussian"
< std: 0.005
< }
< bias_filler {
< type: "constant"
< value: 1.
< }
< blobs_lr: 1.
< blobs_lr: 2.
< weight_decay: 1.
< weight_decay: 0.
341,352d256
< weight_filler {
< type: "gaussian"
< std: 0.01
< }
< bias_filler {
< type: "constant"
< value: 0
< }
< blobs_lr: 1.
< blobs_lr: 2.
< weight_decay: 1.
< weight_decay: 0.
359,360c263,264
< name: "loss"
< type: "softmax_loss"
---
> name: "prob"
> type: "softmax"
362a267,274
> top: "prob"
> }
> layers {
> layer {
> name: "accuracy"
> type: "accuracy"
> }
> bottom: "prob"
363a276
> top: "accuracy"
imagenet_val.prototxt has the same layers as imagenet.prototxt but does not use the optimization parameters blobs_lr, weight_decay, weight_filler, and bias_filler. It is fine for the test net to ignore these fields.
The fields source, batchsize, and mirror have conflicting values. One option is simply to add a train_ or test_ prefix to each of them, as sketched below.
The softmax_loss vs. softmax issue already appeared in the output of "diff imagenet.prototxt imagenet_deploy.prototxt".
imagenet_val.prototxt has an extra accuracy layer. A field is needed to indicate that specific layers are used only in one or two of the training, testing, and deployment stages.
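A rough sketch of the prefix idea for the conflicting fields (the train_/test_ field names below are hypothetical, not existing Caffe fields):
layers {
  layer {
    name: "data"
    type: "data"
    train_source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
    test_source: "/home/jiayq/Data/ILSVRC12/val-leveldb"
    meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto"
    train_batchsize: 256
    test_batchsize: 50
    cropsize: 227
    train_mirror: true
    test_mirror: false
  }
  top: "data"
  top: "label"
}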
Although it would be possible to define one prototxt for both, I think the current separation allows more flexibility, and it keeps the code that reads and processes the network definitions simple. That said, a network verification step (making sure training and test are compatible) would be nice. If there were a joint prototxt, the code would need to interpret it differently for each case; for instance, the deploy case would become more difficult to handle.
@Yangqing what do you think about this?
I actually like the consolidation idea - having a way to consolidate multiple protobuf files would allow us to reduce redundancy, since many of them are actually pretty similar. This being said, I don't have a good idea of how to do this within the scope of protobuf (there is no #include-type mechanism, for example). Any suggestions are welcome.
Yangqing
One idea could be to add to either the LayerConnection or LayerParameter proto (not sure which would be more natural) a field "repeated string phase" (or maybe an enum). If empty, the layer is used in all phases; if non-empty, the layer is ignored in all phases except the specified ones. Then in imagenet.prototxt we would specify, for example, two data layers with different phases:
layers { layer { name: "data" type: "data" source: "/home/jiayq/Data/ILSVRC12/train-leveldb" meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto" batchsize: 256 cropsize: 227 mirror: true } top: "data" top: "label" phase: "train" } layers { layer { name: "data" type: "data" source: "/home/jiayq/Data/ILSVRC12/val-leveldb" meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto" batchsize: 50 cropsize: 227 mirror: false } top: "data" top: "label" phase: "val" }
...
layers { layer { name: "loss" type: "softmax_loss" } bottom: "fc8" bottom: "label" phase: "train" } layers { layer { name: "prob" type: "softmax" } bottom: "fc8" top: "prob" phase: "val" phase: "deploy" } layers { layer { name: "accuracy" type: "accuracy" } bottom: "prob" bottom: "label" top: "accuracy" phase: "val" }
I like @jeffdonahue's proposal a lot. It's concise, the meaning is clear, and I don't think it would complicate net construction much. I could try for a PR next week.
Another possibility would be to separate the network architecture from the training and testing parameters/layers and then do an explicit include and merge, which would read the file named in the include_net field and merge the protobufs together.
# network.prototxt
layers {
layer {
name: "data"
type: "data"
meanfile: "/home/jiayq/ilsvrc2012_mean.binaryproto"
}
top: "data"
top: "label"
}
layers {
layer {
name: "conv1"
type: "conv"
num_output: 96
kernelsize: 11
stride: 4
}
bottom: "data"
top: "conv1"
}
layers {
layer {
name: "relu1"
type: "relu"
}
bottom: "conv1"
top: "conv1"
}
layers {
layer {
name: "pool1"
type: "pool"
pool: MAX
kernelsize: 3
stride: 2
}
bottom: "conv1"
top: "pool1"
}
layers {
layer {
name: "norm1"
type: "lrn"
local_size: 5
alpha: 0.0001
beta: 0.75
}
bottom: "pool1"
top: "norm1"
}
...
layers {
layer {
name: "fc8"
type: "innerproduct"
num_output: 1000
}
bottom: "fc7"
top: "fc8"
}
Then define the network_train.prototxt as:
# network_train.prototxt
include_net: "network.prototxt"
layers {
layer {
name: "data"
source: "/home/jiayq/caffe-train-leveldb/"
batchsize: 256
cropsize: 227
mirror: true
}
}
layers {
layer {
name: "conv1"
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0.
}
blobs_lr: 1.
blobs_lr: 2.
weight_decay: 1.
weight_decay: 0.
}
}
layers {
layer {
name: "conv2"
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 1.
}
blobs_lr: 1.
blobs_lr: 2.
weight_decay: 1.
weight_decay: 0.
}
}
...
layers {
layer {
name: "loss"
type: "softmax_loss"
}
bottom: "fc8"
bottom: "label"
}
And define the network_test.prototxt as:
# network_test.prototxt
include_net: "network.prototxt"
layers {
layer {
name: "data"
source: "/home/jiayq/caffe-val-leveldb/"
batchsize: 50
cropsize: 227
mirror: false
}
}
layers {
layer {
name: "prob"
type: "softmax"
}
bottom: "fc8"
top: "prob"
}
layers {
layer {
name: "accuracy"
type: "accuracy"
}
bottom: "prob"
bottom: "label"
top: "accuracy"
}
Additionally, we could define a default layer that contains a set of default values for a certain kind of layer; for instance, it could be used to set blobs_lr and weight_decay.
In this case a more compact definition of network_train.prototxt would be:
# network_train.prototxt
include_net: "network.prototxt"
layers {
layer {
name: "data"
source: "/home/jiayq/caffe-train-leveldb/"
batchsize: 256
cropsize: 227
mirror: true
}
}
layers {
layer {
name: "default"
type: "conv"
weight_filler {
type: "gaussian"
std: 0.01
}
bias_filler {
type: "constant"
value: 0.
}
blobs_lr: 1.
blobs_lr: 2.
weight_decay: 1.
weight_decay: 0.
}
}
layers {
layer {
name: "conv2"
bias_filler {
value: 1.
}
}
}
layers {
layer {
name: "conv4"
bias_filler {
value: 1.
}
}
}
layers {
layer {
name: "conv5"
bias_filler {
value: 1.
}
}
}
layers {
layer {
name: "fc6"
weight_filler {
std: 0.005
}
bias_filler {
value: 1.
}
}
}
layers {
layer {
name: "loss"
type: "softmax_loss"
}
bottom: "fc8"
bottom: "label"
}
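The include_net field itself would only require a small schema addition. A minimal sketch, where the message name, field numbers, and surrounding fields are assumptions rather than the real caffe.proto:
// Sketch of an include_net field on the top-level network message.
message NetParameter {
  optional string name = 1;
  repeated LayerConnection layers = 2;
  // Path to a base network definition that is read first; the layers defined
  // in this file are then merged on top of it.
  optional string include_net = 100;
}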
To me, part of the point of consolidation is to have a single definition file as in @jeffdonahue's proposal. I want as little redundancy as possible. Including also seems like more difficult logic with protobuf. I am going to hack on a single-file definition with phase.
@shelhamer you could still put what I said in one file, with one part that defines the architecture, another that defines the things specific to the training phase, and another that defines the things specific to the test phase.
I'm looking forward to seeing your proposal.
We should probably use [packed=true] for all repeated fields with basic types.
https://developers.google.com/protocol-buffers/docs/encoding#optional
There is actually a way to import other proto definitions; we should consider it.
https://developers.google.com/protocol-buffers/docs/proto#other
Also, there is a simple way to merge messages, which we could use to merge partial definitions of networks.
https://developers.google.com/protocol-buffers/docs/encoding#optional
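A small sketch of the first two points in .proto syntax (the file, message, and field names below are placeholders):
// Schema files can import definitions from other .proto files.
import "common_layers.proto";
message ExampleNetParameter {
  optional string name = 1;
  // Packed encoding applies to repeated fields of scalar numeric types.
  repeated int32 input_dim = 2 [packed = true];
}
On the third point, note that protobuf merging overwrites singular fields but concatenates repeated fields, so merging two partial network definitions would append their layers entries rather than match layers by name.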
Even if the definitions are consolidated, in the current implementation two networks are initialized in the code, one for training and one for testing, and the test net copies parameters from the train net.
I think they should be consolidated as well: with the help of a split layer, both softmax_loss and the accuracy layer can be put in the same model. In this case, the phase parameter would not be used for model initialization, but in the forward and backward functions to decide whether a computation is needed. This would save the extra memory used during the test phase.
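A sketch of what that could look like once a split layer is available (the split layer type string and the blob names are assumptions for illustration):
layers {
  layer { name: "fc8_split" type: "split" }
  bottom: "fc8"
  top: "fc8_split_0"
  top: "fc8_split_1"
}
layers {
  layer { name: "loss" type: "softmax_loss" }
  bottom: "fc8_split_0"
  bottom: "label"
}
layers {
  layer { name: "prob" type: "softmax" }
  bottom: "fc8_split_1"
  top: "prob"
}
layers {
  layer { name: "accuracy" type: "accuracy" }
  bottom: "prob"
  bottom: "label"
  top: "accuracy"
}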
@mavenlin, if your proposal is implemented, it will also speed up Solver::Test() by eliminating the memory-copy time. Why don't you create an issue?
CHECK_NOTNULL(test_net_.get())->CopyTrainedLayersFrom(net_param);
Now that the amazing SplitLayer #129 has been merged into the dev branch, is there anyone working on a solution based on it?
To be resolved by #734.