kloudkl commented:

In principle, it is possible; cuda-convnet uses one config file for each model.

diff imagenet.prototxt imagenet_deploy.prototxt
< name: "CaffeNet"
< layers {
<   layer {
<     name: "data"
<     type: "data"
<     source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
<     meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto"
<     batchsize: 256
<     cropsize: 227
<     mirror: true
<   }
<   top: "data"
<   top: "label"
< }
---
> input: "data"
> input_dim: 10
> input_dim: 3
> input_dim: 227
> input_dim: 227
359,360c350,351
<     name: "loss"
<     type: "softmax_loss"
---
>     name: "prob"
>     type: "softmax"
363c354
<   bottom: "label"
---
>   top: "prob"

softmax_loss may need to be split into softmax plus an independent loss function.
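
For instance, a sketch of the split in the same prototxt dialect (the standalone multinomial_logistic_loss layer type is an assumption here, not something this thread confirms):

layers {
  layer {
    name: "prob"
    type: "softmax"
  }
  bottom: "fc8"
  top: "prob"
}
layers {
  layer {
    # hypothetical standalone loss layer consuming the softmax output
    name: "loss"
    type: "multinomial_logistic_loss"
  }
  bottom: "prob"
  bottom: "label"
}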

diff imagenet.prototxt imagenet_val.prototxt 
6c6
<     source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
---
>     source: "/home/jiayq/Data/ILSVRC12/val-leveldb"
8c8
<     batchsize: 256
---
>     batchsize: 50
10c10
<     mirror: true
---
>     mirror: false
22,33d21
<     weight_filler {
<       type: "gaussian"
<       std: 0.01
<     }
<     bias_filler {
<       type: "constant"
<       value: 0.
<     }
<     blobs_lr: 1.
<     blobs_lr: 2.
<     weight_decay: 1.
<     weight_decay: 0.
84,95d71
<     weight_filler {
<       type: "gaussian"
<       std: 0.01
<     }
<     bias_filler {
<       type: "constant"
<       value: 1.
<     }
<     blobs_lr: 1.
<     blobs_lr: 2.
<     weight_decay: 1.
<     weight_decay: 0.
145,156d120
<     weight_filler {
<       type: "gaussian"
<       std: 0.01
<     }
<     bias_filler {
<       type: "constant"
<       value: 0.
<     }
<     blobs_lr: 1.
<     blobs_lr: 2.
<     weight_decay: 1.
<     weight_decay: 0.
185,196d148
<     weight_filler {
<       type: "gaussian"
<       std: 0.01
<     }
<     bias_filler {
<       type: "constant"
<       value: 1.
<     }
<     blobs_lr: 1.
<     blobs_lr: 2.
<     weight_decay: 1.
<     weight_decay: 0.
225,236d176
<     weight_filler {
<       type: "gaussian"
<       std: 0.01
<     }
<     bias_filler {
<       type: "constant"
<       value: 1.
<     }
<     blobs_lr: 1.
<     blobs_lr: 2.
<     weight_decay: 1.
<     weight_decay: 0.
265,276d204
<     weight_filler {
<       type: "gaussian"
<       std: 0.005
<     }
<     bias_filler {
<       type: "constant"
<       value: 1.
<     }
<     blobs_lr: 1.
<     blobs_lr: 2.
<     weight_decay: 1.
<     weight_decay: 0.
303,314d230
<     weight_filler {
<       type: "gaussian"
<       std: 0.005
<     }
<     bias_filler {
<       type: "constant"
<       value: 1.
<     }
<     blobs_lr: 1.
<     blobs_lr: 2.
<     weight_decay: 1.
<     weight_decay: 0.
341,352d256
<     weight_filler {
<       type: "gaussian"
<       std: 0.01
<     }
<     bias_filler {
<       type: "constant"
<       value: 0
<     }
<     blobs_lr: 1.
<     blobs_lr: 2.
<     weight_decay: 1.
<     weight_decay: 0.
359,360c263,264
<     name: "loss"
<     type: "softmax_loss"
---
>     name: "prob"
>     type: "softmax"
362a267,274
>   top: "prob"
> }
> layers {
>   layer {
>     name: "accuracy"
>     type: "accuracy"
>   }
>   bottom: "prob"
363a276
>   top: "accuracy"

imagenet_val.prototxt has the same layers as imagenet.prototxt but does not use the optimization parameters blobs_lr, weight_decay, weight_filler, and bias_filler. It is OK for testing to ignore these fields.

The fields source, batchsize, and mirror have conflicting values. One fix: just add a train_ or test_ prefix to each of them, as sketched below.
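
As a sketch of that suggestion (these train_/test_ field names are hypothetical; they would have to be added to caffe.proto):

layers {
  layer {
    name: "data"
    type: "data"
    # hypothetical prefixed fields resolving the train/test conflicts
    train_source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
    test_source: "/home/jiayq/Data/ILSVRC12/val-leveldb"
    train_batchsize: 256
    test_batchsize: 50
    train_mirror: true
    test_mirror: false
    meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto"
    cropsize: 227
  }
  top: "data"
  top: "label"
}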

The softmax_loss vs. softmax issue already appeared above in the output of "diff imagenet.prototxt imagenet_deploy.prototxt".
imagenet_val.prototxt also has an extra accuracy layer. It is necessary to add a field to indicate that specific layers are only used in one or two of the training, testing, and deployment stages.

sguada commented:

Although it would be possible to define one prototxt for both, I think the current separation allows more flexibility, and it makes it easy for the code to read and process the network definitions. That said, a network verification (making sure training and test are compatible) would be nice. If there were a joint prototxt, the code would need to interpret it differently for each case; for instance, the deploy case would become more difficult to handle.

@Yangqing what do you think about this?

Yangqing commented:

I actually like the consolidation idea - having a way to consolidate multiple protobuf files would allow us to reduce redundancy, since many of them are actually pretty similar. That being said, I don't have a good idea of how to do this within the scope of protobuf (it does not allow e.g. an #include-type mechanism). Any suggestions are welcome.

jeffdonahue commented:

One idea could be to add a field "repeated string phase" (or maybe an enum) to either the LayerConnection or LayerParameter proto (not sure which would be more natural). If empty, the layer is used in all phases; if specified, the layer is ignored in all phases except the specified ones. Then in imagenet.prototxt we would specify, for example, two data layers with different phases:

layers {
  layer {
    name: "data"
    type: "data"
    source: "/home/jiayq/Data/ILSVRC12/train-leveldb"
    meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto"
    batchsize: 256
    cropsize: 227
    mirror: true
  }
  top: "data"
  top: "label"
  phase: "train"
}
layers {
  layer {
    name: "data"
    type: "data"
    source: "/home/jiayq/Data/ILSVRC12/val-leveldb"
    meanfile: "/home/jiayq/Data/ILSVRC12/image_mean.binaryproto"
    batchsize: 50
    cropsize: 227
    mirror: false
  }
  top: "data"
  top: "label"
  phase: "val"
}

...

layers {
  layer {
    name: "loss"
    type: "softmax_loss"
  }
  bottom: "fc8"
  bottom: "label"
  phase: "train"
}
layers {
  layer {
    name: "prob"
    type: "softmax"
  }
  bottom: "fc8"
  top: "prob"
  phase: "val"
  phase: "deploy"
}
layers {
  layer {
    name: "accuracy"
    type: "accuracy"
  }
  bottom: "prob"
  bottom: "label"
  top: "accuracy"
  phase: "val"
}
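
For concreteness, a minimal sketch of the corresponding schema change in caffe.proto (the field number and the choice of LayerConnection over LayerParameter are assumptions, not part of the proposal):

message LayerConnection {
  optional LayerParameter layer = 1;
  repeated string bottom = 2;  // blobs this layer reads
  repeated string top = 3;     // blobs this layer writes
  // New: phases ("train", "val", "deploy") in which this layer is active;
  // an empty list means the layer is used in all phases.
  repeated string phase = 4;
}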

shelhamer commented:

I like @jeffdonahue's proposal a lot. It's concise, the meaning is clear, and I don't think it would complicate net construction much. I could try for a PR next week.

sguada commented:

Another possibility would be to separate the network architecture from the training and testing parameters/layers, and then do an explicit include and merge, which would read the file named in the include_net field and merge the protobufs together.

# network.prototxt
layers {
  layer {
    name: "data"
    type: "data"
    meanfile: "/home/jiayq/ilsvrc2012_mean.binaryproto"
  }
  top: "data"
  top: "label"
}
layers {
  layer {
    name: "conv1"
    type: "conv"
    num_output: 96
    kernelsize: 11
    stride: 4
  }
  bottom: "data"
  top: "conv1"
}
layers {
  layer {
    name: "relu1"
    type: "relu"
  }
  bottom: "conv1"
  top: "conv1"
}
layers {
  layer {
    name: "pool1"
    type: "pool"
    pool: MAX
    kernelsize: 3
    stride: 2
  }
  bottom: "conv1"
  top: "pool1"
}
layers {
  layer {
    name: "norm1"
    type: "lrn"
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
  bottom: "pool1"
  top: "norm1"
}
...
layers {
  layer {
    name: "fc8"
    type: "innerproduct"
    num_output: 1000
  }
  bottom: "fc7"
  top: "fc8"
}

Then define the network_train.prototxt as:

# network_train.prototxt
include_net: "network.prototxt"
layers {
  layer {
    name: "data"
    source: "/home/jiayq/caffe-train-leveldb/"
    batchsize: 256
    cropsize: 227
    mirror: true
  }
}
layers {
  layer {
    name: "conv1"
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.
    }
    blobs_lr: 1.
    blobs_lr: 2.
    weight_decay: 1.
    weight_decay: 0.
  }
}
layers {
  layer {
    name: "conv2"
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 1.
    }
    blobs_lr: 1.
    blobs_lr: 2.
    weight_decay: 1.
    weight_decay: 0.
  }
}
...
layers {
  layer {
    name: "loss"
    type: "softmax_loss"
  }
  bottom: "fc8"
  bottom: "label"
}

And define the network_test.prototxt as:

# network_test.prototxt
include_net: "network.prototxt"
layers {
  layer {
    name: "data"
    source: "/home/jiayq/caffe-val-leveldb/"
    batchsize: 50
    cropsize: 227
    mirror: false
  }
}
layers {
  layer {
    name: "prob"
    type: "softmax"
  }
  bottom: "fc8"
  top: "prob"
}
layers {
  layer {
    name: "accuracy"
    type: "accuracy"
  }
  bottom: "prob"
  bottom: "label"
  top: "accuracy"
}

Additionally, we could define a default layer that contains a set of default values for a certain kind of layer; for instance, it could be used to set blobs_lr and weight_decay.

In this case a more compact definition of network_train.prototxt would be:

# network_train.prototxt
include_net: "network.prototxt"
layers {
  layer {
    name: "data"
    source: "/home/jiayq/caffe-train-leveldb/"
    batchsize: 256
    cropsize: 227
    mirror: true
  }
}
layers {
  layer {
    name: "default"
    type: "conv"
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
      value: 0.
    }
    blobs_lr: 1.
    blobs_lr: 2.
    weight_decay: 1.
    weight_decay: 0.
  }
}
layers {
  layer {
    name: "conv2"
    bias_filler {
       value: 1.
    }
  }
}
layers {
  layer {
    name: "conv4"
    bias_filler {
       value: 1.
    }
  }
}
layers {
  layer {
    name: "conv5"
    bias_filler {
       value: 1.
    }
  }
}
layers {
  layer {
    name: "fc6"
    weight_filler {
      std: 0.005
    }
    bias_filler {
       value: 1.
    }
  }
}
layers {
  layer {
    name: "loss"
    type: "softmax_loss"
  }
  bottom: "fc8"
  bottom: "label"
}

shelhamer commented:

To me, part of the point of consolidation is to have a single definition file, as in @jeffdonahue's proposal. I want as little redundancy as possible. Including also seems like more difficult logic to implement with protobuf. I am going to hack on a single-file definition with phases.

sguada commented:

@shelhamer you could still put what I said in one file, with one part that defines the architecture, another that defines the things specific to the training phase, and another that defines the things specific to the test phase.

I'm looking forward to seeing your proposal.

sguada commented:

We should probably use [packed=true] for all repeated fields with basic types
https://developers.google.com/protocol-buffers/docs/encoding#optional

There is actually a way to import other proto definitions; we should probably consider this.
https://developers.google.com/protocol-buffers/docs/proto#other

Also, there is a simple way to merge messages that we could use to merge partial definitions of networks:
https://developers.google.com/protocol-buffers/docs/encoding#optional
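
For illustration, a minimal .proto sketch of both mechanisms (the file and message names are hypothetical; note that import pulls in message definitions, not prototxt network instances, and that protobuf merging concatenates repeated fields such as layers rather than keying them by name):

// partial_net.proto - hypothetical, for illustration only
import "caffe.proto";  // imports message *definitions* from another .proto file

message SolverHints {
  // packed encoding stores all values of a repeated scalar field in one
  // length-delimited record instead of one record per value
  repeated float blobs_lr = 1 [packed = true];
  repeated float weight_decay = 2 [packed = true];
}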

mavenlin commented:

Even if the definitions are consolidated, in the current implementation two networks are initialized in the code, one used for training and one for testing; the test net copies its parameters from the train net.

I think they should be consolidated as well. With the help of a split layer, both softmax_loss and accuracy can be put in the same model. In this case, the phase parameter would not be used at model initialization but rather in the forward and backward functions, to decide whether a computation is needed. This would save the extra memory used during the test phase.
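
A sketch of the consolidated tail of the network once a split layer is available (layer and blob names here are illustrative):

layers {
  layer {
    name: "fc8_split"
    type: "split"
  }
  bottom: "fc8"
  top: "fc8_split_0"
  top: "fc8_split_1"
}
layers {
  layer {
    # train phase: the backward pass runs through this path
    name: "loss"
    type: "softmax_loss"
  }
  bottom: "fc8_split_0"
  bottom: "label"
}
layers {
  layer {
    # test phase: probabilities feeding the accuracy layer
    name: "prob"
    type: "softmax"
  }
  bottom: "fc8_split_1"
  top: "prob"
}
layers {
  layer {
    name: "accuracy"
    type: "accuracy"
  }
  bottom: "prob"
  bottom: "label"
  top: "accuracy"
}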

kloudkl commented:

@mavenlin, if your proposal is implemented, it would also speed up Solver::Test() by eliminating the memory-copy time. Why don't you create an issue?

  CHECK_NOTNULL(test_net_.get())->CopyTrainedLayersFrom(net_param);

kloudkl commented:

Now that the amazing SplitLayer #129 has been merged into the dev branch, is there anyone working on a solution based on it?

shelhamer commented:

To be resolved by #734.
