
multipathnet's Introduction

MultiPath Network training code

The code provides functionality to train Fast R-CNN and MultiPath Networks in Torch-7.
Corresponding paper: A MultiPath Network for Object Detection http://arxiv.org/abs/1604.02135

If you use MultiPathNet in your research, please cite the relevant papers:

@INPROCEEDINGS{Zagoruyko2016Multipath,
    author = {S. Zagoruyko and A. Lerer and T.-Y. Lin and P. O. Pinheiro and S. Gross and S. Chintala and P. Doll{\'{a}}r},
    title = {A MultiPath Network for Object Detection},
    booktitle = {BMVC},
    year = {2016}
}

Requirements

  • Linux
  • NVIDIA GPU with compute capability 3.5+

Installation

The code depends on Torch-7, fb.python and several other easy-to-install torch packages.
To install Torch, follow http://torch.ch/docs/getting-started.html
Then install additional packages:

luarocks install inn
luarocks install torchnet
luarocks install fbpython
luarocks install class
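
A quick way to confirm these packages are installed correctly is to load them from Torch (an optional sanity check):

require 'inn'        -- provides the ROIPooling module used by the detection models
require 'torchnet'
require 'fb.python'  -- installed by the fbpython rock
require 'class'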

Evaluation relies on COCO API calls through the Python interface, because the Lua interface doesn't support evaluation. The Lua API is used to load *.json annotation files into COCO API data structures. This doesn't work for proposal files, as they are too big, so we provide proposals for SharpMask and selective search already converted to Torch format.

First, clone https://github.com/pdollar/coco:

git clone https://github.com/pdollar/coco

Then install LuaAPI:

cd coco
luarocks make LuaAPI/rocks/coco-scm-1.rockspec

And PythonAPI:

cd coco/PythonAPI
make

You might need to install Cython for this:

sudo apt-get install python-pip
sudo pip install Cython

You will have to add the path to PythonAPI to your PYTHONPATH. Note that this won't work with Anaconda, as it ships with its own libraries which conflict with Torch.
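
To double-check that both the Lua and Python COCO APIs are reachable from Torch, here is a minimal sketch (it assumes the annotation file from the Data preparation section below is already in place):

local coco = require 'coco'
local py = require 'fb.python'
-- Lua API: loads the annotation file (it gets converted to .t7 on first use)
local cocoApi = coco.CocoApi('data/annotations/instances_val2014.json')
-- Python API: the import should succeed if PYTHONPATH points at coco/PythonAPI
py.exec('import pycocotools.coco')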

EC2 installation script

Thanks to @DeegC, there is a scripts/ec2-install.sh script for quick EC2 setup.

Data preparation

The root folder should have a folder data with the following subfolders:

models/
annotations/
proposals/

The models folder should contain the AlexNet and VGG ImageNet-pretrained files downloaded from here. ResNets can reside in other locations specified by the resnet_path env variable.

The annotations folder should contain *.json files downloaded from http://mscoco.org/external. There are *.json annotation files for PASCAL VOC, MSCOCO, ImageNet and other datasets.

The proposals folder should contain *.t7 files downloaded from here. We provide selective search proposals for VOC 2007 and VOC 2012 converted from https://github.com/rbgirshick/fast-rcnn, and SharpMask proposals for COCO 2015 converted from https://github.com/facebookresearch/deepmask, which can also be used to compute proposals for new images.

Here is an example structure:

data
|-- annotations
|   |-- instances_train2014.json
|   |-- instances_val2014.json
|   |-- pascal_test2007.json
|   |-- pascal_train2007.json
|   |-- pascal_train2012.json
|   |-- pascal_val2007.json
|   `-- pascal_val2012.json
|-- models
|   |-- caffenet_fast_rcnn_iter_40000.t7
|   |-- imagenet_pretrained_alexnet.t7
|   |-- imagenet_pretrained_vgg.t7
|   `-- vgg16_fast_rcnn_iter_40000.t7
`-- proposals
    |-- VOC2007
    |   `-- selective_search
    |       |-- test.t7
    |       |-- train.t7
    |       |-- trainval.t7
    |       `-- val.t7
    `-- coco
        `-- sharpmask
            |-- train.t7
            `-- val.t7

Download selective_search proposals for VOC2007:

wget https://dl.fbaipublicfiles.com/multipathnet/proposals/VOC2007/selective_search/train.t7
wget https://dl.fbaipublicfiles.com/multipathnet/proposals/VOC2007/selective_search/val.t7
wget https://dl.fbaipublicfiles.com/multipathnet/proposals/VOC2007/selective_search/trainval.t7
wget https://dl.fbaipublicfiles.com/multipathnet/proposals/VOC2007/selective_search/test.t7

Download sharpmask proposals for COCO:

wget https://dl.fbaipublicfiles.com/multipathnet/proposals/coco/sharpmask/train.t7
wget https://dl.fbaipublicfiles.com/multipathnet/proposals/coco/sharpmask/val.t7
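
After downloading, a proposal file can be sanity-checked from Torch (a minimal sketch; the internal layout of the table is handled by the data loaders):

require 'torch'
local props = torch.load('data/proposals/coco/sharpmask/val.t7')
print(torch.type(props))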

As for the images themselves, provide the paths to VOCdevkit and COCO in config.lua.

Running DeepMask with MultiPathNet on a provided image

We provide an example of how to extract DeepMask or SharpMask proposals from an image, run the MultiPathNet recognition network to classify them, then apply non-maximum suppression and draw the detected objects.

  1. Clone the DeepMask project into the root directory:
git clone https://github.com/facebookresearch/deepmask
  2. Download a DeepMask or SharpMask network:
cd data/models
# download SharpMask based on ResNet-50
wget https://dl.fbaipublicfiles.com/deepmask/models/sharpmask/model.t7 -O sharpmask.t7
  3. Download the recognition network:
cd data/models
# download ResNet-18-based model trained on COCO with integral loss
wget https://dl.fbaipublicfiles.com/multipathnet/models/resnet18_integral_coco.t7
  4. Make sure you have the COCO validation annotations at data/annotations/instances_val2014.json

  5. Pick an image and run the script:

th demo.lua -img ./deepmask/data/testImage.jpg

And you should see the detected objects drawn on the image:

[example detection output]

See file demo.lua for details.
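
For orientation, the demo roughly does the following (a simplified sketch, assuming it is run from the repository root; see demo.lua for the actual code):

require 'cunn'
require 'inn'
require 'image'
-- load the test image shipped with DeepMask
local img = image.load('deepmask/data/testImage.jpg')
-- demo.lua loads data/models/sharpmask.t7 to generate segment proposals for img,
-- scores those proposals with data/models/resnet18_integral_coco.t7,
-- then applies non-maximum suppression and draws the surviving detections.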

Training

The repository supports training Fast-RCNN and MultiPath networks with multi-GPU data and model parallelism. Supported base models include AlexNet, VGG and ResNets (see Data preparation above).

PASCAL VOC

To train Fast-RCNN on VOC2007 trainval with the VGG base model and selective search proposals, run:

test_nsamples=1000 model=vgg ./scripts/train_fastrcnn_voc2007.sh

The resulting mAP is slightly (~2 points) higher than the original Fast-RCNN number. Note that the code is not exactly the same: we improved ROIPooling by fixing a few bugs, see szagoruyko/imagine-nn#17.

COCO

To train MultiPathNet with the VGG-16 base model on 4 GPUs, run:

train_nGPU=4 test_nGPU=1 ./scripts/train_multipathnet_coco.sh

Here is a graph visualization of the network:

[graph of the MultiPathNet architecture]

To train ResNet-18 on COCO, run:

train_nGPU=4 test_nGPU=1 model=resnet resnet_path=./data/models/resnet/resnet-18.t7 ./scripts/train_coco.sh

Evaluation

PASCAL VOC

We provide the original models from the Fast-RCNN paper converted to Torch format here.

To evaluate these models run:

model=data/models/caffenet_fast_rcnn_iter_40000.t7 ./scripts/eval_fastrcnn_voc2007.sh
model=data/models/vgg16_fast_rcnn_iter_40000.t7 ./scripts/eval_fastrcnn_voc2007.sh

COCO

Evaluate the fast ResNet-18-based network trained with integral loss on the COCO val5k split (resnet18_integral_coco.t7, 89MB):

test_nGPU=4 test_nsamples=5000 ./scripts/eval_coco.sh

It achieves 24.4 mAP using 400 SharpMask proposals per image:

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.244
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.402
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.268
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.078
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.266
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.394
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.249
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.368
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.377
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.135
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.444
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.561

multipathnet's People

Contributors

adamlerer, deegc, facebook-github-bot, szagoruyko, wanyenlo


multipathnet's Issues

install fbpython error

When I run "luarocks install fbpython", the following error happens:

Found Boost components:
thread
-- Found PythonInterp: /search/odin/adur/zhangli/usr/local/python27/bin/python2.7 (found suitable version "2.7.12", minimum required is "2.7")
CMake Error at /usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
Could NOT find PythonLibs (missing: PYTHON_LIBRARIES PYTHON_INCLUDE_DIRS)
(Required is at least version "2.7.12")
Call Stack (most recent call first):
/usr/share/cmake/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake/Modules/FindPythonLibs.cmake:198 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
CMakeLists.txt:47 (FIND_PACKAGE)

-- Configuring incomplete, errors occurred!
See also "/tmp/luarocks_fbpython-0.1-2-2147/fblualib/fblualib/python/build/CMakeFiles/CMakeOutput.log".

Could anyone help me? Thanks.

pre-trained models "sharpmask.t7" and "resnet18_integral_coco.t7" for arm platform

I want to run multipathnet on the NVIDIA TX1 (ARM architecture), but it seems the model "sharpmask.t7" was built for the x86 platform and can't be loaded. Are there pre-trained models "sharpmask.t7" and "resnet18_integral_coco.t7" for the ARM platform?

ubuntu@tegra-ubuntu:/mnt/sdcard/multipathnet$ th demo.lua
Warning: Failed to load function from bytecode: (binary): cannot load incompatible bytecodeWarning: Failed to load function from bytecode: [string "..."]:2: unexpected symbol/home/ubuntu/torch/install/bin/luajit: /home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:375: unknown object
stack traceback:
[C]: in function 'error'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:375: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:307: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/nn/Module.lua:192: in function 'read'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/nn/Module.lua:192: in function 'read'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/nn/Module.lua:192: in function 'read'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
/home/ubuntu/torch/install/share/lua/5.1/torch/File.lua:409: in function 'load'
demo.lua:36: in main chunk
[C]: in function 'dofile'
...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x004061f0

Error parsing file names in loader.lua

Residual issue from #33

On Ubuntu 14.04, Torch with Lua 5.1.

train_nGPU=2 test_nGPU=1 ./scripts/train_multipathnet_coco.sh 
{
  phase2_learningRate : -1
  resume : ""
  weightDecay : 0
  learningRate : 0.001
  step : 2800
  bg_threshold_min : 0.1
  train_set : "trainval"
  train_min_gtroi_size : 0
  test_nsamples : 1000
  train_nGPU : 2
  learningRateDecay : 0
  test_num_per_image : 100
  test_nGPU : 1
  epoch : 1
  disable_memory_efficient_forward : false
  batchSize : 64
  extra_proposals_file : ""
  year : "2014"
  model : "multipathnet"
  dampening : 0
  nEpochs : 3200
  train_remove_dropouts : false
  manualSeed : 555
  imagenet_classes : ""
  retrain : "no"
  save_folder : "logs/coco_multipathnet_sharpmask_328225696"
  criterion : "ce"
  scale : 800
  bbox_regression : 1
  images_per_batch : 4
  nDonkeys : 6
  decay : 0.1
  dataset : "coco"
  best_proposals_number : 1000
  retrain_mean_std : ""
  epochSize : 100
  phase2_step : -1
  train_nsamples : -1
  fg_threshold : 0.5
  bg_threshold_max : 0.5
  phase2_decay : -1
  method : "sgd"
  test_set : "val"
  test_best_proposals_number : 400
  snapshot : 100
  sample_n_per_box : 0
  proposal_dir : "data/proposals/"
  integral : true
  proposals : "sharpmask"
  train_min_proposal_size : 0
  checkpoint : false
  sample_sigma : 1
  phase2_epoch : -1
  max_size : 1000
  momentum : 0.9
}
not found: THNN_CudaHalfELU_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfELU_updateGradInput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfHardTanh_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfHardTanh_updateGradInput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfLeakyReLU_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfLeakyReLU_updateGradInput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfLookupTable_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfLookupTable_renorm...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfMarginCriterion_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfMarginCriterion_updateGradInput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfMultiMarginCriterion_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfMultiMarginCriterion_updateGradInput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfPReLU_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialConvolutionLocal_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialConvolutionMM_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialCrossMapLRN_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialCrossMapLRN_updateGradInput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialDilatedConvolution_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialFullConvolution_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSoftPlus_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSoftPlus_updateGradInput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSoftShrink_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSoftShrink_updateGradInput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSqrt_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfTemporalConvolution_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfThreshold_updateOutput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfThreshold_updateGradInput...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfVolumetricConvolution_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfVolumetricDilatedConvolution_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfVolumetricFullConvolution_accGradParameters...me/alexander/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
model_opt	
{
  model_foveal_exclude : -1
  model_conv345_norm : true
  model_het : true
}
Warning: Failed to load function from bytecode: binary string: bad header in precompiled chunkWarning: Failed to load function from bytecode: binary string: bad header in precompiled chunkdev	1	1	
dev	2	2	
dev	3	1	
dev	4	2	
nn.Sequential {
  [architecture cut for brevity]
}
convert: ./data/annotations/instances_train2014.json --> .t7 [please be patient]	
converting: categories	
converting: annotations	
converting: images	
convert: building indices	
convert: complete [33.86 s]	
Loading proposals at 	{
  1 : "/home/alexander/multipathnet/data/proposals/coco/sharpmask/train.t7"
  2 : "/home/alexander/multipathnet/data/proposals/coco/sharpmask/val.t7"
}
Done loading proposals	
# proposal images	123287	
# dataset images	118287	
# images	123287	
nImages	118287	
Loading proposals at 	{
  1 : "/home/alexander/multipathnet/data/proposals/coco/sharpmask/train.t7"
  2 : "/home/alexander/multipathnet/data/proposals/coco/sharpmask/val.t7"
}
Done loading proposals	
# proposal images	123287	
# dataset images	118287	
# images	123287	
nImages	118287	
/home/alexander/torch/install/bin/lua: ...alexander/torch/install/share/lua/5.1/trepl/init.lua:389: ./loaders/loader.lua:67: expected cdata for arg #1
stack traceback:
	[C]: in function 'error'
	...alexander/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
	train.lua:121: in main chunk
	[C]: in function 'dofile'
	.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: ?

It crashes on file_name = ffi.string(self.images.file_name[idx]) in loader.lua.

Could this be related in some way to it loading the same proposal data twice in a row?

Since the data in self.images.file_name[idx] appears not to be cdata -- I added a print statement to the start of the function and got a single COCO_train2014_000000057870.jpg each time it loaded the proposal data before the failure -- I tried assigning it directly to file_name without the conversion. This got the function past one image (it normally processes several thousand), but it ran into the error below (I have torchnet installed).

[ same as above ]
[ model architecture ends here ]
}
Loading proposals at 	{
  1 : "/home/alexander/multipathnet/data/proposals/coco/sharpmask/train.t7"
  2 : "/home/alexander/multipathnet/data/proposals/coco/sharpmask/val.t7"
}
Done loading proposals	
# proposal images	123287	
# dataset images	118287	
# images	123287	
nImages	118287	
FATAL THREAD PANIC: (read) ...alexander/torch/install/share/lua/5.1/torch/File.lua:343: unknown Torch class <package.torchnet>

Installation Issues

Hi,

I am having trouble installing some of the dependencies for multipathnet. For now, when installing the fbpython package for torch (with luarocks install fbpython), I am getting the following error:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
THPP_INCLUDE_DIR
   used as include directory in directory /tmp/luarocks_fbpython-0.1-2-9921/fblualib/fblualib/python
THPP_LIBRARY
    linked by target "lua_module" in directory /tmp/luarocks_fbpython-0.1-2-9921/fblualib/fblualib/python

-- Configuring incomplete, errors occurred!
See also "/tmp/luarocks_fbpython-0.1-2-9921/fblualib/fblualib/python/build/CMakeFiles/CMakeOutput.log".

Error: Build error: Failed building.

Failed to train the multipathnet model: not found: THNN_Cuda...

My old Torch could get past the step below, but failed with "not enough memory":
/usr/bin/luajit: /usr/share/lua/5.1/threads/threads.lua:183: [thread 4 callback] not enough memory

So I reinstalled a new Torch compiled with Lua 5.1, but ran into the following problem.
Can anyone tell me what happened? Thanks a lot!

hhteng@zhsw-ws04:~/facebook/multipathnet$ train_nGPU=1 test_nGPU=1 ./scripts/train_multipathnet_coco.sh
{
  phase2_learningRate : -1
  resume : ""
  weightDecay : 0
  learningRate : 0.001
  step : 2800
  bg_threshold_min : 0.1
  train_set : "trainval"
  train_min_gtroi_size : 0
  test_nsamples : 1000
  train_nGPU : 1
  learningRateDecay : 0
  test_num_per_image : 100
  test_nGPU : 1
  epoch : 1
  disable_memory_efficient_forward : false
  batchSize : 64
  extra_proposals_file : ""
  year : "2014"
  model : "multipathnet"
  dampening : 0
  nEpochs : 3200
  train_remove_dropouts : false
  manualSeed : 555
  imagenet_classes : ""
  retrain : "no"
  save_folder : "logs/coco_multipathnet_sharpmask_377121556"
  criterion : "ce"
  scale : 800
  bbox_regression : 1
  images_per_batch : 4
  nDonkeys : 6
  decay : 0.1
  dataset : "coco"
  best_proposals_number : 1000
  retrain_mean_std : ""
  epochSize : 100
  phase2_step : -1
  train_nsamples : -1
  fg_threshold : 0.5
  bg_threshold_max : 0.5
  phase2_decay : -1
  method : "sgd"
  test_set : "val"
  test_best_proposals_number : 400
  snapshot : 100
  sample_n_per_box : 0
  proposal_dir : "data/proposals/"
  integral : true
  proposals : "sharpmask"
  train_min_proposal_size : 0
  checkpoint : false
  sample_sigma : 1
  phase2_epoch : -1
  max_size : 1000
  momentum : 0.9
}
cnot found: THNN_CudaHalfELU_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfELU_updateGradInput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfHardTanh_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfHardTanh_updateGradInput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfLeakyReLU_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfLeakyReLU_updateGradInput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfLookupTable_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfLookupTable_renorm/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfMarginCriterion_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfMarginCriterion_updateGradInput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type
not found: THNN_CudaHalfMultiMarginCriterion_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfMultiMarginCriterion_updateGradInput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfPReLU_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialConvolutionLocal_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialConvolutionMM_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialCrossMapLRN_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type
not found: THNN_CudaHalfSpatialCrossMapLRN_updateGradInput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialDilatedConvolution_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSpatialFullConvolution_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSoftPlus_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSoftPlus_updateGradInput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSoftShrink_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSoftShrink_updateGradInput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfSqrt_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfTemporalConvolution_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfThreshold_updateOutput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfThreshold_updateGradInput/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfVolumetricConvolution_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfVolumetricDilatedConvolution_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
not found: THNN_CudaHalfVolumetricFullConvolution_accGradParameters/home/hhteng/torch/install/share/lua/5.1/nn/THNN.lua:108: NYI: call arg type	
dmodel_opt	
{
  model_foveal_exclude : -1
  model_conv345_norm : true
  model_het : true
}
Warning: Failed to load function from bytecode: binary string: bad header in precompiled chunkWarning: Failed to load function from bytecode: binary string: bad header in precompiled chunk/home/hhteng/torch/install/bin/lua: ...hhteng/facebook/multipathnet/models/multipathnet.lua:27: attempt to call method 'unpack' (a nil value)
stack traceback:
	...hhteng/facebook/multipathnet/models/multipathnet.lua:27: in main chunk
	[C]: in function 'dofile'
	...me/hhteng/torch/install/share/lua/5.1/paths/init.lua:84: in function 'dofile'
	train.lua:104: in main chunk
	[C]: in function 'dofile'
	.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: ?'

demo.lua error on some images

Hi,

I just started with multipathnet and I am having issues with some image files. I have tested with a couple of images plus the provided test image and everything worked fine, but some images just give this error:


/home/ubuntu/torch/install/bin/luajit: demo.lua:80: calling 'select' on bad self (empty Tensor at /home/ubuntu/torch/pkg/torch/generic/Tensor.c:380)
stack traceback:
        [C]: in function 'select'
        demo.lua:80: in main chunk
        [C]: in function 'dofile'
        ...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00406670

Here is the image in question: https://1drv.ms/i/s!Av-Yk52R4YupvKRWWmZUdNd4IAciKg

Any suggestions?
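
For reference, the error can be reproduced in isolation: calling select on an empty tensor raises an error of the same kind, which suggests that no detections survived scoring/NMS for that particular image (a hedged illustration, not the demo.lua code):

require 'torch'
local t = torch.FloatTensor()                                -- empty tensor
local ok, err = pcall(function() return t:select(1, 1) end)
print(ok, err)                                               -- false, plus the error message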

Missing dependencies

Hi,

I am using ec2-install.sh to setup a new g2.2xlarge machine. However, some of the dependencies are not installing correctly. When I try to run eval_coco (as a test), I am getting an error:

no field package.preload['cutorch']

After investigating, running "luarocks install cutorch" fails here:

CMake Error at /usr/share/cmake/Modules/FindCUDA.cmake:539 (message):
Specify CUDA_TOOLKIT_ROOT_DIR

I'm not super familiar with CUDA, so I'm wondering where I specify the cuda path and how to do so.

Thanks.

Version of Lua, Torch // demo issue

Hi everyone,

I tried to run the demo, but I had this issue when the script is loading the sharpmask model.
/usr/local/bin/luajit: /usr/local/share/lua/5.1/torch/File.lua:301: Failed to load function from bytecode: (binary): cannot load incompatible bytecode

I think it's probably because I don't have the right version of Lua or Torch.
Is that possible? If so, which versions should I install?

Thanks, Guillaume

Saving model crashes when training multipathnet

When the following code is executed at the end of an epoch while training the multipathnet model, the process crashes.

   print("Saving model to "..model_path)
   torch.save(model_path, utils.checkpoint(model))

The stack trace:

/home/samson/torch/install/bin/luajit: ./modules/ModelParallelTable.lua:357: ModelParallelTable only supports CudaTensor, not torch.FloatTensor
stack traceback:
    [C]: in function 'error'
    ./modules/ModelParallelTable.lua:357: in function 'type'
    /home/samson/torch/install/share/lua/5.1/nn/utils.lua:45: in function 'recursiveType'
    /home/samson/torch/install/share/lua/5.1/nn/utils.lua:41: in function 'recursiveType'
    /home/samson/torch/install/share/lua/5.1/nn/Module.lua:126: in function 'float'
    /data/home/samson/Repo/multipathnet/utils.lua:487: in function 'checkpoint'
    train.lua:196: in function 'save'
    train.lua:340: in function 'hooks'
    ./engines/fboptimengine.lua:79: in function 'train'
    train.lua:364: in main chunk
    [C]: in function 'dofile'
    ...mson/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
    [C]: at 0x00405d50

FYI:

torch.save(model_path, model)

is fine.

run demo.lua error in __convert:decode, Expected value but found T_END at character 1

I updated the models and annotations to make sure no files were damaged. But when I run th demo.lua, it still reports that something goes wrong in the decode function.

Here is the part that goes wrong:

convert: ./data/annotations/instances_val2014.json --> .t7 [please be patient]	
/home/xavier/torch/install/bin/luajit: /home/xavier/torch/install/share/lua/5.1/coco/CocoApi.lua:142: Expected value but found T_END at character 1
stack traceback:
	[C]: in function 'decode'
	/home/xavier/torch/install/share/lua/5.1/coco/CocoApi.lua:142: in function '__convert'
	/home/xavier/torch/install/share/lua/5.1/coco/CocoApi.lua:128: in function '__init'
	/home/xavier/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/xavier/torch/install/share/lua/5.1/torch/init.lua:87>
	[C]: in function 'CocoApi'
	./loaders/loader.lua:46: in function 'DataLoader'
	/home/xavier/multipathnet/DataSetJSON.lua:44: in function 'create'
	demo.lua:92: in main chunk
	[C]: in function 'dofile'
	...vier/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: at 0x00405d50

I need urgent help, thank you very much! I would appreciate it.
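
For reference, "Expected value but found T_END at character 1" from the JSON decoder usually means the file being converted is empty or truncated; a quick size check (a sketch, using the path from the log above):

local path = './data/annotations/instances_val2014.json'
local f = io.open(path, 'r')
if not f then
   print('file not found: ' .. path)
else
   print('file size in bytes: ' .. f:seek('end'))   -- 0 means an empty download
   f:close()
end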

Swapped test results shown in eval_fastrcnn_voc2007.sh?

Hi all,

Is it possible that the test results of eval_fastrcnn_voc2007.sh shown here are swapped between the two models? I ran the script with test_model=data/models/caffenet_fast_rcnn_iter_40000.t7 and got the results reported for the vgg16_fast_rcnn_iter_40000.t7 model:

ubuntu@ip-172-31-24-199:~/multipathnet$ scripts/eval_fastrcnn_voc2007.sh
{
  year : "2007"
  test_nsamples : -1
  test_augment : false
  proposals : "selective_search"
  test_model : "./data/models/caffenet_fast_rcnn_iter_40000.t7"
  max_size : 1000
  test_best_proposals_number : 2000
  test_save_res : ""
  test_add_nosoftmax : false
  test_save_raw : ""
  disable_memory_efficient_forward : false
  test_data_offset : -1
  proposal_dir : "./data/proposals"
  test_save_res_prefix : ""
  test_bbox_voting_nms_threshold : 0.5
  test_num_iterative_loc : 1
  scale : 600
  dataset : "pascal"
  test_min_proposal_size : 2
  test_nGPU : 4
  test_load_aboxes : ""
  test_bbox_voting : false
  test_just_save_boxes : false
  transformer : "RossTransformer"
  test_bbox_voting_score_pow : 1
  test_set : "test"
  test_use_rbox_scores : false
  test_nms_threshold : 0.3
}
[...]
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.264
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.559
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.217
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.034
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.132
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.318
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.304
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.400
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.408
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.140
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.285
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.456

(Unfortunately, I cannot cross-check this with test_model=data/models/vgg16_fast_rcnn_iter_40000.t7, as I get a CUDA out of memory error)

Attempt to concatenate a nil value

I just finished the whole stack installation (half via ebuilds, half from source) on a Gentoo system. The thing is that any Lua file I try to execute via th something.lua ends like this:

ares multipathnet # th demo.lua 
luajit: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/torch/init.lua:54: attempt to concatenate a nil value
stack traceback:
	[C]: in function 'error'
	/usr/share/lua/5.1/trepl/init.lua:389: in function 'require'
	demo.lua:15: in main chunk
	[C]: in function 'dofile'
	/usr/bin/th:150: in main chunk
	[C]: at 0x004045a0
ares multipathnet # th Tester_FRCNN.lua 
luajit: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/trepl/init.lua:389: /usr/share/lua/5.1/torch/init.lua:54: attempt to concatenate a nil value
stack traceback:
	[C]: in function 'error'
	/usr/share/lua/5.1/trepl/init.lua:389: in function 'require'
	/mnt/data/proj/neural-networks/multipathnet/utils.lua:11: in main chunk
	[C]: in function 'dofile'
	Tester_FRCNN.lua:9: in main chunk
	[C]: in function 'dofile'
	/usr/bin/th:150: in main chunk
	[C]: at 0x004045a0

I think I am missing something in general, but the error doesn't lead me to a resolution...

Thanks for any hints.

Ladislav

LuaJIT not enough memory

Hello,
I run into the following issue while running the training script for coco.

convert: ./data/annotations/instances_train2014.json --> .t7 [please be patient]
/home/fanyix/torch/install/bin/luajit: not enough memory

It seems to be related to the memory limit of LuaJIT, but I am not sure if there is any solution to it.
One potential workaround is to build my Torch against plain Lua instead of LuaJIT. But then I am not sure if I can use fbpython anymore, since the modules in fblualib seem to be designed to work with LuaJIT.

Thanks,
Fanyi

Training custom data set

Hi,
How do I train on a custom data set? How do I create the bounding box proposals and prepare the data for training?

FATAL THREAD PANIC - while training coco

I'm trying to train with the coco dataset and I run into the following errors. When attempting to train with train_multipathnet_coco.sh, I see this.

train_nGPU=2 test_nGPU=1 ./scripts/train_multipathnet_coco.sh
...
model_opt
{
model_conv345_norm : true
model_foveal_exclude : -1
model_het : true
}
/home/elliot/torch/install/bin/luajit: /home/elliot/torch/install/share/lua/5.1/nn/Sequential.lua:29: index out of range
stack traceback:
[C]: in function 'error'
/home/elliot/torch/install/share/lua/5.1/nn/Sequential.lua:29: in function 'remove'
/home/elliot/Devel/multipathnet/models/multipathnet.lua:32: in main chunk
[C]: in function 'dofile'
train.lua:104: in main chunk
[C]: in function 'dofile'
...liot/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670

When I attempt to train with train_coco.sh, I see this.

train_nGPU=1 test_nGPU=1 ./scripts/train_coco.sh
...
Loading proposals at {
1 : "/home/elliot/Devel/multipathnet/data/proposals/coco/sharpmask/train.t7"
2 : "/home/elliot/Devel/multipathnet/data/proposals/coco/sharpmask/val.t7"
}
Done loading proposals
# proposal images	123287
# dataset images	118287
# images	123287
nImages	118287
PANIC: unprotected error in call to Lua API (not enough memory)

Changing train_nGPU=1 to train_nGPU=2 yields the same output but with a different error.
FATAL THREAD PANIC: (pcall) not enough memory
FATAL THREAD PANIC: (write) not enough memory

I'm running on Ubuntu 14.04 LTS with two Titan X GPUs and 64GB of RAM.
Any ideas?

problem:th demo.lua -img ./deepmask/data/testImage.jpg

/home/jp/torch/install/bin/luajit: /home/jp/torch/install/share/lua/5.1/trepl/init.lua:389: /home/jp/torch/install/share/lua/5.1/trepl/init.lua:389: module 'cunn' not found:No LuaRocks module found for cunn
no field package.preload['cunn']
no file '/home/jp/.luarocks/share/lua/5.1/cunn.lua'
no file '/home/jp/.luarocks/share/lua/5.1/cunn/init.lua'
no file '/home/jp/torch/install/share/lua/5.1/cunn.lua'
no file '/home/jp/torch/install/share/lua/5.1/cunn/init.lua'
no file './cunn.lua'
no file '/home/jp/torch/install/share/luajit-2.1.0-beta1/cunn.lua'
no file '/usr/local/share/lua/5.1/cunn.lua'
no file '/usr/local/share/lua/5.1/cunn/init.lua'
no file '/home/jp/.luarocks/lib/lua/5.1/cunn.so'
no file '/home/jp/torch/install/lib/lua/5.1/cunn.so'
no file '/home/jp/torch/install/lib/cunn.so'
no file './cunn.so'
no file '/usr/local/lib/lua/5.1/cunn.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'error'
/home/jp/torch/install/share/lua/5.1/trepl/init.lua:389: in function 'require'
demo.lua:11: in main chunk
[C]: in function 'dofile'
...e/jp/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

multipathnet train Error

When training the multipathnet model, on the line https://github.com/facebookresearch/multipathnet/blob/master/models/multipathnet.lua#L26 we have
local data = torch.load'data/models/imagenet_pretrained_vgg.t7'
and on the line https://github.com/facebookresearch/multipathnet/blob/master/models/multipathnet.lua#L32
for i,v in ipairs{11,10,9,8,1} do classifier:remove(v) end
However, the imagenet_pretrained_vgg.t7 model's top has only 6 modules, which leads to an error on the line https://github.com/torch/nn/blob/master/Sequential.lua#L29.
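
The mismatch can be illustrated with a self-contained snippet (not the repository code): removing index 11 from an nn.Sequential that only holds 6 modules triggers the "index out of range" error in Sequential.lua:

require 'nn'
local classifier = nn.Sequential()
for i = 1, 6 do classifier:add(nn.Identity()) end   -- stand-in for the 6-module top
local ok, err = pcall(function()
   for _, v in ipairs{11, 10, 9, 8, 1} do classifier:remove(v) end
end)
print(ok, err)                                      -- false, "... index out of range"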

An error that may lead to a high precision drop.

Hi guys,

I have trained a model with the multipathnet version of frcnn on my own dataset with only one foreground class. Compared to the original frcnn model, which gets an AP of 0.93 on the same training and validation data, the multipath version only gets an AP of around 0.7.

After much debugging, I think the problem is a mistake in the coordinate preprocessing. After I changed the corresponding code, the AP went up to 0.94. I will give more details later.

train ResNet-18 on COCO, got error

Hi, I trained ResNet-18 on COCO according to README.md:
train_nGPU=1 test_nGPU=1 model=resnet resnet_path=./data/models/resnet/resnet-18.t7 ./scripts/train_coco.sh
By the way, I had installed torch7 with LuaJIT, but training ran out of memory, so I ran:
cd ~/torch;
TORCH_LUA_VERSION=LUA51 ./install.sh

I don't know whether this matters or not.

I got the following error:

Loading proposals at {
1 : "/home/sam/src/multipathnet/data/proposals/coco/sharpmask/train.t7"
2 : "/home/sam/src/multipathnet/data/proposals/coco/sharpmask/val.t7"
}
Done loading proposals
# proposal images	123287
# dataset images	118287
# images	123287
nImages	118287
Loading proposals at {
1 : "/home/sam/src/multipathnet/data/proposals/coco/sharpmask/train.t7"
2 : "/home/sam/src/multipathnet/data/proposals/coco/sharpmask/val.t7"
}
Done loading proposals
# proposal images	123287
# dataset images	118287
# images	123287
nImages	118287
/home/sam/torch/install/bin/lua: /home/sam/torch/install/share/lua/5.1/trepl/init.lua:384: ./loaders/loader.lua:39: expected cdata for arg #1
stack traceback:
[C]: in function 'error'
/home/sam/torch/install/share/lua/5.1/trepl/init.lua:384: in function 'require'
train.lua:121: in main chunk
[C]: in function 'dofile'
.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: ?

The whole output is in the attached file d.txt.

Error installing fbpython

When I try to install the fbpython library using luarocks on Ubuntu 14.04, I get the error message below:

luarocks install fbpython

Installing https://raw.githubusercontent.com/torch/rocks/master/fbpython-0.1-2.rockspec...
Using https://raw.githubusercontent.com/torch/rocks/master/fbpython-0.1-2.rockspec... switching to 'build' mode
Cloning into 'fblualib'...
remote: Counting objects: 211, done.
remote: Compressing objects: 100% (177/177), done.
remote: Total 211 (delta 25), reused 122 (delta 10), pack-reused 0
Receiving objects: 100% (211/211), 221.09 KiB | 0 bytes/s, done.
Resolving deltas: 100% (25/25), done.
Checking connectivity... done.
        git clone https://github.com/facebook/thpp/
        cd thpp/thpp; THPP_NOFB=1 ./build.sh; cd ../..

        cd fblualib/python
        cmake -E make_directory build &&
        cd build &&
        cmake -DROCKS_PREFIX=/home/usr/torch/install/lib/luarocks/rocks/fbpython/0.1-2 \
              -DROCKS_LUADIR=/home/usr/torch/install/lib/luarocks/rocks/fbpython/0.1-2/lua \
              -DROCKS_LIBDIR=/home/usr/torch/install/lib/luarocks/rocks/fbpython/0.1-2/lib \
              .. &&
        make
    
Cloning into 'thpp'...
remote: Counting objects: 458, done.
remote: Total 458 (delta 0), reused 0 (delta 0), pack-reused 458
Receiving objects: 100% (458/458), 129.19 KiB | 0 bytes/s, done.
Resolving deltas: 100% (327/327), done.
Checking connectivity... done.
If you don't have folly or thrift installed, try doing
  THPP_NOFB=1 ./build.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   129    0   129    0     0    391      0 --:--:-- --:--:-- --:--:--   405
100  618k  100  618k    0     0   756k      0 --:--:-- --:--:-- --:--:--  756k
curl: Saved to filename 'googletest-release-1.7.0.zip'
Archive:  googletest-release-1.7.0.zip
[... unzip listing for googletest-release-1.7.0 omitted ...]
-- The C compiler identification is GNU 4.8.4
-- The CXX compiler identification is GNU 4.8.4
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Found Torch7 in /home/ggarbace/torch/install
-- Performing Test HAS_NO_AS_NEEDED
-- Performing Test HAS_NO_AS_NEEDED - Success
CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
  REQUIRED_ARGS (missing: GLOG_INCLUDE_DIR GLOG_LIBRARY)
Call Stack (most recent call first):
  /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE)
  cmake/FindGlog.cmake:21 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  CMakeLists.txt:111 (FIND_PACKAGE)


-- Configuring incomplete, errors occurred!
See also "/tmp/luarocks_fbpython-0.1-2-678/fblualib/thpp/thpp/build/CMakeFiles/CMakeOutput.log".
-- The C compiler identification is GNU 4.8.4
-- The CXX compiler identification is GNU 4.8.4
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
CMake Error at /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:108 (message):
  REQUIRED_ARGS (missing: GLOG_INCLUDE_DIR GLOG_LIBRARY)
Call Stack (most recent call first):
  /usr/share/cmake-2.8/Modules/FindPackageHandleStandardArgs.cmake:315 (_FPHSA_FAILURE_MESSAGE)
  cmake/FindGlog.cmake:21 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
  CMakeLists.txt:27 (FIND_PACKAGE)


-- Configuring incomplete, errors occurred!
See also "/tmp/luarocks_fbpython-0.1-2-678/fblualib/fblualib/python/build/CMakeFiles/CMakeOutput.log".

Error: Build error: Failed building.

The error complains about the missing CMake variables GLOG_INCLUDE_DIR and GLOG_LIBRARY; however, I have installed the Google glog library on my system. Any idea how I can fix this? Thank you!
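
One possible fix (an assumption, not verified against this exact machine) is that CMake simply cannot see the glog headers and library; installing the Ubuntu development package, or pointing CMake at a custom glog prefix, may resolve it:

sudo apt-get install libgoogle-glog-dev
# if glog was built from source into a custom prefix, tell CMake where to look
CMAKE_PREFIX_PATH=/path/to/glog luarocks install fbpython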

demo.lua out of memory problem

Hi,
I'm trying to run demo.lua but it fails with an out-of-memory error.
Here is my machine configuration:
Ubuntu 16.04, i5-6500 with 8 GB RAM
GeForce GTX 950, driver version 375.39, CUDA 8.0

The error information follows:
THCudaCheck FAIL file=/home/jam/torch/extra/cutorch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
/home/jam/torch/install/bin/luajit: /home/jam/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.ParallelTable:
In 2 module of nn.Sequential:
In 2 module of nn.Sequential:
In 1 module of nn.Sequential:
In 1 module of nn.ConcatTable:
In 4 module of nn.Sequential:
/home/jam/torch/install/share/lua/5.1/nn/THNN.lua:110: cuda runtime error (2) : out of memory at /home/jam/torch/extra/cutorch/lib/THC/generic/THCStorage.cu:66
stack traceback:
[C]: in function 'v'
/home/jam/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'SpatialConvolutionMM_updateOutput'
...am/torch/install/share/lua/5.1/nn/SpatialConvolution.lua:79: in function <...am/torch/install/share/lua/5.1/nn/SpatialConvolution.lua:76>
[C]: in function 'xpcall'
/home/jam/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/jam/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/jam/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/jam/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/jam/torch/install/share/lua/5.1/nn/ConcatTable.lua:11: in function </home/jam/torch/install/share/lua/5.1/nn/ConcatTable.lua:9>
[C]: in function 'xpcall'
...
/home/jam/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/jam/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/jam/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/jam/torch/install/share/lua/5.1/nn/ParallelTable.lua:12: in function 'forward'
./ImageDetect.lua:108: in function 'memoryEfficientForward'
./ImageDetect.lua:176: in function 'detect'
demo.lua:75: in main chunk
[C]: in function 'dofile'
.../jam/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/jam/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/jam/torch/install/share/lua/5.1/nn/ParallelTable.lua:12: in function 'forward'
./ImageDetect.lua:108: in function 'memoryEfficientForward'
./ImageDetect.lua:176: in function 'detect'
demo.lua:75: in main chunk
[C]: in function 'dofile'
.../jam/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

Any help?
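
A GTX 950 has only 2 GB of VRAM, which is likely too little for the VGG-based MultiPathNet together with SharpMask in the demo. A small diagnostic sketch (not a fix) to see how much free GPU memory is actually available before the forward pass:

-- print free vs. total GPU memory before running the detector
require 'cutorch'
local freeMem, totalMem = cutorch.getMemoryUsage(cutorch.getDevice())
print(string.format('GPU memory: %.0f MB free of %.0f MB', freeMem / 2^20, totalMem / 2^20))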

problems installing fbpython and building PythonAPI

Hi,

I had some problems when installing fbpython and found the problem to be that I did not have thpp installed. Should the readme be updated to warn other users?

https://github.com/facebook/thpp

Also, I ran into issues following the step where you cd into PythonAPI and run make. It turns out I did not have Cython, so I installed it using pip:

sudo apt-get install python-pip
sudo pip install Cython

I am still trying to locate all of the data in order to finish the setup but otherwise I think everything is going well. I hope this is helpful.

Thanks,

Ryan

A guide to preparing data to train a detection model?

I notice that the training is somewhat "hard coded" to specific versions of the PASCAL VOC and COCO datasets. I'm trying to figure out the data flow and data format requirements to run training on "new" data, but I still have some problems loading and preparing data before training (even on VOC or COCO data). Could anyone give some advice to help me build up the process?

Right now I have some images and corresponding bounding-box annotations. To train on this data, I need to generate proposals (roughly 1000 per image) and put the annotations and proposals into the minimal required Torch formats (see the sketch below).

I think I will have to write some code to implement:

  • data checker / generator
  • configurations

I hope to be a contributor. ;-)
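
For reference, here is a minimal sketch of what a converted proposal file might look like. The {images, boxes} layout is an assumption based on the provided selective_search/sharpmask files, so inspect one of those .t7 files with torch.load before settling on a format:

-- hypothetical layout of a proposals .t7 file (verify against a provided file first)
require 'torch'
local proposals = {
  images = {'000001.jpg', '000002.jpg'},            -- image file names, one entry per image
  boxes  = {torch.FloatTensor(1000, 4):zero(),      -- Nx4 (x1, y1, x2, y2) boxes per image
            torch.FloatTensor(1000, 4):zero()},
}
torch.save('data/proposals/mydataset/mymethod/train.t7', proposals)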

COCO_train2014_000000256028.jpg: No such file or directory

After many tries, multipathnet begins to train (I use the command
model=resnet resnet_path=./data/models/resnet/resnet-18.t7 ./scripts/train_coco.sh
),
but it can't find COCO_train2014_000000256028.jpg.
I downloaded
http://msvocds.blob.core.windows.net/coco2014/train2014.zip
http://msvocds.blob.core.windows.net/coco2014/val2014.zip
and extracted them to
~/datasets/mscoco/

I don't know why it can't find the file; can you help me?

Training epoch 1/3200
/home/sam/torch/install/bin/lua: .../sam/torch/install/share/lua/5.1/threads/threads.lua:183: [thread 1 callback] /home/sam/torch/install/share/lua/5.1/image/init.lua:347: /home/sam/datasets/mscoco/train2014/COCO_train2014_000000256028.jpg: No such file or directory
stack traceback:
[C]: in function 'error'
/home/sam/torch/install/share/lua/5.1/image/init.lua:347: in function </home/sam/torch/install/share/lua/5.1/image/init.lua:333>
(tail call): ?
./loaders/concatloader.lua:62: in function <./loaders/concatloader.lua:60>
(tail call): ?
./BatchProviderBase.lua:19: in function 'getImages'
./BatchProviderROI.lua:128: in function 'sample'
./data.lua:64: in function <./data.lua:63>
(tail call): ?
(tail call): ?
...lua/5.1/torchnet/dataset/paralleldatasetiterator.lua:108: in function <...lua/5.1/torchnet/dataset/paralleldatasetiterator.lua:107>
(tail call): ?
[C]: in function 'xpcall'
.../sam/torch/install/share/lua/5.1/threads/threads.lua:234: in function 'callback'
...me/sam/torch/install/share/lua/5.1/threads/queue.lua:65: in function <...me/sam/torch/install/share/lua/5.1/threads/queue.lua:41>
[C]: in function 'pcall'
...me/sam/torch/install/share/lua/5.1/threads/queue.lua:40: in function 'dojob'
[string " local Queue = require 'threads.queue'..."]:15: in main chunk
stack traceback:
[C]: in function 'error'
.../sam/torch/install/share/lua/5.1/threads/threads.lua:183: in function 'dojob'
...lua/5.1/torchnet/dataset/paralleldatasetiterator.lua:179: in function '(for generator)'
./engines/fboptimengine.lua:55: in function <./engines/fboptimengine.lua:35>
(tail call): ?
train.lua:364: in main chunk
[C]: in function 'dofile'
.../torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: ?
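
The path in the error matches the extraction directory, so the most likely cause (an assumption) is an incomplete or corrupted extraction of train2014.zip. COCO train2014 should contain 82,783 images; a quick check:

ls ~/datasets/mscoco/train2014 | wc -l        # should print 82783
ls ~/datasets/mscoco/train2014/COCO_train2014_000000256028.jpg   # should exist

If the count is short or the file is missing, re-download and re-extract the archive.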

How to resume from a checkpoint?

Hello, I want to resume training from a checkpoint. I tried setting opt.checkpoint=true, and then I got this error:

/root/torch/install/bin/luajit: train.lua:227: attempt to index global 'checkpoint' (a nil value)
stack traceback:
train.lua:227: in function 'hooks'
./engines/fboptimengine.lua:50: in function 'train'
train.lua:363: in main chunk
[C]: in function 'dofile'
/root/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x004064f0
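
Until the checkpoint code path works, one workaround is to load the last saved snapshot manually and continue from it. A rough sketch with hypothetical file names; use whatever your run actually wrote to its save directory:

-- hypothetical manual-resume sketch; adjust paths to the files your run saved
require 'torch'
local model      = torch.load('logs/coco_run/model_last.t7')
local optimState = torch.load('logs/coco_run/optimState_last.t7')
-- hand these to the training engine in place of a freshly constructed model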

demo.lua run-time error: running DeepMask with MultiPathNet on the provided image

Hi,
I just started with multipathnet and I am having issues with the provided example image file.

THCudaCheck FAIL file=/tmp/luarocks_cutorch-scm-1-4692/cutorch/lib/THC/THCGeneral.c line=91 error=30 : unknown error
/media/public1/DED47AC8D47AA307/chenwenhe/objectdetection/torch/install/bin/luajit: ...jectdetection/torch/install/share/lua/5.1/trepl/init.lua:384: ...jectdetection/torch/install/share/lua/5.1/trepl/init.lua:384: ...jectdetection/torch/install/share/lua/5.1/trepl/init.lua:384: cuda runtime error (30) : unknown error at /tmp/luarocks_cutorch-scm-1-4692/cutorch/lib/THC/THCGeneral.c:91
stack traceback:
[C]: in function 'error'
...jectdetection/torch/install/share/lua/5.1/trepl/init.lua:384: in function 'require'
demo.lua:11: in main chunk
[C]: in function 'dofile'
...tion/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405d50

Any suggestions?
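
"cuda runtime error (30): unknown error" at cutorch startup is usually a driver or device-node problem rather than a multipathnet bug (an assumption based on similar reports): check that nvidia-smi works and that the driver was not updated without a reboot. A minimal sanity check that cutorch itself can see the GPU:

-- sanity check: can cutorch initialise and query the GPU at all?
require 'cutorch'
print(cutorch.getDeviceCount())
print(cutorch.getDeviceProperties(cutorch.getDevice()))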

How to generate proposals t7 files?

How can I generate train.t7 and val.t7 from deepmask output?
deepmask/evalPerImage.lua generates multiple JSON/t7 files, divided into chunks of 500 images each.
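
One option, assuming each chunk stores the same {images, boxes, scores} tables as the provided SharpMask proposal files (an assumption worth verifying on a single chunk first), is to concatenate the chunks into one file, roughly:

-- rough merge sketch; field names are an assumption, inspect one chunk first
require 'torch'
local merged = {images = {}, boxes = {}, scores = {}}
for _, f in ipairs{'chunk1.t7', 'chunk2.t7'} do      -- list your chunk files here
  local chunk = torch.load(f)
  for i = 1, #chunk.images do
    table.insert(merged.images, chunk.images[i])
    table.insert(merged.boxes,  chunk.boxes[i])
    table.insert(merged.scores, chunk.scores[i])
  end
end
torch.save('train.t7', merged)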

Proposals t7 files

Hello, how do I generate proposal t7 files from deepmask output on my data? I use the code from issue #16. When I try to train on 1 GPU I see this error:
/home/alex/torch/install/bin/luajit: ./modules/BBoxRegressionCriterion.lua:27: bad argument #1 to 'copy' (sizes do not match at /tmp/luarocks_cutorch-scm-1-4541/cutorch/lib/THC/THCTensorCopy.cu:31)
stack traceback:
[C]: in function 'copy'
./modules/BBoxRegressionCriterion.lua:27: in function 'updateOutput'
...lex/torch/install/share/lua/5.1/nn/ParallelCriterion.lua:23: in function 'forward'
./engines/fboptimengine.lua:61: in function 'train'
train.lua:363: in main chunk
[C]: in function 'dofile'
...alex/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

If I fill every bbox with the same numbers (e.g. [10 10 10 10]), training loads all CPUs to 100% but the GPU stays at 0%, which looks strange.
Thanks for any hints.

run demo.lua error

Hello, after installation I ran demo.lua and got:
/home/sam/torch/install/bin/luajit: ./deepmask/InferSharpMask.lua:67: attempt to get length of field 'refs' (a nil value)
stack traceback:
./deepmask/InferSharpMask.lua:67: in function '__init'
/home/sam/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/sam/torch/install/share/lua/5.1/torch/init.lua:87>
[C]: in function 'Infer'
./demo.lua:54: in main chunk
[C]: in function 'dofile'
.../sam/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00406670
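
A guess (not verified): InferSharpMask expects a SharpMask checkpoint with refinement modules ('refs'), so this error can appear when a plain DeepMask model is loaded instead. A quick check on whichever model file demo.lua is pointed at (the path below is hypothetical):

-- diagnostic sketch: inspect the checkpoint the demo loads
require 'torch'
local m = torch.load('data/models/sharpmask.t7')   -- adjust to the path used by demo.lua
print(m)   -- a SharpMask checkpoint should show refinement modules; a plain DeepMask one will not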

Broken download link in Data Preparation area of Readme.md

Hello,

It appears that a couple of the links named "here" in the data preparation section are not working. Below I pasted the passages I am referring to.

models folder should contain AlexNet and VGG pretrained imagenet files downloaded from here .

proposals should contain t7 files downloaded from here

Can you please provide the links?

Thanks,

Ryan

Error running demo.lua

I am having the following issue trying to run the demo script. Could this be caused by a missing dependency?

th demo.lua -img ./deepmask/data/testImage.jpg

Directories:
multipathnet/
|-->deepmask/
|-->data/

/torch/install/share/lua/5.1/nn/THNN.lua:109: wrong number of arguments for function call
stack traceback:
        [C]: in function 'v'
        ...popov/CompVision/torch/install/share/lua/5.1/nn/THNN.lua:109: in function 'SpatialMaxPooling_updateOutput'
        ...ion/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:42: in function <...ion/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:31>
        [C]: in function 'xpcall'
        .../CompVision/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
        ...CompVision/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'updateOutput'
        ./modules/NoBackprop.lua:19: in function <./modules/NoBackprop.lua:18>
        [C]: in function 'xpcall'
        .../CompVision/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
        ...CompVision/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function <...CompVision/torch/install/share/lua/5.1/nn/Sequential.lua:41>
        [C]: in function 'xpcall'
        .../CompVision/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
        ...pVision/torch/install/share/lua/5.1/nn/ParallelTable.lua:12: in function <...pVision/torch/install/share/lua/5.1/nn/ParallelTable.lua:10>
        [C]: in function 'xpcall'
        .../CompVision/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
        ...CompVision/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
        ./models/model_utils.lua:131: in function 'testModel'
        demo.lua:41: in main chunk
        [C]: in function 'dofile'
        ...sion/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
        [C]: at 0x00406670
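
"wrong number of arguments" inside THNN.lua usually means the Lua-side nn wrappers and the compiled THNN/cunn libraries are out of sync (an assumption based on similar reports), not a missing dependency. Rebuilding them so they match may help:

luarocks install nn
luarocks install cunn
luarocks install inn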

Can not reproduce the results

I run
model=data/models/caffenet_fast_rcnn_iter_40000.t7 ./scripts/eval_fastrcnn_voc2007.sh
and get:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.230
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.492
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.189
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.015
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.112
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.306
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.263
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.355
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.364
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.080
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.257
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.441

{
year : "2007"
test_nsamples : -1
test_augment : false
proposals : "selective_search"
test_model : "data/models/caffenet_fast_rcnn_iter_40000.t7"
max_size : 1000
test_best_proposals_number : 2000
test_save_res : ""
test_add_nosoftmax : false
test_save_raw : ""
disable_memory_efficient_forward : false
test_data_offset : -1
proposal_dir : "./data/proposals"
test_save_res_prefix : ""
test_bbox_voting_nms_threshold : 0.5
test_num_iterative_loc : 1
scale : 600
dataset : "pascal"
test_min_proposal_size : 2
test_nGPU : 8
test_load_aboxes : ""
test_bbox_voting : false
test_just_save_boxes : false
transformer : "RossTransformer"
test_bbox_voting_score_pow : 1
test_set : "test"
test_use_rbox_scores : false
test_nms_threshold : 0.3
}
dataset: pascal_test2007
proposals_path: {
1 : "/data/scratch/gaop/deep_learning_detection/multipathnet/data/proposals/VOC2007/selective_search/test.t7"
}

Fine tuning

How do I fine-tune your model?
I don't have sufficient data to retrain it from scratch.
I want to fine-tune it on my data, which has only two classes.
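
There is no built-in fine-tuning recipe here, but a common approach is to load a pretrained detector, replace its class-score (and bbox-regression) layers with ones sized for 2 classes + background, and train with a small learning rate. A very rough sketch with assumed sizes; run print(model) on your checkpoint to find the actual layers:

-- hypothetical fine-tuning sketch: build a 3-way classification head (2 classes + background)
require 'nn'
require 'inn'                               -- the Fast R-CNN checkpoints may contain inn layers (assumption)
local model = torch.load('data/models/caffenet_fast_rcnn_iter_40000.t7')
print(model)                                -- inspect the structure first
local nClasses = 2 + 1
local newScore = nn.Linear(4096, nClasses)  -- 4096 assumes an AlexNet/VGG fc7; adjust if different
-- swap newScore in for the existing class-score Linear layer at the position
-- print(model) reveals, then fine-tune with a reduced learning rate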

Link to model files missing

From the readme

models folder should contain AlexNet and VGG pretrained imagenet files downloaded from here

Please provide the link for the two model files.

Index error when preparing models for training.

I wanted to test training MultiPathNet (would like to train with my own classes), so I used:

train_nGPU=1 test_nGPU=1 ./scripts/train_multipathnet_coco.sh

as described on the project main page. When running, I get an error in the preparation stage in multipathnet.lua, when it removes layers from the imagenet-pretrained model.

This line:
for i,v in ipairs{11,10,9,8,1} do classifier:remove(v) end

fails with an index out of range error:

..more_directories../torch/install/share/lua/5.1/nn/Sequential.lua:29: index out of range
stack traceback:
	[C]: in function 'error'
	.../alexander/torch/install/share/lua/5.1/nn/Sequential.lua:29: in function 'remove'
	/home/alexander/multipathnet/models/multipathnet.lua:34: in main chunk
	[C]: in function 'dofile'
	train.lua:104: in main chunk
	[C]: in function 'dofile'
	...nder/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00406670

Out of curiosity, I checked, and classifier:remove(1) can only be called 7 times before going out of range (implying anything over 7 is out-of-bounds).

What might be going wrong here?
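
One way to narrow this down (a diagnostic suggestion, not a confirmed fix): just above the failing loop in models/multipathnet.lua, print the classifier that was loaded and count its modules. If it has only 7 layers where the remove indices require at least 11, the wrong ImageNet-pretrained file is probably being loaded.

-- diagnostic lines to add before the remove() loop in models/multipathnet.lua
print(classifier)              -- shows which modules are actually present
print(#classifier.modules)     -- 7 in this report, so remove(11), remove(10), ... go out of range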

Is it possible to run multipathnet on Nvidia Jetson TK1?

The compute capability of this board is 3.2, but the multipathnet requirements call for compute capability 3.5+. If I install multipathnet on this board, what will happen? Thanks!

I have not yet bought this board.

Some information about the Jetson TK1:

Tegra K1 SoC

• NVIDIA Kepler GPU with 192 CUDA cores
• NVIDIA 4-Plus-1 quad-core ARM Cortex-A15 CPU
• 2 GB x 16 memory with 64-bit width
• 16 GB 4.51 eMMC memory
• Half mini-PCIE slot 1
• Full size SD/MMC connector
• 1 USB 2.0 port, micro AB 1
• 1 Full-size HDMI port
• RS232 serial port
• 1 ALC5639 Realtek Audio codec with Mic in and Line out
• 1 RTL8111GS Realtek GigE LAN
• 1 SATA data port
• SPI 4MByte boot flash

CUDA Developer Information

• CUDA version: 6.0
• Compute capability: sm_32
• Number of CUDA cores: 192

• CUDA libraries:
cudart, cufft, cublas, curand, cusparse, npp, opencv4tegra for registered developers
Visionworks: available on request

• CUDA tools:

for local development, all the command-line tools (compiler, cuda-gdb, cuda-memcheck, command-line profiler)
for remote development, all the command-line tools and the visual tools too (NSight Eclipse Edition, Visual Profiler)

Out of memory :(

My environment is a GTX 960 with 2 GB of VRAM, Ubuntu 16.04, CUDA 8.0.

I have tried to run the demo:

$ th demo.lua -img ./deepmask/data/testImage.jpg

...

THCudaCheck FAIL file=/home/tzatter/torch/extra/cutorch/lib/THC/generic/THCStorage.cu line=40 error=2 : out of memory
/home/tzatter/torch/install/bin/luajit: /home/tzatter/torch/install/share/lua/5.1/nn/Container.lua:67:
In 6 module of nn.Sequential:
In 1 module of nn.Sequential:
In 2 module of nn.Sequential:
/home/tzatter/torch/install/share/lua/5.1/nn/CAddTable.lua:13: cuda runtime error (2) : out of memory at /home/tzatter/torch/extra/cutorch/lib/THC/generic/THCStorage.cu:40
stack traceback:
[C]: in function 'resizeAs'
/home/tzatter/torch/install/share/lua/5.1/nn/CAddTable.lua:13: in function </home/tzatter/torch/install/share/lua/5.1/nn/CAddTable.lua:9>
[C]: in function 'xpcall'
/home/tzatter/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/tzatter/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/tzatter/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/tzatter/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/tzatter/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/tzatter/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/tzatter/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/tzatter/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
./deepmask/InferSharpMask.lua:103: in function 'forward'
demo.lua:65: in main chunk
[C]: in function 'dofile'
...kasi/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405d50

WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above.
stack traceback:
[C]: in function 'error'
/home/tzatter/torch/install/share/lua/5.1/nn/Container.lua:67: in function 'rethrowErrors'
/home/tzatter/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
./deepmask/InferSharpMask.lua:103: in function 'forward'
demo.lua:65: in main chunk
[C]: in function 'dofile'
...kasi/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405d50
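
With 2 GB of VRAM, the SharpMask + MultiPathNet demo is likely to run out of memory at full resolution (an assumption consistent with the trace above, which dies inside the SharpMask forward pass). One thing to try is shrinking the input image before running the demo; a small sketch using the torch image package (the 400 px cap is arbitrary):

-- downscale the test image so its longer side is at most 400 px
require 'image'
local img = image.load('./deepmask/data/testImage.jpg')
local h, w = img:size(2), img:size(3)
local s = 400 / math.max(h, w)
if s < 1 then
  img = image.scale(img, math.floor(w * s), math.floor(h * s))
end
image.save('testImage_small.jpg', img)

Then run th demo.lua -img testImage_small.jpg and see whether it fits in memory.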

evaluate error

th demo.lua runs fine.
When I run the evaluation:
model=data/models/caffenet_fast_rcnn_iter_40000.t7 ./scripts/eval_fastrcnn_voc2007.sh
model=data/models/vgg_fast_rcnn_iter_40000.t7 ./scripts/eval_fastrcnn_voc2007.sh
or
test_nGPU=4 test_nsamples=5000 ./scripts/eval_coco.sh
I get a similar error:

proposals_path: {
  1 : "/mnt/geekvc/multipathnet/data/proposals/VOC2007/selective_search/test.t7"
}
/home/wangty/torch/install/bin/luajit: /home/wangty/torch/install/share/lua/5.1/torch/File.lua:370: table index is nil
stack traceback:
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:370: in function 'readObject'
        /home/wangty/torch/install/share/lua/5.1/nn/Module.lua:192: in function 'read'
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
        /home/wangty/torch/install/share/lua/5.1/nn/Module.lua:192: in function 'read'
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
        /home/wangty/torch/install/share/lua/5.1/nn/Module.lua:192: in function 'read'
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
        ...
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:351: in function 'readObject'
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:369: in function 'readObject'
        /home/wangty/torch/install/share/lua/5.1/torch/File.lua:409: in function 'load'
        /mnt/geekvc/multipathnet/models/model_utils.lua:273: in function 'load'
        /mnt/geekvc/multipathnet/test_runner.lua:31: in function '_setup'
        /mnt/geekvc/multipathnet/test_runner.lua:53: in function 'setup'
        run_test.lua:57: in main chunk
        [C]: in function 'dofile'
        ...ngty/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
        [C]: at 0x00406620
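
"table index is nil" inside torch/File.lua while reading a checkpoint usually means a custom layer class used by the saved model was not require'd before torch.load (an assumption based on similar Torch reports). A quick way to test this outside the evaluation scripts:

-- sketch: load the checkpoint after requiring the packages its layers may come from
require 'nn'
require 'cunn'
require 'cudnn'   -- only if the checkpoint was saved with cudnn layers (assumption)
require 'inn'     -- ROI pooling layers used by the Fast R-CNN models (assumption)
local model = torch.load('data/models/caffenet_fast_rcnn_iter_40000.t7')
print(model)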
