
pose-attention's Introduction

Multi-Context Attention for Human Pose Estimation

This repository includes Torch code for evaluation and training of the network presented in:

Xiao Chu, Wei Yang, Wanli Ouyang, Cheng Ma, Alan L. Yuille, Xiaogang Wang, Multi-Context Attention for Human Pose Estimation, CVPR, 2017. (arXiv preprint)

The code is built upon the Stacked Hourglass Network.

Installation

To run this code, the following packages must be installed (a quick load check follows the list):

  • Torch7
  • hdf5 (and the torch-hdf5 package)
  • cudnn
  • qlua (for displaying results)
  • matio (to save predictions in MATLAB's .mat format)
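
Once installed, a quick way to confirm that Torch can see these packages is to load each one from a script (a minimal sketch; matio is only needed if you export .mat predictions):

    -- sanity-check that the required packages load; run with `th check.lua`
    require 'torch'
    require 'cudnn'                -- also loads cutorch
    require 'hdf5'                 -- the torch-hdf5 bindings
    local matio = require 'matio'  -- only needed for .mat export
    print('all dependencies found')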

Testing

For testing, please go to the test directory and follow the README for instructions.

Training

For training, please go to the train directory and follow the README for instructions.

pose-attention's People

Contributors

  • bearpaw

pose-attention's Issues

A question in "pose-attention/train/models/layers/AttentionPartsCRF.lua"

Hey,
I'm trying to re-implement your method in TensorFlow, but I'm not very proficient in Torch. I'm quite confused about line 24 in AttentionPartsCRF.lua: does it just copy the same tensor (Q[itersize]) along its channel dimension numIn times?
I hope to get a reply. Thanks. @bearpaw
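
For what it's worth, replicating one attention map across channels is cheap in Torch. Here is a hypothetical stand-alone illustration (the sizes, and the variable q standing in for Q[itersize], are placeholders, not the repo's actual values):

    -- replicate a 1 x H x W attention map across numIn channels (a view, no copy)
    require 'torch'
    local numIn = 256
    local q = torch.rand(1, 64, 64)       -- stands in for Q[itersize]
    local rep = q:expand(numIn, 64, 64)   -- same map repeated on every channel
    print(rep:size())                     -- 256 x 64 x 64

torch.repeatTensor gives the same result with an actual copy when a writable tensor is needed.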

How to calculate PCKh for MPII test predictions?

Thanks to your README, I've completed the test on the MPII dataset and got a '.mat' file, which is an 11731×16×3 matrix. But the MPII README confuses me: they don't explicitly state the input format for their 'evaluatePCKh.m'.
So how can I convert the '.mat' file to compute the metric, just as shown on the leaderboard?
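
As background, PCKh@0.5 counts a predicted joint as correct when its distance to the ground truth is at most half the head-segment length for that image. A minimal Lua sketch of the metric itself (tensor shapes are assumed; this is not MPII's official evaluatePCKh.m, which expects its own annotation struct and handles invisible joints):

    -- per-joint PCKh: preds/gts are nImages x nJoints x 2, headSizes is nImages
    require 'torch'
    local function pckh(preds, gts, headSizes, thr)
       thr = thr or 0.5
       local nImg, nJoints = preds:size(1), preds:size(2)
       local correct = torch.zeros(nJoints)
       for i = 1, nImg do
          for j = 1, nJoints do
             local d = (preds[i][j] - gts[i][j]):norm()
             if d <= thr * headSizes[i] then correct[j] = correct[j] + 1 end
          end
       end
       return correct:div(nImg):mul(100)  -- percent correct per joint
    end
    -- toy usage with random data
    print(pckh(torch.rand(4, 16, 2), torch.rand(4, 16, 2), torch.ones(4)))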

train

Hi,
When I was training, the accuracy on the training set was very high, but the accuracy on the validation set was very low, about 30%. I trained for a total of 100 epochs, increased the number of iterations in the source code, and correspondingly reduced the batch size, but I can ensure that every picture has been trained. So can anyone tell me why? Thanks.

test error!

Thanks for your code! Could you give me some advice? Thanks a lot!

sun@sunwin:~/0newcodedown/pose-attention-master/test$ qlua main.lua demo
qlua: /home/sun/torch/install/share/lua/5.1/hdf5/ffi.lua:42: Error: unable to locate HDF5 header file at /usr/local/HDF_Group/HDF5/1.8.18/include;/hdf5.h
stack traceback:
[C]: at 0x7f6031a5bf50
[C]: in function 'error'
/home/sun/torch/install/share/lua/5.1/hdf5/ffi.lua:42: in function 'loadHDF5Header'
/home/sun/torch/install/share/lua/5.1/hdf5/ffi.lua:60: in main chunk
[C]: in function 'dofile'
/home/sun/torch/install/share/lua/5.1/torch/init.lua:54: in function 'include'
/home/sun/torch/install/share/lua/5.1/hdf5/init.lua:29: in main chunk
[C]: in function 'require'
/home/sun/0newcodedown/pose-attention-master/test/util.lua:7: in main chunk
[C]: in function 'dofile'
main.lua:9: in main chunk

out of memory

Hello, I downloaded your pretrained model (model.t7). But when I run 'qlua main.lua demo', the terminal gives me an error:

THCudaCheck FAIL file=/home/zhangjiqing/torch/extra/cutorch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
qlua: ...zhangjiqing/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
In 1 module of nn.ConcatTable:
In 1 module of nn.Sequential:
/home/zhangjiqing/torch/install/share/lua/5.1/nn/THNN.lua:110: cuda runtime error (2) : out of memory at /home/zhangjiqing/torch/extra/cutorch/lib/THC/generic/THCStorage.cu:66

Is it just the GPU running out of memory? Can I set the batch size to a smaller value while still using your model.t7? Or could you suggest a small batch size for me?
Thanks!
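
Generally, that THCudaCheck failure means the GPU ran out of memory during the forward pass; the serialized model.t7 does not fix the batch size, so pushing fewer images through per forward call should help. A generic sketch of chunked inference (the toy model and sizes are placeholders, not this repo's test code):

    -- bound GPU memory by running inference in small chunks
    require 'cunn'
    local model  = nn.Sequential()
       :add(nn.SpatialConvolution(3, 8, 3, 3, 1, 1, 1, 1)):cuda()
    local inputs = torch.rand(16, 3, 256, 256)  -- stands in for the test images
    local batch  = 2                            -- lower this if memory runs out
    for i = 1, inputs:size(1), batch do
       local n   = math.min(batch, inputs:size(1) - i + 1)
       local out = model:forward(inputs:narrow(1, i, n):cuda()):float()
       -- `out` is now on the CPU; accumulate or save it here
       collectgarbage()                         -- release the freed GPU buffers
    end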

Multi-Resolution Attention

Hi,
In your paper, multi-resolution attention maps are generated from features at different scales. But in your training script, only the original resolution generates an attention map. Is this my misunderstanding?

Out of memory problem

I see that train.sh uses 4×12 GB GPUs for training, but I only have one 12 GB GPU (a 1080 Ti), so how can I reduce the GPU memory cost?

I tried changing '-nGPU 4' to '-nGPU 1' and adding '-trainBatch 4' in train.sh, but it doesn't seem to work: the log keeps printing 'converting the model to CUDA' again and again, like:

[screenshot of the repeated 'converting the model to CUDA' log output]

So what should I do? Thanks.

confused about refined feature map

hi,
I'm confused about the refined feature map. Should it be h2^att rather than h1^att?
In your paper, Eq. 10 describes the refined feature h2^att as generated from h1^att, but in Section 5.2.4 only h1^att is used as the refined feature for the final prediction.

[screenshots of the relevant equations from the paper]

This also conflicts with Figure 4, in which h2^att is the refined feature used for heatmap prediction.

[screenshot of Figure 4]

If h1^att is the refined feature, what is h2^att used for?

Thank you~

qlua

Hi,

How do I install qlua on Ubuntu?

test bug

Great work!
Thank you for sharing this code.
I encountered a problem when testing your code via "main.lua". I ran the command "qlua main.lua demo" in a terminal on Ubuntu. The bug is as follows:

HDF5-DIAG: Error detected in HDF5 (1.8.11) thread 139704096257792:
#000: H5F.c line 586 in H5Fopen(): interface initialization failed
major: Function entry/exit
minor: Unable to initialize object
#001: H5F.c line 112 in H5F__init_pub_interface(): unable to initialize interface
major: File accessibilty
minor: Unable to initialize object
#002: ../../../src/H5I.c line 344 in H5I_register_type(): invalid hash size
major: Object atom
minor: Out of range
qlua: /home/yzl/torch/install/share/lua/5.2/hdf5/file.lua:12: HDF5File: fileID -1 is not valid
stack traceback:
[C]: in ?
[C]: in function 'error'
/home/yzl/torch/install/share/lua/5.2/hdf5/file.lua:12: in function '__init'
/home/yzl/torch/install/share/lua/5.2/torch/init.lua:91: in function </home/yzl/torch/install/share/lua/5.2/torch/init.lua:87>
[C]: in function 'HDF5File'
/home/yzl/torch/install/share/lua/5.2/hdf5/file.lua:147: in function </home/yzl/torch/install/share/lua/5.2/hdf5/file.lua:145>
(...tail calls...)
.../AI_Challenge/skeleton/Code/pose-attention/test/util.lua:19: in function 'loadAnnotations'
main.lua:113: in main chunk
HDF5: infinite loop closing library
D,T,FD,P,FD,P,FD,P,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E,E

Thank you!

About the batch size and the log.

Hi @bearpaw,
Very nice work!
I could not find the batch size for the MPII and LSP datasets in your CVPR 2017 paper.
Would you please provide your train.log and valid.log for reference, along with the batch size and the train/valid iterations per epoch for the MPII and LSP datasets used in the paper?

Thank you.
Feng.

A few doubts about the model

Hi,
I have a few doubts about the model creation:

  • First, is there any specific reason for having more residual modules before the stacked hourglasses compared to Newell's model? (He uses only 3 residual modules, while the hg-attention model uses 6.) Do the extra residual modules improve performance?

  • Second, if I want to create a model with fewer stacks (say nStacks = 2), would I need to change the implementation, since you use a different attention mechanism for the later stacks, as in the following lines? (A toy sketch of the issue follows below.)

    if i > 4 then
       att = AttentionPartsCRF(opt.nFeats, ll2, opt.LRNKer, 3, 0)
       tmpOut = AttentionPartsCRF(opt.nFeats, att, opt.LRNKer, 3, 1)

Thank You
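
A toy sketch of why that hard-coded guard matters (the identifiers below are placeholders, not the repo's real code): with 2 stacks the `i > 4` branch is never taken, so every stack would use the single-attention path.

    -- toy demonstration: with nStack = 2 the `i > 4` branch never fires
    local nStack = 2
    for i = 1, nStack do
       if i > 4 then
          print(('stack %d: two-step attention'):format(i))  -- never reached
       else
          print(('stack %d: single attention'):format(i))
       end
    end

Whether the guard should become something like i > nStack/2 for other stack counts is a design choice the paper does not pin down; that generalization is an assumption, not the authors' stated rule.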
