Official implementation of ACTION-Net: Multipath Excitation for Action Recognition (CVPR'21).
- EgoGesture data folder structure

```
|-frames
|---Subject01
|------Scene1
|---------Color1
|------------rgb1
|---------------000001.jpg
......
|-labels
|---Subject01
|------Scene1
|---------Group1.csv
......
```
- Something-Something V2 data folder structure

```
|-frames
|---1
|------000001.jpg
|------000002.jpg
|------000003.jpg
......
```
- Jester data folder structure

```
|-frames
|---1
|------000001.jpg
|------000002.jpg
|------000003.jpg
......
```
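Based on the trees above, a frame path can be assembled as sketched below. The function names and the `root` argument are illustrative (not part of this repo's API); the six-digit zero-padding follows the filenames shown in the listings.

```python
import os

def flat_frame_path(root, video_id, frame_idx):
    """Path of one frame in the flat layout used by
    Something-Something V2 and Jester, e.g. frames/1/000001.jpg."""
    return os.path.join(root, "frames", str(video_id), f"{frame_idx:06d}.jpg")

def ego_frame_path(root, subject, scene, color, rgb, frame_idx):
    """Path of one RGB frame in the nested EgoGesture layout,
    e.g. frames/Subject01/Scene1/Color1/rgb1/000001.jpg."""
    return os.path.join(root, "frames", subject, scene, color, rgb,
                        f"{frame_idx:06d}.jpg")

print(flat_frame_path("data", 1, 1))
print(ego_frame_path("data", "Subject01", "Scene1", "Color1", "rgb1", 1))
```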
Dependencies are provided in action.Dockerfile.
Annotation files are available at this link. Please follow the annotation files to construct the frame paths.
Run `sh train_ego_8f.sh 0,1,2,3` if you use four GPUs (the argument is the comma-separated list of GPU ids).
Our code is built on the previous repos TSN, TSM, and TEA.
We do not currently provide pretrained models, since we restructured the code and renamed the ACTION modules for this public release. Training with the code above should yield performance similar to that reported in the paper.