
HIerarchical Prototype Explainer (HIPE)

This repository contains the official implementation of our work Hierarchical Explanations for Video Action Recognition.

Abstract: To interpret deep neural networks, one main approach is to dissect the visual input and find the prototypical parts responsible for the classification. However, existing methods often ignore the hierarchical relationship between these prototypes, and thus cannot explain semantic concepts at both the higher level (e.g., water sports) and the lower level (e.g., swimming). In this paper, inspired by the human cognitive system, we leverage hierarchical information to deal with uncertainty: when we observe water and human activity but no definitive action, the video can be recognized as the water sports parent class; only after observing a person swimming can we refine it to the swimming action. To this end, we propose the HIerarchical Prototype Explainer (HIPE) to build hierarchical relations between prototypes and classes. HIPE enables a reasoning process for video action classification by dissecting the input video frames on multiple levels of the class hierarchy; our method is also applicable to other video tasks. The faithfulness of our method is verified by reducing the accuracy-explainability trade-off on ActivityNet and UCF-101 while providing multi-level explanations.

Overview of HIPE:

Visual Examples

Single Level


Leftmost: Original video. Second: Parts of the original video that are highly activated by the prototype. Third: Saliency map showing the regions of the original video that are highly activated by the prototype. Fourth: Training videos the prototypes come from. Rightmost: Prototypes.

Hierarchical

Grandparent Level: Human-object interaction


Parent Level: Self-grooming


Child Level: Blow dry


Leftmost: Original video. Second: Parts of the original video that are highly activated by the prototype. Third: Saliency map showing the regions of the original video that are highly activated by the prototype. Fourth: Training videos the prototypes come from. Rightmost: Prototypes.

Dataset Preparation

UCF-101

  • Download videos and train/test splits here.
  • Convert the videos from .avi files to .jpg frames using util_scripts/generate_video_jpgs.py
python -m util_scripts.generate_video_jpgs avi_video_dir_path jpg_video_dir_path ucf101
  • Generate an ActivityNet-style annotation file in JSON format using util_scripts/ucf101_json.py
    • annotation_dir_path includes classInd.txt, trainlist0{1, 2, 3}.txt, testlist0{1, 2, 3}.txt
python -m util_scripts.ucf101_json annotation_dir_path jpg_video_dir_path dst_json_path
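An ActivityNet-style annotation file is, roughly, a JSON object with a class-label list and a per-video database keyed by video ID. The sketch below is illustrative only; the exact field names produced by util_scripts/ucf101_json.py may differ.

```python
import json

# Hypothetical sketch of an ActivityNet-style annotation structure; the exact
# field names written by util_scripts/ucf101_json.py may differ.
annotation = {
    "labels": ["ApplyEyeMakeup", "BreastStroke"],    # class names from classInd.txt
    "database": {
        "v_BreastStroke_g01_c01": {
            "subset": "training",                    # or "validation"
            "annotations": {"label": "BreastStroke"},
        },
        "v_ApplyEyeMakeup_g01_c01": {
            "subset": "validation",
            "annotations": {"label": "ApplyEyeMakeup"},
        },
    },
}

# Round-trip through JSON and filter the training videos by subset.
db = json.loads(json.dumps(annotation))["database"]
train_ids = [vid for vid, meta in db.items() if meta["subset"] == "training"]
print(train_ids)  # ['v_BreastStroke_g01_c01']
```

Keeping all splits in one file this way lets the data loader select the subset at run time instead of maintaining separate annotation files.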

Hierarchical UCF-101

  • We define a hierarchy for UCF-101 with 5, 20, and 101 classes at levels one, two, and three, respectively. The classes at the third level are the 101 original classes of the dataset. The full hierarchy is included in the file UCF-101_hierarchy.csv
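A hierarchy file of this shape can be read into a child-to-ancestors mapping. The rows and column names below are hypothetical; the actual layout of UCF-101_hierarchy.csv may differ.

```python
import csv
import io

# Hypothetical three-level hierarchy rows in the spirit of UCF-101_hierarchy.csv;
# the real file's column layout may differ.
csv_text = """level1,level2,level3
Human-object interaction,Self-grooming,BlowDryHair
Sports,Water sports,BreastStroke
"""

# Map each leaf (level-three) action to its grandparent and parent classes.
ancestors = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    ancestors[row["level3"]] = (row["level1"], row["level2"])

print(ancestors["BlowDryHair"])  # ('Human-object interaction', 'Self-grooming')
```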

Pre-trained Models

ResNet-3D Backbone

Pre-trained 3D-ResNet models are available here. In our experiments we used r3d18_K_200ep.pth, trained on Kinetics-700 (K), and fine-tuned it on UCF-101.

Hierarchical Action Embeddings

We computed the hierarchical action embeddings for the hierarchy we define for UCF-101 in UCF-101_hierarchy.csv, following Teng et al. The precomputed hyperbolic action embeddings are provided in the file UCF101_two_level_emb.pth
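For background, hyperbolic embeddings of this kind live in the Poincaré ball, where distances grow rapidly near the boundary, which suits tree-like class hierarchies. The function below is a generic sketch of the standard Poincaré distance, not code from this repository.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-12):
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u * u)) * (1.0 - np.sum(v * v))
    return np.arccosh(1.0 + 2.0 * sq_dist / (denom + eps))

a = np.array([0.1, 0.2])   # example embedding inside the unit ball
b = np.array([0.4, -0.3])  # another example embedding

print(poincare_distance(a, a))  # 0.0: a point is at zero distance from itself
print(poincare_distance(a, b))  # symmetric, positive distance
```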

Running the Code

  • To train the model
python main.py --root_path ~/data --video_path ~/UCF-101-JPEG --annotation_path ucf101_01.json \
--result_path results --dataset ucf101 --model resnet \
--model_depth 18 --n_classes 101 --batch_size 128 --n_threads 4 --checkpoint 5
  • Continue training from epoch 101 (results/save_100.pth is loaded.)
python main.py --root_path ~/data --video_path ~/UCF-101-JPEG --annotation_path ucf101_01.json \
--dataset ucf101 --resume_path results/save_100.pth \
--model_depth 18 --n_classes 101 --batch_size 128 --n_threads 4 --checkpoint 5
  • Calculate the top-5 class probabilities of each video using a trained model (results/save_200.pth.)
    Note that inference_batch_size should be small, because the actual batch size is inference_batch_size * (n_video_frames / inference_stride).
python main.py --root_path ~/data --video_path ~/UCF-101-JPEG  --annotation_path ucf101_01.json \
--result_path results --dataset ucf101 --resume_path results/save_200.pth \
--model_depth 18 --n_classes 101 --n_threads 4 --no_train --no_val --inference --output_topk 5 --inference_batch_size 1
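The batch-size note above can be made concrete with hypothetical numbers (example values only, not defaults of main.py):

```python
# Example values only: the effective batch size during inference grows with
# the number of clips sampled from each video.
inference_batch_size = 1
n_video_frames = 160     # frames in one video (hypothetical)
inference_stride = 16    # stride between sampled clips (hypothetical)

effective_batch = inference_batch_size * (n_video_frames // inference_stride)
print(effective_batch)  # 10 clips go through the network per video
```

This is why a seemingly tiny inference_batch_size of 1 can still saturate GPU memory on long videos.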
  • Perform inference/validation by calculating the top-1 video accuracy of a recognition result (/results/val.json). Note that this is video-level accuracy; for some datasets, video-level and clip-level accuracies differ substantially.
python -m util_scripts.eval_accuracy ucf101_01.json /results/val.json --subset val -k 1 --ignore
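What top-k video accuracy amounts to can be sketched with made-up predictions; this is an illustration of the metric only, and does not reproduce eval_accuracy's actual I/O or field names.

```python
# Made-up predictions: each video ID maps to class names ranked by score,
# mirroring what a recognition result like val.json contains.
predictions = {
    "v_001": ["Swimming", "Diving", "Surfing"],
    "v_002": ["BlowDryHair", "BrushingTeeth", "ApplyLipstick"],
    "v_003": ["Diving", "Swimming", "Surfing"],
}
ground_truth = {"v_001": "Swimming", "v_002": "BlowDryHair", "v_003": "Swimming"}

def top_k_accuracy(preds, gt, k=1):
    """Fraction of videos whose true label appears in the top-k predictions."""
    hits = sum(gt[vid] in ranked[:k] for vid, ranked in preds.items())
    return hits / len(preds)

print(top_k_accuracy(predictions, ground_truth, k=1))  # 2 of 3 videos correct
```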

Qualitative Analysis

During training, the prototypes for each class are stored in the img folder at each push epoch.

  • Run python local_analysis.py to find the closest prototypes to the test images at the child level.

  • Run python local_analysis_parents.py to find the closest prototypes to the test images at the ancestor levels.
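A generic sketch of what "closest prototype" means in a prototype-based explainer: compare a clip's feature patches against learned prototype vectors by L2 distance and report the best match. The shapes and random features below are illustrative only, not code from this repository.

```python
import numpy as np

# Illustrative shapes: 10 learned prototypes and 49 (7x7) spatial feature
# patches extracted from one test clip, all 128-dimensional.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(10, 128))
patch_features = rng.normal(size=(49, 128))

# L2 distance from every patch to every prototype, then the best pair overall.
dists = np.linalg.norm(patch_features[:, None, :] - prototypes[None, :, :], axis=-1)
best_patch, best_proto = np.unravel_index(np.argmin(dists), dists.shape)
print(best_proto)  # index of the prototype most activated by this clip
```

Upsampling the per-patch distance map back to frame resolution is what produces the saliency-style visualizations shown earlier.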

BibTeX

If you find this work useful in your research, please consider citing

@article{gulshad2023hierarchical,
  title={Hierarchical Explanations for Video Action Recognition},
  author={Gulshad, Sadaf and Long, Teng and van Noord, Nanne},
  journal={arXiv preprint arXiv:2301.00436},
  year={2023}
}

Acknowledgements
