
DDM-Net (CVPR 2022)

This repo holds the code for the paper "Progressive Attention on Multi-Level Dense Difference Maps for Generic Event Boundary Detection", accepted to CVPR 2022.

News

[2022.5.8] The code is available now.
[2022.3.3] DDM-Net is accepted to CVPR 2022.
[2021.11.16] Our DDM-Net ranks 1st on the leaderboard of LOVEU@CVPR 2021, outperforming the winning solution of the LOVEU Challenge 2021.

Overview

This paper presents a modular framework for the task of generic event boundary detection (GEBD). To perceive diverse temporal variations and learn complex semantics of generic event boundaries, our method progressively attends to multi-level dense difference maps (DDM). Thanks to holistic temporal modeling and joint feature learning across modalities, our DDM-Net outperforms the previous state-of-the-art methods by a large margin on the Kinetics-GEBD and TAPOS benchmarks. In addition, our method outperforms the winning solutions of the LOVEU Challenge@CVPR 2021, further demonstrating the efficacy of DDM-Net.
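At the core of the method is the dense difference map: pairwise feature differences between all frame pairs of a clip, so that temporal variation is captured at every offset rather than only between adjacent frames. The sketch below illustrates the idea only; the function name and shapes are illustrative assumptions, not the repo's actual implementation:

```python
import numpy as np

def dense_difference_map(features):
    """Build a dense difference map for one clip.

    features: (T, C) array of per-frame features.
    Returns a (T, T, C) map whose (i, j) entry is the difference
    between the features of frames i and j.
    """
    # Broadcasting over both frame axes yields all pairwise differences.
    return features[:, None, :] - features[None, :, :]

# Toy clip: 4 frames with 2-D features.
feats = np.arange(8, dtype=np.float32).reshape(4, 2)
ddm = dense_difference_map(feats)
print(ddm.shape)  # (4, 4, 2)
```

In DDM-Net these maps are built at multiple feature levels and then read out with progressive attention.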

Dependencies

Python 3.7 or higher
PyTorch 1.6 or higher
einops
ipdb

Guide

Please refer to GUIDE for preparing input data and generating boundary predictions.

Performance

Dataset        F1@0.05   F1@0.25   F1@0.5   Avg F1   checkpoint   pickle
Kinetics-GEBD  76.43%    88.70%    90.16%   87.26%   ckpt         pkl

DDM-Net performance on Kinetics-GEBD
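F1 at a relative-distance (Rel.Dis.) threshold is the standard Kinetics-GEBD metric: a predicted boundary counts as correct when it falls within threshold × video duration of a not-yet-matched ground-truth boundary. A minimal sketch of that protocol follows (greedy matching; the official benchmark script may differ in details, so treat this as an approximation):

```python
def f1_at_rel_dis(pred, gt, duration, rel_dis=0.05):
    """F1 score for boundary detection at a relative-distance threshold.

    pred, gt: boundary timestamps in seconds; duration: video length.
    A prediction is a hit if it lies within rel_dis * duration of a
    ground-truth boundary that has not been matched yet.
    """
    threshold = rel_dis * duration
    matched = [False] * len(gt)
    tp = 0
    for p in sorted(pred):
        for i, g in enumerate(gt):
            if not matched[i] and abs(p - g) <= threshold:
                matched[i] = True
                tp += 1
                break
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Both predictions fall within 0.05 * 10 s = 0.5 s of a ground truth.
print(f1_at_rel_dis([1.0, 4.9], [1.2, 5.1], duration=10.0))  # 1.0
```

The Avg F1 column in the table above averages this score over thresholds 0.05 through 0.5.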

Training

Use tools/train.sh to train DDM-Net.

python DDM-Net/train.py \
--dataset kinetics_multiframes \
--train-split train \
--val-split val \
--num-classes 2 \
--batch-size 16 \
--n-sample-classes 2 \
--n-samples 16 \
--lr 0.00001 \
--warmup-epochs 0 \
--epochs 5 \
--decay-epochs 2 \
--model multiframes_resnet \
--pin-memory \
--sync-bn \
--amp \
--native-amp \
--distributed \
--eval-metric loss \
--log-interval 50 \
--port 16580 \
--eval-freq 1

Testing

Run inference with tools/test.sh.

python DDM-Net/test.py \
--dataset kinetics_multiframes \
--val-split val \
-b 128 \
--resume checkpoint.pth.tar

Citation

If you find DDM-Net useful in your research, please cite us using the following entry:

@InProceedings{Tang_2022_CVPR,
    author    = {Tang, Jiaqi and Liu, Zhaoyang and Qian, Chen and Wu, Wayne and Wang, Limin},
    title     = {Progressive Attention on Multi-Level Dense Difference Maps for Generic Event Boundary Detection},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {3355-3364}
}

Acknowledgement

We especially thank the contributors of GEBD, RepNet, TSM, and DETR for providing helpful code.

Thanks to Fengyuan Shi and Xun Jiang for their help.

Contact

Jiaqi Tang: [email protected]


ddm's Issues

How to apply this model to my own dataset

I don't understand the k400_train_raw_annotation.pkl and k400_val_raw_annotation.pkl files; in particular, I don't see why they include f1_consis.
If I want to use my own data, how should I prepare it for training?

CSN

Will you open-source the CSN+DDM-Net model?

Could you share a training log file?

Thank you very much for open-sourcing this valuable work. I am reproducing the training process, but the loss seems to decrease very slowly: after 2 epochs it has only dropped from about 11 to about 10. Is this normal? I am training on Kinetics-GEBD.

Could you share a training log file?

balanced sampler issue

When I try to use the --balance-batch option, there is a problem.

In MultiFDataset:

    def _get_training_samples(self, index):
        indices = []
        for class_ in self.labels_set:
            real_index = self.label_to_indices[class_][int(index * self.ratios[class_])]
            indices.append(real_index)
        return indices

At int(index * self.ratios[class_]), the computed index can run past the end of self.label_to_indices[class_].
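One possible fix is to clamp the scaled index so it never runs past the end of a class's index pool. The sketch below is standalone for illustration (the dataset's attributes are passed in explicitly) and is an assumption about the intended behavior, not the repo's actual fix:

```python
def get_training_samples(index, labels_set, label_to_indices, ratios):
    """Pick one sample per class for a balanced batch, clamping the
    scaled position so it stays inside each class's index pool."""
    indices = []
    for class_ in labels_set:
        pool = label_to_indices[class_]
        # Clamp to the last valid position instead of indexing out of range.
        pos = min(int(index * ratios[class_]), len(pool) - 1)
        indices.append(pool[pos])
    return indices

# With unbalanced pools, a large index no longer overruns the smaller class.
samples = get_training_samples(
    index=4,
    labels_set=[0, 1],
    label_to_indices={0: [10, 11, 12, 13, 14], 1: [20, 21]},
    ratios={0: 1.0, 1: 0.4},
)
print(samples)  # [14, 21]
```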

Where are the datasets?

"Download the videos listed in the Kinetics-GEBD annotations. Note that the videos in Kinetics-GEBD are a subset of the Kinetics-400 dataset, so you can either download the whole Kinetics-400 dataset or only the part used in Kinetics-GEBD."

Sorry, I only found some pkl files on the download page, but no datasets.

corrupted videos

In the validation set, IjFrO11sQng.mp4 and SaJWnqViSLo.mp4 are corrupted. Could you provide these two videos? Thanks.

Evaluation performance on GEBD validation set

Hi, this project is great and thanks for releasing the code!
I've re-trained DDM-Net, and the evaluation result on the GEBD val set is as follows, around 2% lower than the reported result.

GEBD performance on Kinetics-GEBD (re-trained):

+----------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| Rel.Dis. |  0.05  |  0.10  |  0.15  |  0.20  |  0.25  |  0.30  |  0.35  |  0.40  |  0.45  |  0.50  |  Avg   |
+----------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| F1       | 0.7447 | 0.8252 | 0.8496 | 0.8615 | 0.8679 | 0.8722 | 0.8750 | 0.8774 | 0.8796 | 0.8817 | 0.8535 |
+----------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+

I've also tried loading the trained weights you released and running the evaluation again; the result is still around 2% lower:

GEBD performance on Kinetics-GEBD (released weights):

+----------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| Rel.Dis. |  0.05  |  0.10  |  0.15  |  0.20  |  0.25  |  0.30  |  0.35  |  0.40  |  0.45  |  0.50  |  Avg   |
+----------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+
| F1       | 0.7462 | 0.8234 | 0.8462 | 0.8578 | 0.8642 | 0.8684 | 0.8715 | 0.8739 | 0.8758 | 0.8776 | 0.8505 |
+----------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+--------+

I would really appreciate any insights on possible reasons for this. Thanks a lot!

The size of tensor a (10) must match the size of tensor b (11)

I used the video __NrybzYzUg_000415_000425.mp4 and followed guide.md to prepare the data.
I ran test.py with Namespace(batch_size=128, data_dir='', dataset='kinetics_multiframes', model='multiframes_resnet', no_resume_opt=False, num_classes=2, pred_output='./multif-pred_outputs', rank=0, resume='../checkpoint.pth.tar', train_split='train', val_split='val') and got the following error, and I don't know the reason:
Traceback (most recent call last):
  File "D:\DFL_BASE\DDM-main\DDM-Net\test.py", line 162, in <module>
    main()
  File "D:\DFL_BASE\DDM-main\DDM-Net\test.py", line 115, in main
    outps, _, _ = model(inps.cuda(non_blocking=True))
  File "C:\ProgramData\Anaconda3\envs\DDM\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\DDM\lib\site-packages\torch\nn\parallel\data_parallel.py", line 159, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "C:\ProgramData\Anaconda3\envs\DDM\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\DFL_BASE\DDM-main\DDM-Net\modeling\resnetGEBD.py", line 670, in forward
    intra_rgb_feat = self.intra_transformer1(x4, pos)[-1].permute(0, 2, 1)
  File "C:\ProgramData\Anaconda3\envs\DDM\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\DFL_BASE\DDM-main\DDM-Net\modeling\transformer.py", line 67, in forward
    tgt, src, memory_key_padding_mask=None, pos=pos_embed, query_pos=query_embed
  File "C:\ProgramData\Anaconda3\envs\DDM\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\DFL_BASE\DDM-main\DDM-Net\modeling\transformer.py", line 123, in forward
    query_pos=query_pos,
  File "C:\ProgramData\Anaconda3\envs\DDM\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\DFL_BASE\DDM-main\DDM-Net\modeling\transformer.py", line 300, in forward
    query_pos,
  File "D:\DFL_BASE\DDM-main\DDM-Net\modeling\transformer.py", line 222, in forward_post
    key=self.with_pos_embed(memory, pos),
  File "D:\DFL_BASE\DDM-main\DDM-Net\modeling\transformer.py", line 185, in with_pos_embed
    return tensor if pos is None else tensor + pos
RuntimeError: The size of tensor a (10) must match the size of tensor b (11) at non-singleton dimension 0

Thanks

Maybe I found a bug in ./DDM-Net/modeling/position_embedding.py at line 34.

I think the code should be written as follows; I would appreciate it if you could verify.

    def forward(self, locations):
        result = (
            # self.position_table[: locations.shape[1]]
            self.position_table[:, :locations.shape[1], :]
            .clone()
            .detach()
            .repeat(locations.shape[0], 1, 1)
        )
        return result
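To see why the original slice looks wrong: if position_table has a leading singleton batch dimension of shape (1, max_len, dim) (an assumption based on the slicing above), then position_table[:n] slices dimension 0 and is effectively a no-op, so the full max_len positions get added to a shorter sequence. A minimal numpy illustration with made-up shapes:

```python
import numpy as np

max_len, dim = 16, 4
position_table = np.zeros((1, max_len, dim))  # (batch=1, max_len, dim)

seq_len = 10
wrong = position_table[:seq_len]        # slices dim 0: still (1, 16, 4)
right = position_table[:, :seq_len, :]  # slices dim 1: (1, 10, 4)

print(wrong.shape, right.shape)  # (1, 16, 4) (1, 10, 4)
```

The "size of tensor a (10) must match the size of tensor b (11)" error above is consistent with a positional table of the wrong sequence length being broadcast against the input.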
