
avsegformer's Introduction

💬 AVSegFormer [paper]

The combination of vision and audio has long been a topic of interest among researchers in the multi-modal field. Recently, a new audio-visual segmentation task has been introduced, aiming to locate and segment the corresponding sound source objects in a given video. This task demands pixel-level fine-grained features for the first time, posing significant challenges. In this paper, we propose AVSegFormer, a new method for audio-visual segmentation tasks that leverages the Transformer architecture for its outstanding performance in multi-modal tasks. We combine audio features and learnable queries as decoder inputs to facilitate multi-modal information exchange. Furthermore, we design an audio-visual mixer to amplify the features of target objects. Additionally, we devise an intermediate mask loss to enhance training efficacy. Our method demonstrates robust performance and achieves state-of-the-art results in audio-visual segmentation tasks.

🚀 What's New

  • (2023.04.28) Upload pre-trained checkpoints and update README.
  • (2023.04.25) We completed the implementation of AVSegFormer and pushed the code.

🏠 Method

[Figure: overview of the AVSegFormer architecture]

🛠️ Get Started

1. Environments

```shell
# recommended
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.10.0/index.html
pip install pandas
pip install timm
pip install resampy
pip install soundfile

# build MSDeformAttention
cd ops
sh make.sh
```
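After installing the dependencies above, a quick sanity check can catch a partially set-up environment before training starts. This is a small sketch (not part of the repository); the package names are taken from the pip commands above:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Packages the pip commands above are expected to provide.
required = ["torch", "torchvision", "torchaudio", "mmcv",
            "pandas", "timm", "resampy", "soundfile"]
print(missing_packages(required))  # [] if the environment is complete
```

If the printed list is non-empty, re-run the corresponding pip command before building MSDeformAttention.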

2. Data

Please refer to the link AVSBenchmark to download the datasets. You can place the data under the data folder, or use a folder of your own and update the paths in the config files accordingly. The expected data directory layout is as below:

```
|--data
   |--AVSS
   |--Multi-sources
   |--Single-source
```
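Before editing the config paths, it can help to verify the layout programmatically. This is a small sketch (not part of the repository); the subset folder names come from the tree above:

```python
from pathlib import Path
import tempfile

# Subset directory names, matching the tree above.
EXPECTED_SUBSETS = ["AVSS", "Multi-sources", "Single-source"]

def check_data_root(root):
    """Return the expected subset directories missing under `root`."""
    root = Path(root)
    return [s for s in EXPECTED_SUBSETS if not (root / s).is_dir()]

# Example: build the layout in a temporary directory and verify it.
with tempfile.TemporaryDirectory() as tmp:
    for s in EXPECTED_SUBSETS:
        (Path(tmp) / s).mkdir()
    print(check_data_root(tmp))  # [] -> layout is complete
```

An empty list means all three subsets are in place; any names returned point at folders still to be downloaded or renamed.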

3. Download Pre-Trained Models

| Method | Backbone | Subset | Lr schd | Config | mIoU | F-score | Download |
|:---|:---|:---|:---|:---|---:|---:|:---|
| AVSegFormer-R50 | ResNet-50 | S4 | 30ep | config | 76.38 | 86.7 | ckpt |
| AVSegFormer-PVTv2 | PVTv2-B5 | S4 | 30ep | config | 83.06 | 90.5 | ckpt |
| AVSegFormer-R50 | ResNet-50 | MS3 | 60ep | config | 53.81 | 65.6 | ckpt |
| AVSegFormer-PVTv2 | PVTv2-B5 | MS3 | 60ep | config | 61.33 | 73.0 | ckpt |
| AVSegFormer-R50 | ResNet-50 | AVSS | 30ep | config | 26.58 | 31.5 | ckpt |
| AVSegFormer-PVTv2 | PVTv2-B5 | AVSS | 30ep | config | 37.31 | 42.8 | ckpt |

4. Train

```shell
TASK="s4"  # or "ms3", "avss"
CONFIG="config/s4/AVSegFormer_pvt2_s4.py"

bash train.sh ${TASK} ${CONFIG}
```

5. Test

```shell
TASK="s4"  # or "ms3", "avss"
CONFIG="config/s4/AVSegFormer_pvt2_s4.py"
CHECKPOINT="work_dir/AVSegFormer_pvt2_s4/S4_best.pth"

bash test.sh ${TASK} ${CONFIG} ${CHECKPOINT}
```

🤝 Citation

If you use our model, please consider citing the following papers:

```bibtex
@article{zhou2023avss,
      title={Audio-Visual Segmentation with Semantics},
      author={Zhou, Jinxing and Shen, Xuyang and Wang, Jianyuan and Zhang, Jiayi and Sun, Weixuan and Zhang, Jing and Birchfield, Stan and Guo, Dan and Kong, Lingpeng and Wang, Meng and Zhong, Yiran},
      journal={arXiv preprint arXiv:2301.13190},
      year={2023},
}

@misc{gao2023avsegformer,
      title={AVSegFormer: Audio-Visual Segmentation with Transformer},
      author={Shengyi Gao and Zhe Chen and Guo Chen and Wenhai Wang and Tong Lu},
      year={2023},
      eprint={2307.01146},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

avsegformer's People

Contributors: vvvb-github, czczup

avsegformer's Issues

einsum(): the number of subscripts in the equation (2) does not match the number of dimensions (1) for operand 1 and no ellipsis was given

```
File "/home/hwh/Project/AVS/AVSegFormer-master/model/head/AVSegHead.py", line 238, in forward
    mask_feature = self.fusion_block(mask_feature, audio_feat)
File "/home/hwh/anaconda3/envs/AVS39/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
File "/home/hwh/Project/AVS/AVSegFormer-master/model/utils/fusion_block.py", line 44, in forward
    fusion_map = torch.einsum('bchw,bc->bchw', feature_map, x.squeeze())
File "/home/hwh/anaconda3/envs/AVS39/lib/python3.9/site-packages/torch/functional.py", line 378, in einsum
    return _VF.einsum(equation, operands)  # type: ignore[attr-defined]
RuntimeError: einsum(): the number of subscripts in the equation (2) does not match the number of dimensions (1) for operand 1 and no ellipsis was given
```
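The traceback points at `x.squeeze()`: with batch size 1, an argument-less squeeze collapses the batch dimension too, leaving a 1-D operand that no longer matches the `bc` subscript. The sketch below reproduces the failure and a dimension-preserving alternative in numpy, whose einsum behaves the same as PyTorch's here; the shapes, and the assumption that the audio feature carries a trailing singleton axis, are illustrative rather than taken from the repository:

```python
import numpy as np

b, c, h, w = 1, 128, 14, 14           # batch size 1 triggers the bug
feature_map = np.zeros((b, c, h, w))
x = np.zeros((b, c, 1))               # assumed audio feature before squeezing

# x.squeeze() drops BOTH size-1 axes, leaving a 1-D (c,) array,
# so the 'bc' subscript no longer matches operand 1:
try:
    np.einsum('bchw,bc->bchw', feature_map, x.squeeze())
except ValueError as e:
    print("fails:", e)

# Squeezing only the trailing axis keeps the batch dimension intact:
out = np.einsum('bchw,bc->bchw', feature_map, x.squeeze(-1))
print(out.shape)  # (1, 128, 14, 14)
```

Using an explicit axis (`squeeze(-1)`, or a reshape to `(b, c)`) is the usual way to make such code robust to batch size 1.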

Question about the AVSS pre-training

When training the model on the AVSS dataset, we find that the mIoU is about 20 with the ResNet-50 backbone and about 30 with the PVTv2 backbone at 11 epochs. Could you please confirm whether this is normal? We trained for a total of 30 epochs and observed an increase of approximately 6 points over the final 20 epochs.

Problem shape '[1029, 320, 32]' is invalid for input of size 1317120

When training with the AVSS dataset, the audio_feat extracted by VGGish has first dimension bs * 10, which does not match the subsequent feature matrices whose first dimension is bs. The problem appears in out2 = self.cross_attn(query, src, src, key_padding_mask=padding_mask)[0], which raises this error:
```
File "/home/ptr/hzw/AVSegFormer-master/model/AVSegFormer.py", line 75, in forward
    pred, mask_feature = self.head(img_feat, audio_feat)
File "/home/ptr/anaconda3/envs/AVS/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
File "/home/ptr/hzw/AVSegFormer-master/model/head/AVSegHead.py", line 223, in forward
    memory, outputs = self.transformer(query, src_flatten, spatial_shapes,
File "/home/ptr/anaconda3/envs/AVS/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
File "/home/ptr/hzw/AVSegFormer-master/model/utils/transformer.py", line 160, in forward
    outputs = self.decoder(query, memory, reference_points,
File "/home/ptr/anaconda3/envs/AVS/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
File "/home/ptr/hzw/AVSegFormer-master/model/utils/transformer.py", line 139, in forward
    out = layer(out, src, reference_points, spatial_shapes,
File "/home/ptr/anaconda3/envs/AVS/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
File "/home/ptr/hzw/AVSegFormer-master/model/utils/transformer.py", line 117, in forward
    out2 = self.cross_attn(
File "/home/ptr/anaconda3/envs/AVS/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
File "/home/ptr/anaconda3/envs/AVS/lib/python3.8/site-packages/torch/nn/modules/activation.py", line 1003, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
File "/home/ptr/anaconda3/envs/AVS/lib/python3.8/site-packages/torch/nn/functional.py", line 5044, in multi_head_attention_forward
    k = k.contiguous().view(k.shape[0], bsz * num_heads, head_dim).transpose(0, 1)
RuntimeError: shape '[1029, 320, 32]' is invalid for input of size 1317120
```
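The mismatch described above is a shape-alignment problem: the audio branch yields one feature row per one-second clip (bs * 10), while the visual branch is batched per video (bs). The sketch below shows two generic, shape-level ways to reconcile the leading dimensions; it is not the repository's fix, and the sizes are assumed for illustration:

```python
import numpy as np

bs, frames, dim = 4, 10, 128                # assumed sizes for illustration
audio_feat = np.zeros((bs * frames, dim))   # VGGish-style output: one row per 1s clip

# Option A: group the clips back by video and average-pool over time,
# giving one audio vector per sample in the visual batch.
pooled = audio_feat.reshape(bs, frames, dim).mean(axis=1)
print(pooled.shape)  # (4, 128)

# Option B: keep per-clip audio and treat each video frame as its own
# sample, so both modalities share the bs * frames leading dimension.
per_frame = audio_feat.reshape(bs * frames, dim)
print(per_frame.shape)  # (40, 128)
```

Which option applies depends on whether the visual features are batched per video or per frame at the point of fusion.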

change the batch_size=1

When I set batch_size=1, training fails with RuntimeError: Caught RuntimeError in replica 2 on device 2.
