

ACmix

This repo contains the official PyTorch code and pre-trained models for ACmix.

Update

  • 2022.4.13: ResNet training code released.

    Notice: Self-attention in ResNet follows Stand-Alone Self-Attention in Vision Models (NeurIPS 2019). Its sliding-window pattern is extremely inefficient without carefully designed CUDA kernels (see the sketch below). We therefore highly recommend applying ACmix to SAN (which uses a more efficient self-attention pattern) or to Transformer-based models rather than to vanilla ResNet.
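To see where that cost comes from, here is a naive unfold-based sketch of k×k sliding-window attention (illustrative only, single-head, not this repo's implementation). `unfold` materializes a k²-times larger copy of the keys and values for every pixel, which is exactly what a hand-written CUDA kernel avoids:

```python
import torch
import torch.nn.functional as F

def sliding_window_attention(q, k, v, window=7):
    # q, k, v: (B, C, H, W); odd `window` assumed so padding preserves H, W.
    b, c, h, w = q.shape
    pad = window // 2
    # unfold duplicates each pixel's neighborhood: (B, C*window^2, H*W).
    # For window=7 this is a 49x larger tensor than the input.
    k_un = F.unfold(k, window, padding=pad).reshape(b, c, window * window, h * w)
    v_un = F.unfold(v, window, padding=pad).reshape(b, c, window * window, h * w)
    q = q.reshape(b, c, 1, h * w)
    # Dot-product attention over the window positions of each pixel.
    attn = torch.softmax((q * k_un).sum(1, keepdim=True) / c ** 0.5, dim=2)
    return (attn * v_un).sum(2).reshape(b, c, h, w)
```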

Introduction


We explore a close relationship between convolution and self-attention: both can be decomposed into a computationally heavy stage of 1×1 convolutions followed by lightweight aggregation operations. ACmix computes the 1×1 convolutions once, shares them between the two paradigms, and combines the remaining lightweight aggregations, so the mixed operator adds minimal overhead compared to either pure convolution or pure self-attention.
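As rough intuition, here is a minimal sketch of that decomposition (a simplification under our own naming, not the official module: global attention and a grouped 3×3 convolution stand in for the paper's window attention and shift-based aggregation):

```python
import torch
import torch.nn as nn

class ACmixSketch(nn.Module):
    """Toy ACmix-style block: the three 1x1 projections (the heavy stage)
    are computed once and shared by an attention path and a convolution
    path; the two outputs are mixed by learned scalars."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.proj_q = nn.Conv2d(dim, dim, 1)  # Stage I: shared 1x1 convs
        self.proj_k = nn.Conv2d(dim, dim, 1)
        self.proj_v = nn.Conv2d(dim, dim, 1)
        # Stage II, conv path: grouped 3x3 as a stand-in for the paper's
        # lightweight shift-and-sum aggregation.
        self.agg = nn.Conv2d(3 * dim, dim, 3, padding=1, groups=dim)
        self.alpha = nn.Parameter(torch.ones(1))  # attention path weight
        self.beta = nn.Parameter(torch.ones(1))   # convolution path weight

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.proj_q(x), self.proj_k(x), self.proj_v(x)
        # Stage II, attention path (global attention for brevity; the
        # paper uses local windows).
        s = lambda t: t.reshape(b, self.heads, c // self.heads, h * w)
        qh, kh, vh = s(q), s(k), s(v)
        attn = torch.softmax(
            qh.transpose(-2, -1) @ kh / (c // self.heads) ** 0.5, dim=-1)
        out_att = (vh @ attn.transpose(-2, -1)).reshape(b, c, h, w)
        # Stage II, conv path: reuses the very same 1x1 projections.
        out_conv = self.agg(torch.cat([q, k, v], dim=1))
        return self.alpha * out_att + self.beta * out_conv

y = ACmixSketch(64)(torch.randn(1, 64, 14, 14))  # -> (1, 64, 14, 14)
```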

Results

  • Top-1 accuracy on ImageNet vs. Multiply-Adds


Pretrained Models

| Backbone  | Params | FLOPs | Top-1 Acc (%) | Links |
|-----------|--------|-------|---------------|-------|
| ResNet-26 | 10.6M  | 2.3G  | 76.1 (+2.5)   | In process |
| ResNet-38 | 14.6M  | 2.9G  | 77.4 (+1.4)   | In process |
| ResNet-50 | 18.6M  | 3.6G  | 77.8 (+0.9)   | In process |
| SAN-10    | 12.1M  | 1.9G  | 77.6 (+0.5)   | In process |
| SAN-15    | 16.6M  | 2.7G  | 78.4 (+0.4)   | In process |
| SAN-19    | 21.2M  | 3.4G  | 78.7 (+0.5)   | In process |
| PVT-T     | 13M    | 2.0G  | 78.0 (+2.9)   | In process |
| PVT-S     | 25M    | 3.9G  | 81.7 (+1.9)   | In process |
| Swin-T    | 30M    | 4.6G  | 81.9 (+0.6)   | Tsinghua Cloud / Google Drive |
| Swin-S    | 51M    | 9.0G  | 83.5 (+0.5)   | Tsinghua Cloud / Google Drive |

Gains in parentheses are over the corresponding baseline models.

Get Started

Please go to the ResNet and Swin-Transformer folders for model-specific documentation.

Contact

If you have any questions, please feel free to contact the authors. Xuran Pan: [email protected].

Acknowledgment

Our code is based on SAN, PVT, and Swin Transformer.

Citation

If you find our work useful in your research, please consider citing:

@misc{pan2021integration,
      title={On the Integration of Self-Attention and Convolution}, 
      author={Xuran Pan and Chunjiang Ge and Rui Lu and Shiji Song and Guanfu Chen and Zeyi Huang and Gao Huang},
      year={2021},
      eprint={2111.14556},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Contributors

leaplabthu, panxuran


Issues

Question

Why does the projection part use three repeated 1×1 convolutions? I don't quite understand the convolution process; could you explain it? Thanks.

About your paper

Hi!

Could you please share which tool was used to create Figure 1 in your paper?

Thank you.

Is it a typo?

ResNet/test_bottleneck.py, line 101.
Original:
f_conv = f_all.permute(0, 2, 1, 3).reshape(x.shape[0], -1, x.shape[-1], x.shape[-1])
but I think it should be:
f_conv = f_all.permute(0, 2, 1, 3).reshape(x.shape[0], -1, x.shape[-2], x.shape[-1])
to preserve the input height and width.
I am not sure whether this is correct. Looking forward to your reply. Thanks.
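For what it's worth, the two reshapes do diverge for non-square inputs; a quick check with hypothetical tensor shapes (the `f_all` dimensions below are made up for illustration):

```python
import torch

x = torch.randn(2, 16, 8, 12)          # B, C, H=8, W=12 (non-square)
f_all = torch.randn(2, 4, 36, 8 * 12)  # hypothetical (B, heads, feats, H*W)

f_conv = f_all.permute(0, 2, 1, 3).reshape(x.shape[0], -1, x.shape[-2], x.shape[-1])
print(f_conv.shape)  # torch.Size([2, 144, 8, 12]) -- H and W preserved

# With x.shape[-1] used twice, the target would be (2, -1, 12, 12): the
# element count happens to divide here, so it still reshapes, but the
# spatial layout is scrambled whenever H != W.
```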

Pre-trained model

Hello, and thanks for your excellent work. I would like to train on my own data with your ACmix model, but training from scratch requires a lot of computation, and I cannot find a corresponding model for MindSpore. Could you provide a model pre-trained on ImageNet? Thanks!

I am using configs/acmix_swin_tiny_patch4_window7_224.yaml.

Based on ResNet

Hello, will the ResNet-based code be made public?

Why does the parameter count go up severalfold instead of down?

I tested nn.Conv2d(16, 64, 1) with an input of size (1, 16, 224, 224): it has only 1088 parameters, but the same configuration with ACmix gives 8604 parameters, almost 8× as many. The paper says ACmix "enjoys minimal computational overhead compared with pure convolution or self-attention", which does not seem to hold here. What is going on?
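For reference, the 1088 figure is just the parameter arithmetic of a single 1×1 convolution:

```python
import torch.nn as nn

conv = nn.Conv2d(16, 64, kernel_size=1)
print(sum(p.numel() for p in conv.parameters()))  # 16*64 weights + 64 biases = 1088
```

An ACmix block also carries the attention projections and the aggregation layers for both paths, so exceeding the parameter count of a lone 1×1 convolution is expected; as we read it, the paper's "minimal overhead" claim compares ACmix against running convolution and self-attention as two separate modules, not against a single plain convolution.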

Pre-trained ResNet models

When will you release the pre-trained models for ResNet?

RuntimeError: Input type (float) and bias type (c10::Half) should be the same

Combining ACmix with YOLOv7 produces the following error:

Traceback (most recent call last):
  File "/home/liu/桌面/zwx/YOLOv7-main/train.py", line 613, in <module>
    train(hyp, opt, device, tb_writer)
  File "/home/liu/桌面/zwx/YOLOv7-main/train.py", line 415, in train
    results, maps, times = test.test(data_dict,
  File "/home/liu/桌面/zwx/YOLOv7-main/test.py", line 110, in test
    out, train_out = model(img, augment=augment)  # inference and training outputs
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liu/桌面/zwx/YOLOv7-main/models/yolo.py", line 320, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/liu/桌面/zwx/YOLOv7-main/models/yolo.py", line 346, in forward_once
    x = m(x)  # run
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liu/桌面/zwx/YOLOv7-main/models/common.py", line 530, in forward
    pe = self.conv_p(position(h, w, x.is_cuda))
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/liu/anaconda3/envs/yolo-torch2/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (float) and bias type (c10::Half) should be the same
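The traceback indicates a float32 tensor being fed into a half-precision convolution: the positional map built inside forward stays in float32 while the model weights are c10::Half. Assuming `position` returns a plain float32 tensor, as the traceback suggests, one common fix is to cast it to the input's dtype before `conv_p`:

```python
# models/common.py, inside forward (sketch of a possible fix):
pos = position(h, w, x.is_cuda)           # helper returns float32
pe = self.conv_p(pos.to(dtype=x.dtype))   # match the half-precision weights
```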

Object detection

Could the authors release the code for the object detection implementation?
