
modulated-deform-conv's Introduction

modulated-deform-conv

This project is a PyTorch C++ and CUDA extension. It implements the forward and backward functions for deformable-conv2d, modulated-deformable-conv2d, deformable-conv3d, and modulated-deformable-conv3d, and wraps the C++/CUDA code in a Python package.

Install

  • run pip install modulated-deform-conv
  • or git clone https://github.com/CHONSPQX/modulated-deform-conv.git, then cd modulated-deform-conv and run python setup.py install

Requirements

  • Python 3
  • PyTorch >= 1.3
  • For Linux, gcc version >= 4.9
  • For Windows, the CUDA version must be compatible with the Visual Studio version

Because of limited resources, only the following environments have been tested:

  • Ubuntu 18.04, gcc 7.4, CUDA 10.2, Python 3.7.4, PyTorch 1.3.1
  • Ubuntu 18.04, gcc 7.4, CUDA 10.2, Python 3.7.4, PyTorch 1.4.0
  • Ubuntu 18.04, gcc 7.5, CUDA 11.1, Python 3.6.12, PyTorch 1.7.0
  • Windows 10, Visual Studio 2017, CUDA 10.1, Python 3.7.6, PyTorch 1.4.0
  • Windows 10, Visual Studio 2019, CUDA 11.1, Python 3.6.12, PyTorch 1.7.0

Speed Optimization

  • Run pip download modulated-deform-conv and unzip the downloaded archive, cd into modulated-deform-conv, then open src/config.h. Depending on your NVIDIA GPU, you can set the following two variables to get faster execution, then run python setup.py install:

    • const int CUDA_NUM_THREADS
    • const int MAX_GRID_NUM
  • Alternatively, a different in_step value can be passed at run time; it controls how many samples of the batch are processed in parallel in each step (see the sketch after this list).
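
Below is a minimal sketch of the run-time option. Only the in_step name and its meaning come from this README; the constructor argument names (in_channels, out_channels, kernel_size, stride, padding) are assumptions to be checked against the class definitions, as is the assumption that the module produces its own offsets and masks from the input.

```python
import torch
from modulated_deform_conv import ModulatedDeformConv2d

# Hypothetical constructor arguments; in_step=16 asks the kernels to process
# 16 samples of the batch per parallel step instead of the whole batch at once.
conv = ModulatedDeformConv2d(in_channels=4, out_channels=8, kernel_size=3,
                             stride=1, padding=1, in_step=16).cuda()

x = torch.randn(64, 4, 32, 32, device="cuda")
y = conv(x)       # if the forward method expects explicit offset/mask tensors,
                  # build them with the shapes listed in the Documentation section
print(y.shape)    # expected (64, 8, 32, 32) with a 3x3 kernel, stride 1, padding 1
```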

Use

To use the C++ functions directly, import MDCONV_CUDA. To use the wrapped Python classes, import modulated_deform_conv.
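
For example (a sketch: the two import names come from this README, while the dir() listing is just a generic way to see which _cuda functions the binding exports):

```python
# Wrapped autograd Functions and nn.Modules:
import modulated_deform_conv

# Raw C++/CUDA bindings:
import MDCONV_CUDA
print([name for name in dir(MDCONV_CUDA) if name.endswith("_cuda")])
# expected to include deform_conv2d_forward_cuda, deform_conv2d_backward_cuda, ...
```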

Documentation

1. C++ and CUDA Code

  • Files

| Filename | Content |
| --- | --- |
| config.h | macros, global variables, and inline functions |
| deformable_conv.cu | MDCONV_CUDA.deform_conv2d_forward_cuda, MDCONV_CUDA.deform_conv2d_backward_cuda |
| mdeformable_conv.cu | MDCONV_CUDA.modulated_deform_conv2d_forward_cuda, MDCONV_CUDA.modulated_deform_conv2d_backward_cuda |
| deformable_conv3d.cu | MDCONV_CUDA.deform_conv3d_forward_cuda, MDCONV_CUDA.deform_conv3d_backward_cuda |
| mdeformable_conv3d.cu | MDCONV_CUDA.modulated_deform_conv3d_forward_cuda, MDCONV_CUDA.modulated_deform_conv3d_backward_cuda |
| utils.cu | code for printing debug output |
| warp.cpp | glue code between C++ and Python |
  • Variables

| Variable Name | Type | Description |
| --- | --- | --- |
| kernel_h | const int | kernel size along the first dimension |
| kernel_w | const int | kernel size along the second dimension |
| kernel_l | const int | kernel size along the third dimension |
| stride_h | const int | stride along the first dimension |
| stride_w | const int | stride along the second dimension |
| stride_l | const int | stride along the third dimension |
| pad_h | const int | zero padding along the first dimension |
| pad_w | const int | zero padding along the second dimension |
| pad_l | const int | zero padding along the third dimension |
| dilation_h | const int | dilation rate along the first dimension |
| dilation_w | const int | dilation rate along the second dimension |
| dilation_l | const int | dilation rate along the third dimension |
| group | const int | number of convolution groups |
| deformable_group | const int | number of offset/mask groups |
| in_step | const int | batch size of each parallel processing step |
| with_bias | const bool | whether a bias is used |
| input | at::Tensor | (B, I, H, W[, L]); I must be divisible by group and deformable_group |
| grad_input | at::Tensor | must have the same shape as input |
| weight | at::Tensor | (O, I/group, H, W[, L]); O must be divisible by group |
| grad_weight | at::Tensor | must have the same shape as weight |
| bias | at::Tensor | (O); if with_bias=true, bias must be non-null |
| grad_bias | at::Tensor | must have the same shape as bias |
| offset | at::Tensor | 2D: (B, deformable_group*2*kernel_h*kernel_w, H, W); 3D: (B, deformable_group*3*kernel_h*kernel_w*kernel_l, H, W, L) |
| grad_offset | at::Tensor | must have the same shape as offset |
| mask | at::Tensor | 2D: (B, deformable_group*kernel_h*kernel_w, H, W); 3D: (B, deformable_group*kernel_h*kernel_w*kernel_l, H, W, L) |
| grad_mask | at::Tensor | must have the same shape as mask |
| output | at::Tensor | (B, O, OH, OW[, OL]) |
| grad_output | at::Tensor | must have the same shape as output |
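
To make the shape conventions above concrete, here is a small sketch for the 2D modulated case. The offset/mask channel counts and the (B, O, OH, OW) output layout come from the table; the OH/OW formula is the standard convolution output-size formula (it also appears in _infer_shape in an issue traceback further below). All concrete numbers are illustrative.

```python
# Illustrative shapes for modulated deformable conv2d (sketch only).
B, I, H, W = 2, 4, 32, 32              # input: (B, I, H, W)
O, kernel_h, kernel_w = 8, 3, 3
group, deformable_group = 1, 2         # I and O must be divisible by group / deformable_group
stride_h = stride_w = 1
pad_h = pad_w = 1
dilation_h = dilation_w = 1

# Standard convolution output size.
OH = (H + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) // stride_h + 1
OW = (W + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) // stride_w + 1

input_shape  = (B, I, H, W)
weight_shape = (O, I // group, kernel_h, kernel_w)
offset_shape = (B, deformable_group * 2 * kernel_h * kernel_w, H, W)   # per the table
mask_shape   = (B, deformable_group * kernel_h * kernel_w, H, W)       # per the table
output_shape = (B, O, OH, OW)
print(offset_shape, mask_shape, output_shape)
```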

2. Python Code

| Class Name | Type |
| --- | --- |
| DeformConv2dFunction | torch.autograd.Function |
| ModulatedDeformConv2dFunction | torch.autograd.Function |
| DeformConv3dFunction | torch.autograd.Function |
| ModulatedDeformConv3dFunction | torch.autograd.Function |
| DeformConv2d | torch.nn.Module |
| ModulatedDeformConv2d | torch.nn.Module |
| DeformConv3d | torch.nn.Module |
| ModulatedDeformConv3d | torch.nn.Module |
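
The nn.Module classes are the ones you would normally place in a model; the autograd Function classes underneath provide the custom forward/backward kernels. Below is a sketch that exercises both directions through the 3D module. The constructor argument names are assumptions (as is the module producing its own offsets), while the 5-D input layout (B, I, H, W, L) and the gradient shape matching the input come from the variables table above.

```python
import torch
from modulated_deform_conv import DeformConv3d

# Hypothetical constructor arguments; check the class definition for exact names.
conv = DeformConv3d(in_channels=2, out_channels=4, kernel_size=3,
                    stride=1, padding=1).cuda()

x = torch.randn(2, 2, 8, 8, 8, device="cuda", requires_grad=True)  # (B, I, H, W, L)
y = conv(x)
y.sum().backward()             # runs the custom CUDA backward kernels
print(y.shape, x.grad.shape)   # x.grad has the same shape as x, like grad_input above
```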

Author

Xin Qiao [email protected]

License

Copyright (c) 2020 Xin Qiao. Released under the MIT license.

modulated-deform-conv's People

Contributors

chonspqx, hubertque, wmpscc


modulated-deform-conv's Issues

IndexError: tuple index out of range

File "/home/sysgen/anaconda3/envs/pytorch/lib/python3.7/site-packages/modulated_deform_conv-1.0.2-py3.7-linux-x86_64.egg/modulated_deform_conv.py", line 536, in forward
self.groups, self.deformable_groups,self.in_step)
File "/home/sysgen/anaconda3/envs/pytorch/lib/python3.7/site-packages/modulated_deform_conv-1.0.2-py3.7-linux-x86_64.egg/modulated_deform_conv.py", line 279, in forward
output = input.new_empty(ModulatedDeformConv3dFunction._infer_shape(ctx, input, weight))
File "/home/sysgen/anaconda3/envs/pytorch/lib/python3.7/site-packages/modulated_deform_conv-1.0.2-py3.7-linux-x86_64.egg/modulated_deform_conv.py", line 345, in _infer_shape
length_out = (length + 2 * ctx.padding[2] - (ctx.dilation[2] * (kernel_l - 1) + 1)) // ctx.stride[2] + 1

IndexError: tuple index out of range
Exception in thread Thread-4:
Traceback (most recent call last):
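
A hedged reading of this report: the traceback shows _infer_shape indexing ctx.padding[2], so the 3D wrapper apparently expects stride, padding, and dilation to each have three entries, one per spatial dimension. Passing a plain int or a 2-tuple to a 3D module would produce exactly this IndexError. A tiny sketch of normalizing such settings before building the module (the helper name is hypothetical):

```python
def as_triple(v):
    # Expand a scalar into (v, v, v) so that index [2] exists for the 3D layers.
    return (v, v, v) if isinstance(v, int) else tuple(v)

stride, padding, dilation = as_triple(1), as_triple(1), as_triple(1)
```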

Unsupported gpu architecture 'compute_86'

Thanks for your great work. Our environment is Linux with an RTX 3090 GPU, and the lowest PyTorch version we can use is 1.7 with CUDA 11.0. When we run mask.sh, it prompts nvcc fatal : Unsupported gpu architecture 'compute_86'. Is there any setting that is unsuitable?
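
A hedged note, not part of the original thread: compute_86 is the Ampere architecture of the RTX 3090, which nvcc only supports from CUDA 11.1 onward, so a CUDA 11.0 toolchain cannot compile for it. Upgrading to CUDA >= 11.1 (as in the tested CUDA 11.1 / PyTorch 1.7.0 environments above) avoids the error; alternatively, the build can be limited to an architecture the installed nvcc does know via PyTorch's TORCH_CUDA_ARCH_LIST environment variable, for example:

```python
# Sketch: set before building the extension; torch.utils.cpp_extension reads
# TORCH_CUDA_ARCH_LIST at build time. sm_80 binaries also run on sm_86 GPUs
# such as the RTX 3090.
import os
os.environ["TORCH_CUDA_ARCH_LIST"] = "8.0"
```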

speed compared to 3dconv

Has anyone compared the speed of this deformable 3D conv with the vanilla 3D conv in PyTorch? Is there a huge (10x) difference?

3D deformable ROI / 3D modulated RoI pooling

Hi,
These layers are very helpful. But what about the pooling layers? I did not find 3D versions of Deformable RoI pooling and modulated RoI pooling. If you have 3D versions of these layers, please upload them too.
Thanks

test in Windows

Win10, CUDA 11.3; I installed it in a conda environment (PyTorch 1.10 with CUDA 11.3, and PyTorch 1.8.1 with CUDA 11.2).
I can install this project on my PC, but when I run my_test.py it fails:

line 28, in forward MDCONV_CUDA.deform_conv2d_forward_cuda(
AttributeError: module 'MDCONV_CUDA' has no attribute 'deform_conv2d_forward_cuda'

TypeError: a bytes-like object is required, not 'str'

(pt) D:\detect\modulated-deform-conv> python setup.py install
running install
running bdist_egg
running egg_info
writing modulated_deform_conv.egg-info\PKG-INFO
Traceback (most recent call last):
File "setup.py", line 66, in
cmdclass={'build_ext': BuildExtension}, zip_safe=False)
File "C:\Users\84791.conda\envs\pt\lib\site-packages\setuptools\__init__.py", line 144, in setup
return distutils.core.setup(**attrs)
File "C:\Users\84791.conda\envs\pt\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Users\84791.conda\envs\pt\lib\distutils\dist.py", line 955, in run_commands
self.run_command(cmd)
File "C:\Users\84791.conda\envs\pt\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\84791.conda\envs\pt\lib\site-packages\setuptools\command\install.py", line 67, in run
self.do_egg_install()
File "C:\Users\84791.conda\envs\pt\lib\site-packages\setuptools\command\install.py", line 109, in do_egg_install
self.run_command('bdist_egg')
File "C:\Users\84791.conda\envs\pt\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Users\84791.conda\envs\pt\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\84791.conda\envs\pt\lib\site-packages\setuptools\command\bdist_egg.py", line 163, in run
self.run_command("egg_info")
File "C:\Users\84791.conda\envs\pt\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Users\84791.conda\envs\pt\lib\distutils\dist.py", line 974, in run_command
cmd_obj.run()
File "C:\Users\84791.conda\envs\pt\lib\site-packages\setuptools\command\egg_info.py", line 290, in run
writer(self, ep.name, os.path.join(self.egg_info, ep.name))
File "C:\Users\84791.conda\envs\pt\lib\site-packages\setuptools\command\egg_info.py", line 622, in write_pkg_info
metadata.write_pkg_info(cmd.egg_info)
File "C:\Users\84791.conda\envs\pt\lib\distutils\dist.py", line 1106, in write_pkg_info
self.write_pkg_file(pkg_info)
File "C:\Users\84791.conda\envs\pt\lib\site-packages\setuptools\dist.py", line 167, in write_pkg_file
long_desc = rfc822_escape(self.get_long_description())
File "C:\Users\84791.conda\envs\pt\lib\distutils\util.py", line 474, in rfc822_escape
lines = header.split('\n')
TypeError: a bytes-like object is required, not 'str'

Unsupported gpu architecture 'compute_86'

Thanks for your great work. Our environment is Linux with an RTX 3090 GPU, and the lowest PyTorch version we can use is 1.7 with CUDA 11.0. When we run python setup.py install, it prompts nvcc fatal : Unsupported gpu architecture 'compute_86'. Is there any setting that is unsuitable?
