
stdf-pytorch's Introduction

Spatio-Temporal Deformable Convolution for Compressed Video Quality Enhancement (AAAI 2020)

🚀 Note: We implement STDF based on MMEditing at PowerQE.

0. Background

PyTorch implementation of Spatio-Temporal Deformable Convolution for Compressed Video Quality Enhancement (AAAI 2020).

  • A simple and effective video quality enhancement network.
  • Adopts feature alignment via multi-frame deformable convolution, instead of explicit motion estimation and motion compensation.

Notice: The dataset and training method are different from those in the original paper.

(Network architecture figure; copyright: Jianing Deng)

Feel free to contact: [email protected].

1. Prerequisites

1.1. Environment

conda create -n stdf python=3.7 -y && conda activate stdf

git clone --depth=1 https://github.com/ryanxingql/stdf-pytorch && cd stdf-pytorch/

# given CUDA 10.1
python -m pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

python -m pip install tqdm lmdb pyyaml opencv-python scikit-image

1.2. DCNv2

Build DCNv2

cd ops/dcn/
bash build.sh

Check if DCNv2 works (optional)

python simple_check.py

The DCNv2 source files here differ from the open-source version due to an incompatibility. [issue]
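For an extra sanity check, the sketch below (an illustration on our part, not the repo's simple_check.py) exploits the fact that a deformable convolution with all-zero offsets reduces to a plain convolution; it uses torchvision's built-in deform_conv2d, available since torchvision 0.6:

import torch
from torchvision.ops import deform_conv2d

dev = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1, 1, 16, 16, device=dev)                # a single Y-channel frame
weight = torch.randn(8, 1, 3, 3, device=dev)             # 8 output channels, 3x3 kernel
offset = torch.zeros(1, 2 * 3 * 3, 16, 16, device=dev)   # zero offsets -> plain conv

out = deform_conv2d(x, offset, weight, padding=1)
print(out.shape)  # torch.Size([1, 8, 16, 16])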

1.3. MFQEv2 dataset

Download and compress videos

Please check here.

Edit the YAML

We now edit option_R3_mfqev2_4G.yml.

Suppose the folder MFQEv2_dataset/ is placed at /raid/xql/datasets/MFQEv2_dataset/; then you should assign /raid/xql/datasets/MFQEv2_dataset/ to dataset -> train -> root in the YAML.

R3: one of the network structures provided in the paper; mfqev2: the MFQEv2 dataset will be adopted; 4G: 4 GPUs will be used for the training below. Similarly, you can edit option_R3_mfqev2_1G.yml and option_R3_mfqev2_2G.yml if needed.
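For reference, the relevant part of option_R3_mfqev2_4G.yml might then look like the sketch below (the dataset -> train -> root nesting follows the description above; the actual file contains many more options):

dataset:
  train:
    root: /raid/xql/datasets/MFQEv2_dataset/  # your MFQEv2 root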

Generate LMDB

We now generate LMDB files to speed up I/O during training.

python create_lmdb_mfqev2.py --opt_path option_R3_mfqev2_4G.yml

Now you will get all needed data:

MFQEv2_dataset/
├── train_108/
│   ├── raw/
│   └── HM16.5_LDP/
│       └── QP37/
├── test_18/
│   ├── raw/
│   └── HM16.5_LDP/
│       └── QP37/
├── mfqev2_train_gt.lmdb/
└── mfqev2_train_lq.lmdb/

Finally, the MFQEv2 dataset root is automatically sym-linked to the folder ./data/, so that we and the programs can access the MFQEv2 dataset at ./data/ directly.

2. Train

See script.sh.

3. Test

Pretrained models can be found here: [Releases] and [Baidu Netdisk (extraction code: stdf)]

3.1. Test MFQEv2 dataset after training

See script.sh.

3.2. Test MFQEv2 dataset without training

If you did not run create_lmdb_mfqev2.py for training, you should first sym-link the MFQEv2 dataset to ./data/.

mkdir data/
ln -s /your/path/to/MFQEv2_dataset/ data/MFQEv2

Download the pre-trained model, and see script.sh.

3.3. Test your own video

First download the pre-trained model, and then run:

CUDA_VISIBLE_DEVICES=0 python test_one_video.py

See test_one_video.py for more details.

4. Result

loading model exp/MFQEv2_R3_enlarge300x/ckp_290000.pt...
> model exp/MFQEv2_R3_enlarge300x/ckp_290000.pt loaded.

<<<<<<<<<< Results >>>>>>>>>>
BQMall_832x480_600.yuv: [31.297] dB -> [32.221] dB
BQSquare_416x240_600.yuv: [28.270] dB -> [29.078] dB
BQTerrace_1920x1080_600.yuv: [31.247] dB -> [31.852] dB
BasketballDrill_832x480_500.yuv: [31.591] dB -> [32.359] dB
BasketballDrive_1920x1080_500.yuv: [33.227] dB -> [33.963] dB
BasketballPass_416x240_500.yuv: [30.482] dB -> [31.446] dB
BlowingBubbles_416x240_500.yuv: [27.794] dB -> [28.465] dB
Cactus_1920x1080_500.yuv: [32.207] dB -> [32.918] dB
FourPeople_1280x720_600.yuv: [34.589] dB -> [35.533] dB
Johnny_1280x720_600.yuv: [36.375] dB -> [37.161] dB
Kimono_1920x1080_240.yuv: [34.411] dB -> [35.272] dB
KristenAndSara_1280x720_600.yuv: [35.887] dB -> [36.895] dB
ParkScene_1920x1080_240.yuv: [31.583] dB -> [32.140] dB
PartyScene_832x480_500.yuv: [27.802] dB -> [28.402] dB
PeopleOnStreet_2560x1600_150.yuv: [31.388] dB -> [32.557] dB
RaceHorses_416x240_300.yuv: [29.320] dB -> [30.055] dB
RaceHorses_832x480_300.yuv: [30.094] dB -> [30.557] dB
Traffic_2560x1600_150.yuv: [33.176] dB -> [33.866] dB
> ori: [31.708] dB
> ave: [32.486] dB
> delta: [0.778] dB
TOTAL TIME: [0.2] h

5. Q&A

5.1. Vimeo-90K dataset

You should download the Vimeo-90K dataset, convert its PNG sequences into 7-frame YCbCr YUV444P videos, and then compress these videos with HM 16.5 under the All-Intra configuration at QP 37.
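As a rough sketch of these two steps for one septuplet (assuming ffmpeg and an HM 16.5 build are available; this is not the provided one-click program, and the exact HM flags depend on your configuration):

# 1) convert the 7-frame PNG sequence (im1.png ... im7.png) to raw YUV444P
ffmpeg -i im%d.png -pix_fmt yuv444p sequence.yuv

# 2) compress with HM 16.5, All Intra, QP 37 (flags/cfg here are illustrative)
# TAppEncoderStatic -c encoder_intra_main_rext.cfg -i sequence.yuv \
#     -wdt 448 -hgt 256 -fr 30 -f 7 -q 37 -b out.bin -o compressed.yuv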

We also provide a one-click program at [Releases] and [Baidu Netdisk (extraction code: stdf)].

Vimeo-90K/
├── vimeo_septuplet/
│   └── ...
├── vimeo_septuplet_ycbcr/
│   └── ...
└── vimeo_septuplet_ycbcr_intra/
    └── ...

The LMDB preparation, option YAML, training and test code are already provided in this repository.

5.2. Why does the epoch index start from 0, while the iteration (and model) index starts from 1?

It is a small bug. I may fix it sometime.

5.3. How do we enlarge the dataset

Following BasicSR, we set sampling index = target index % dataset length.

For example, if we have a dataset of size 4 and an enlargement ratio of 2, then we sample images at indexes 0, 1, 2, 3, 0, 1, 2, 3. Note that at each sampling we randomly crop the image, so patches cropped from the same image at different times can differ, as sketched below.

Besides, the data loader is shuffled at the start of each epoch. Enlarging the dataset per epoch helps reduce the total number of these restarts.
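A minimal sketch of this sampling logic (simplified from the actual dataset class; the names are illustrative):

import random

class EnlargedDataset:
    def __init__(self, keys, enlarge_ratio=300):
        self.keys = keys                    # e.g., LMDB keys of training samples
        self.enlarge_ratio = enlarge_ratio

    def __len__(self):
        # the data loader sees an "enlarged" dataset
        return len(self.keys) * self.enlarge_ratio

    def __getitem__(self, index):
        index = index % len(self.keys)      # sampling index = target index % dataset length
        # each access applies a fresh random crop, so repeated indexes
        # still yield different patches
        top, left = random.randint(0, 64), random.randint(0, 64)
        return self.keys[index], (top, left)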

5.4. Why do we set the number of iterations, not epochs?

Since the dataset can be enlarged by an arbitrary ratio, the number of epochs is not meaningful. The number of iterations, by contrast, indicates the number of sampled batches, which is more meaningful to us.

6. License

We adopt Apache License v2.0. For other licenses, please refer to BasicSR and DCNv2.

If you find this repository helpful, you may cite:

@inproceedings{2020deng_stdf,
  title={Spatio-Temporal Deformable Convolution for Compressed Video Quality Enhancement},
  author={Deng, Jianing and Wang, Li and Pu, Shiliang and Zhuo, Cheng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={34},
  number={07},
  pages={10696--10703},
  year={2020}
}

@software{2020xing_stdf,
  author = {Xing, Qunliang and Deng, Jianing},
  month = {9},
  title = {{PyTorch implementation of STDF}},
  url = {https://github.com/ryanxingql/stdf-pytorch},
  version = {1.0.0},
  year = {2020}
}

Special thanks to Jianing Deng (邓家宁, the author of STDF) for network structure and training details.


stdf-pytorch's Issues

About the Vimeo-90K dataset

Why use the All-Intra mode to compress the data? In this case, all frames have the same quality; is it still useful to exploit the neighboring frames' information?

Pytorch version of the pretrained model

I downloaded the pretrained models in Baidu netdisk, but when I try to test a yuv video, I get the following error msg;

~/Downloads/STDF-PyTorch$ CUDA_VISIBLE_DEVICES=0 /home/djn/anaconda3-cuda10/bin/python test_one_video.py
loading model STDF_data/exp/MFQEv2_R3_enlarge300x/ckp_290000.pt...
Traceback (most recent call last):
  File "test_one_video.py", line 124, in <module>
    main()
  File "test_one_video.py", line 44, in main
    checkpoint = torch.load(ckp_path)
  File "/home/djn/anaconda3-cuda10/lib/python3.7/site-packages/torch/serialization.py", line 527, in load
    with _open_zipfile_reader(f) as opened_zipfile:
  File "/home/djn/anaconda3-cuda10/lib/python3.7/site-packages/torch/serialization.py", line 224, in __init__
    super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: version <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (__init__ at /pytorch/caffe2/serialize/inline_container.cc:132)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fe6bd239193 in /home/djn/anaconda3-cuda10/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1f5b (0x7fe6c03c19eb in /home/djn/anaconda3-cuda10/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::string const&) + 0x64 (0x7fe6c03c2c04 in /home/djn/anaconda3-cuda10/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #3: + 0x6c6536 (0x7fe7086b5536 in /home/djn/anaconda3-cuda10/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #4: + 0x295a74 (0x7fe708284a74 in /home/djn/anaconda3-cuda10/lib/python3.7/site-packages/torch/lib/libtorch_python.so)
frame #5: _PyMethodDef_RawFastCallDict + 0x24d (0x55790a81763d in /home/djn/anaconda3-cuda10/bin/python)
frame #6: _PyCFunction_FastCallDict + 0x21 (0x55790a8177c1 in /home/djn/anaconda3-cuda10/bin/python)
frame #7: _PyObject_Call_Prepend + 0x63 (0x55790a815e53 in /home/djn/anaconda3-cuda10/bin/python)
frame #8: PyObject_Call + 0x6e (0x55790a808dbe in /home/djn/anaconda3-cuda10/bin/python)
frame #9: + 0x9e214 (0x55790a780214 in /home/djn/anaconda3-cuda10/bin/python)
frame #10: _PyObject_FastCallKeywords + 0x128 (0x55790a84e588 in /home/djn/anaconda3-cuda10/bin/python)
frame #11: _PyEval_EvalFrameDefault + 0x52f8 (0x55790a8b26e8 in /home/djn/anaconda3-cuda10/bin/python)
frame #12: _PyEval_EvalCodeWithName + 0x5da (0x55790a7f681a in /home/djn/anaconda3-cuda10/bin/python)
frame #13: _PyFunction_FastCallDict + 0x1d5 (0x55790a7f7635 in /home/djn/anaconda3-cuda10/bin/python)
frame #14: _PyObject_Call_Prepend + 0x63 (0x55790a815e53 in /home/djn/anaconda3-cuda10/bin/python)
frame #15: + 0x16b97a (0x55790a84d97a in /home/djn/anaconda3-cuda10/bin/python)
frame #16: _PyObject_FastCallKeywords + 0x128 (0x55790a84e588 in /home/djn/anaconda3-cuda10/bin/python)
frame #17: _PyEval_EvalFrameDefault + 0x4a96 (0x55790a8b1e86 in /home/djn/anaconda3-cuda10/bin/python)
frame #18: _PyEval_EvalCodeWithName + 0x2f9 (0x55790a7f6539 in /home/djn/anaconda3-cuda10/bin/python)
frame #19: _PyFunction_FastCallKeywords + 0x387 (0x55790a845f57 in /home/djn/anaconda3-cuda10/bin/python)
frame #20: _PyEval_EvalFrameDefault + 0x4b39 (0x55790a8b1f29 in /home/djn/anaconda3-cuda10/bin/python)
frame #21: _PyFunction_FastCallKeywords + 0xfb (0x55790a845ccb in /home/djn/anaconda3-cuda10/bin/python)
frame #22: _PyEval_EvalFrameDefault + 0x416 (0x55790a8ad806 in /home/djn/anaconda3-cuda10/bin/python)
frame #23: _PyEval_EvalCodeWithName + 0x2f9 (0x55790a7f6539 in /home/djn/anaconda3-cuda10/bin/python)
frame #24: PyEval_EvalCodeEx + 0x44 (0x55790a7f7424 in /home/djn/anaconda3-cuda10/bin/python)
frame #25: PyEval_EvalCode + 0x1c (0x55790a7f744c in /home/djn/anaconda3-cuda10/bin/python)
frame #26: + 0x22ab74 (0x55790a90cb74 in /home/djn/anaconda3-cuda10/bin/python)
frame #27: PyRun_FileExFlags + 0xa1 (0x55790a916eb1 in /home/djn/anaconda3-cuda10/bin/python)
frame #28: PyRun_SimpleFileExFlags + 0x1c3 (0x55790a9170a3 in /home/djn/anaconda3-cuda10/bin/python)
frame #29: + 0x236195 (0x55790a918195 in /home/djn/anaconda3-cuda10/bin/python)
frame #30: _Py_UnixMain + 0x3c (0x55790a9182bc in /home/djn/anaconda3-cuda10/bin/python)
frame #31: __libc_start_main + 0xf0 (0x7fe717c99840 in /lib/x86_64-linux-gnu/libc.so.6)
frame #32: + 0x1db062 (0x55790a8bd062 in /home/djn/anaconda3-cuda10/bin/python)

It seems that the PyTorch version used to save the pretrained model is newer than my PyTorch version (1.6.0+cu101, per the environment prerequisites).

Can you offer some suggestions or solutions to this problem? Thanks a lot.

Question about the data format

Hi, thank you very much for your reply this afternoon; I'm sorry to trouble you again.
I want to use optical flow in VQE; however, the optical-flow network takes RGB channels as input instead of the Y channel. So I think I should make a YUV-channel LMDB instead of a Y-channel LMDB, and then convert the data to RGB before feeding it to the optical-flow network. The only change to the original code would then be to set only_y=False in the import_yuv function; is that right?
BTW, there are several algorithms for converting data from YUV420 to RGB. Which one do you think is better?

1. R = Y + 1.402 * (V - 128)
   G = Y - 0.34413 * (U - 128) - 0.71414 * (V - 128)
   B = Y + 1.772 * (U - 128)

2. out_img = np.matmul(img, [[0.00456621,  0.00456621,  0.00456621],
                             [0.00791071, -0.00153632,  0         ],
                             [0,          -0.00318811,  0.00625893]]) * 255.0 + [-276.836, 135.576, -222.921]

The second one is from BasicSR.
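For reference, a sketch of the conversion via OpenCV (an alternative to the two hand-written formulas above; cv2.COLOR_YUV2RGB follows the full-range BT.601 convention, matching formula 1):

import cv2
import numpy as np

def yuv420_to_rgb(y, u, v):
    """y: (H, W); u, v: (H//2, W//2); all uint8."""
    h, w = y.shape
    u_up = cv2.resize(u, (w, h), interpolation=cv2.INTER_LINEAR)  # upsample chroma
    v_up = cv2.resize(v, (w, h), interpolation=cv2.INTER_LINEAR)
    yuv = np.stack([y, u_up, v_up], axis=2)  # (H, W, 3)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)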

WARNING: --SEIDecodedPictureHash is now disabled by default

When I run python unzip_n_compress.py on CentOS, I get the following output:

14/18: compressing BasketballDrill_832x480_500...

15/18: compressing BQTerrace_1920x1080_600...

16/18: compressing FourPeople_1280x720_600...

17/18: compressing PeopleOnStreet_2560x1600_150...

18/18: compressing BlowingBubbles_416x240_500...
******************************************************************
** WARNING: --SEIDecodedPictureHash is now disabled by default. **
**          Automatic verification of decoded pictures by a     **
**          decoder requires this option to be enabled.         **
******************************************************************
******************************************************************
** WARNING: --SEIDecodedPictureHash is now disabled by default. **
**          Automatic verification of decoded pictures by a     **
**          decoder requires this option to be enabled.         **
******************************************************************
******************************************************************
** WARNING: --SEIDecodedPictureHash is now disabled by default. **
**          Automatic verification of decoded pictures by a     **

Is this correct?

docker environment

I'm trying to build the DCNv2 ops but it always fails. Could you give me a Docker image file?

question about create_lmdb

Hi, thank you for sharing the project. I want to know why, when creating the LMDB, the ground truth uses only the center frame of each sequence, while the LQ data uses all frames.

A question about the test video format

Sorry to bother you; I have a question about the test video format. This project enhances the Y channel, but how can the enhanced video be converted to RGB?

When generating data using the code from Baidu Netdisk, a warning appears

Thank you for sharing the project. When I use the code from Baidu Netdisk to generate data, the following appears:
generating cfg...
18 videos found.

done.
1/18: compressing BQMall_832x480_600...
2/18: compressing BQSquare_416x240_600...
3/18: compressing BQTerrace_1920x1080_600...
4/18: compressing BasketballDrill_832x480_500...
5/18: compressing BasketballDrive_1920x1080_500...
6/18: compressing BasketballPass_416x240_500...
7/18: compressing BlowingBubbles_416x240_500...
8/18: compressing Cactus_1920x1080_500...
9/18: compressing FourPeople_1280x720_600...
10/18: compressing Johnny_1280x720_600...
11/18: compressing Kimono_1920x1080_240...
12/18: compressing KristenAndSara_1280x720_600...
13/18: compressing ParkScene_1920x1080_240...
14/18: compressing PartyScene_832x480_500...
15/18: compressing PeopleOnStreet_2560x1600_150...
16/18: compressing RaceHorses_416x240_300...
17/18: compressing RaceHorses_832x480_300...
18/18: compressing Traffic_2560x1600_150...


******************************************************************
** WARNING: --SEIDecodedPictureHash is now disabled by default. **
**          Automatic verification of decoded pictures by a     **
**          decoder requires this option to be enabled.         **
******************************************************************

(this banner repeats, partly interleaved, once per compressed video)

How can I solve this? Does it have any impact on data usage?

Dropbox dataset not available

The dataset on Dropbox is not available. However, I am not sure how to download the Baidu Pan dataset directly to my server.

Some questions about deform_conv

Hello, I have run into some questions about deformable convolution and would like to ask for your advice.
I noticed that in net_stdf.py, the input to deform_conv is the Y channel, i.e., single-channel information.
In this case, if the input is an RGB image, which parameters need to be modified?
Looking forward to your reply; thank you very much!

Questions about saving the results

Hello, when testing with the pretrained model you released, I found that only the Y-channel results and metrics are produced.
In this case, how should I modify the code to save the result images in PNG format?
I hope you can help; many thanks.

CUDA out of memory when testing one video

Hello, when I test on a single video, 416x240 and 832x480 YUV sequences work fine, but higher-resolution YUV sequences run out of GPU memory. I saw that frames are cropped to 128x128 patches during training, but no cropping is done when testing a video, in order to better display the enhancement result. May I ask how you solve this problem?
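One common workaround is patch-wise (tiled) inference; below is a sketch under the assumption that the model maps a stacked-frame input to the enhanced center frame (this is not part of test_one_video.py):

import torch

@torch.no_grad()
def enhance_tiled(model, frames, tile=256):
    """frames: (1, C, H, W) stacked input frames; assumes H and W are multiples of tile."""
    _, _, h, w = frames.shape
    out = torch.zeros(1, 1, h, w, device=frames.device)
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            patch = frames[..., top:top + tile, left:left + tile]
            out[..., top:top + tile, left:left + tile] = model(patch)
    return out

In practice one would overlap neighboring tiles and blend the seams, since deformable convolution uses context beyond the patch border.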

CUDA out of memory

Hello

I'm trying to train MFQEv2_R3_enlarge300x, but it fails near iteration 5000 with the following error:

RuntimeError: CUDA out of memory. Tried to allocate 1000.00 MiB (GPU 0; 7.79 GiB total capacity; 5.21 GiB already allocated; 768.88 MiB free; 5.95 GiB reserved in total by PyTorch)

I have an MSI RTX 2080 VENTUS V2 with 8 GB of VRAM.

Is there an option to limit the amount of memory used?

Problem with dataloader

Traceback (most recent call last):
  File "train.py", line 540, in <module>
    main()
  File "train.py", line 513, in main
    train_data = tra_prefetcher.next()
  File "/home/mkhan/generic_loss/STDF-PyTorch/utils/file_io.py", line 271, in next
    return next(self.loader)
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/opt/conda/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
numpy.AxisError: Caught AxisError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/mkhan/generic_loss/STDF-PyTorch/dataset/mfqev2.py", line 102, in __getitem__
    img_lq = _bytes2img(img_bytes)  # (H W 1)
  File "/home/mkhan/generic_loss/STDF-PyTorch/dataset/mfqev2.py", line 13, in _bytes2img
    img = np.expand_dims(cv2.imdecode(img_np, cv2.IMREAD_GRAYSCALE), 2)  # (H W 1)
  File "<__array_function__ internals>", line 5, in expand_dims
  File "/opt/conda/lib/python3.8/site-packages/numpy/lib/shape_base.py", line 597, in expand_dims
    axis = normalize_axis_tuple(axis, out_ndim)
  File "/opt/conda/lib/python3.8/site-packages/numpy/core/numeric.py", line 1385, in normalize_axis_tuple
    axis = tuple([normalize_axis_index(ax, ndim, argname) for ax in axis])
  File "/opt/conda/lib/python3.8/site-packages/numpy/core/numeric.py", line 1385, in <listcomp>
    axis = tuple([normalize_axis_index(ax, ndim, argname) for ax in axis])
numpy.AxisError: axis 2 is out of bounds for array of dimension 1

Question about training

Hi, I have run into some problems during training.

  1. As mentioned in the last issue, I made a YUV LMDB instead of a Y LMDB. However, training on it takes about twice as long as the original method. I think this may be caused by the cv2.imdecode() function: when making the LMDB, cv2.imencode() is used with compression level 1, so the data must be decoded when read back from the LMDB. I would like to skip cv2.imencode() when making the LMDB, i.e., save the data without compression, so that cv2.imdecode() is no longer needed during training, which may save some time. Will this influence the final result? (See the sketch after this list.)
  2. BTW, you mentioned in the Q&A that you enlarge the dataset by setting sampling index = target index % dataset length. I don't understand it; could you please tell me where it is in the code?
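On point 1, a sketch of storing raw bytes without PNG encoding (an illustration, not the repo's create_lmdb code; the array shape must then be recorded in the meta info, since np.frombuffer returns a flat array):

import numpy as np

# writing: skip cv2.imencode and store raw bytes
img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)  # e.g., a YUV patch
buf = img.tobytes()
# txn.put(key.encode(), buf)

# reading: skip cv2.imdecode and restore from the recorded shape
img_back = np.frombuffer(buf, dtype=np.uint8).reshape(128, 128, 3)
assert (img == img_back).all()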

flops of deformable convolution

Thank you for releasing the project!
Is there any reference document or code for calculating the FLOPs of deformable convolution?
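As a back-of-envelope sketch (a rough, assumption-laden estimate, not an official reference): DCNv2 costs roughly a regular k x k convolution, plus the convolution that predicts the offsets and masks, plus bilinear sampling of the k*k locations:

def dcn_v2_flops(h, w, c_in, c_out, k=3, deform_groups=8):
    conv = h * w * c_out * c_in * k * k                               # main aggregation
    offset_mask = h * w * (3 * k * k * deform_groups) * c_in * k * k  # offset/mask branch
    sampling = h * w * c_in * k * k * 4                               # bilinear interpolation
    return 2 * (conv + offset_mask) + sampling                        # x2: multiply + add

print(f"{dcn_v2_flops(480, 832, 64, 64) / 1e9:.1f} GFLOPs")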

help

I can't install DCNv2; my environment is CUDA 10.1, cuDNN 7.6.5, GCC 7.5.0, Python 3.7.6, PyTorch 1.6.0.
The error is:
FAILED: /home/zwhua/STDF-PyTorch/ops/dcn/build/temp.linux-x86_64-3.7/src/deform_conv_cuda_kernel.o
/usr/local/cuda-10.1/bin/nvcc -I/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include -I/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/TH -I/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/zwhua/anaconda3/envs/stdf/include/python3.7m -c -c /home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu -o /home/zwhua/STDF-PyTorch/ops/dcn/build/temp.linux-x86_64-3.7/src/deform_conv_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=deform_conv_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=sm_61 -std=c++14
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu: In lambda function:
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:258:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:1: note: declared here
DeprecatedTypeProperties & type() const {
^ ~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:258:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here
inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) {
^~~~~~~~~~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu: In lambda function:
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:352:46: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:1: note: declared here
DeprecatedTypeProperties & type() const {
^ ~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:352:101: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here
inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) {
^~~~~~~~~~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu: In lambda function:
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:450:46: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:1: note: declared here
DeprecatedTypeProperties & type() const {
^ ~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:450:101: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here
inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) {
^~~~~~~~~~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu: In lambda function:
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:780:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:1: note: declared here
DeprecatedTypeProperties & type() const {
^ ~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:780:100: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here
inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) {
^~~~~~~~~~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu: In lambda function:
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:812:46: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:1: note: declared here
DeprecatedTypeProperties & type() const {
^ ~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:812:101: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here
inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) {
^~~~~~~~~~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu: In lambda function:
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:845:46: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:268:1: note: declared here
DeprecatedTypeProperties & type() const {
^ ~~
/home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda_kernel.cu:845:101: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated: passing at::DeprecatedTypeProperties to an AT_DISPATCH macro is deprecated, pass an at::ScalarType instead [-Wdeprecated-declarations]
AT_DISPATCH_FLOATING_TYPES_AND_HALF(
^
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/Dispatch.h:66:1: note: declared here
inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties& t) {
^~~~~~~~~~~
/usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/include/c++/7/bits/basic_string.tcc:578:28: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/include/c++/7/bits/basic_string.h:5042:20: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/include/c++/7/bits/basic_string.h:5063:24: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char16_t*; _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’
/usr/include/c++/7/bits/basic_string.tcc:656:134: required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/7/bits/basic_string.h:6688:95: required from here
/usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char16_t; _Traits = std::char_traits<char16_t>; _Alloc = std::allocator<char16_t>]’ without object
__p->_M_set_sharable();
~~~~~~~~~^~
/usr/include/c++/7/bits/basic_string.tcc: In instantiation of ‘static std::basic_string<_CharT, _Traits, _Alloc>::_Rep* std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_S_create(std::basic_string<_CharT, _Traits, _Alloc>::size_type, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’:
/usr/include/c++/7/bits/basic_string.tcc:578:28: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&, std::forward_iterator_tag) [with _FwdIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/include/c++/7/bits/basic_string.h:5042:20: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct_aux(_InIterator, _InIterator, const _Alloc&, std::__false_type) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/include/c++/7/bits/basic_string.h:5063:24: required from ‘static _CharT* std::basic_string<_CharT, _Traits, _Alloc>::_S_construct(_InIterator, _InIterator, const _Alloc&) [with _InIterator = const char32_t*; _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’
/usr/include/c++/7/bits/basic_string.tcc:656:134: required from ‘std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*, std::basic_string<_CharT, _Traits, _Alloc>::size_type, const _Alloc&) [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>; std::basic_string<_CharT, _Traits, _Alloc>::size_type = long unsigned int]’
/usr/include/c++/7/bits/basic_string.h:6693:95: required from here
/usr/include/c++/7/bits/basic_string.tcc:1067:16: error: cannot call member function ‘void std::basic_string<_CharT, _Traits, _Alloc>::_Rep::_M_set_sharable() [with _CharT = char32_t; _Traits = std::char_traits<char32_t>; _Alloc = std::allocator<char32_t>]’ without object
[2/2] c++ -MMD -MF /home/zwhua/STDF-PyTorch/ops/dcn/build/temp.linux-x86_64-3.7/src/deform_conv_cuda.o.d -pthread -B /home/zwhua/anaconda3/envs/stdf/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include -I/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/TH -I/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/zwhua/anaconda3/envs/stdf/include/python3.7m -c -c /home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda.cpp -o /home/zwhua/STDF-PyTorch/ops/dcn/build/temp.linux-x86_64-3.7/src/deform_conv_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=deform_conv_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
from /home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:7,
from /home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
from /home/zwhua/STDF-PyTorch/ops/dcn/src/deform_conv_cuda.cpp:4:
/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)

ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1515, in _run_ninja_build
env=env)
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "setup.py", line 12, in
cmdclass={'build_ext': BuildExtension})
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/setuptools/init.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 649, in build_extensions
build_ext.build_extensions(self)
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 202, in build_extension
_build_ext.build_extension(self, ext)
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
depends=ext.depends)
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 478, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1233, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "/home/zwhua/anaconda3/envs/stdf/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1529, in _run_ninja_build
raise RuntimeError(message)
RuntimeError: Error compiling objects for extension
thanks

Details about training

Hi, thank you very much for your excellent work. I have some questions about the training:

  1. There is only one pretrained model provided. Which QP does it correspond to? Is it QP=37?
  2. It seems that the result you provided (QP=37?) is a little lower than the paper's: 0.778 dB vs. 0.83 dB. Is this caused by the difference in the training dataset?
  3. In the STDF paper, when comparing with other video quality enhancement methods, the authors did not retrain them on the same dataset; do you think that may be unfair?

How to visualize the enhanced YUV data

Thanks for your work! After inference, the Y channel has been enhanced. I want to know how to visualize the enhanced Y channel together with the original U and V data by converting them to RGB.

About batch size

Hello Ryan, why is the batch size 32? I notice that if I use 4 GPUs, only about 2-3 GB of memory is used on each GPU. Can I make the batch size or the patch size (e.g., 256x256) larger, and will it decrease the performance?
