EDVR's People

Contributors

cugtyt, henrymai, my-zhu, wenlongzhang0517, xinntao, zenjieli, zestloveheart

EDVR's Issues

About create_lmdb_mp.py

  1. I created my own dataset with 200 folders (including val, about 120 GB in total), but I got an error saying I didn't have enough memory. What hardware configuration is required? (See the low-memory sketch after this list.)
  2. Did you test your code with VMAF? Do models trained on RGB versus YUV420 video give different VMAF results?
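On question 1: a likely cause of the out-of-memory error is holding many images in memory before writing. A minimal low-memory sketch (this is not the repo's create_lmdb_mp.py; keys_and_paths, the map_size, and the commit interval are illustrative assumptions) writes one image at a time and commits periodically:

    import lmdb

    # keys_and_paths: assumed list of (key, image_path) pairs.
    env = lmdb.open('train_GT.lmdb', map_size=200 * 1024**3)  # map_size > dataset size
    txn = env.begin(write=True)
    for i, (key, img_path) in enumerate(keys_and_paths):
        with open(img_path, 'rb') as f:
            txn.put(key.encode('ascii'), f.read())  # store the raw encoded bytes
        if (i + 1) % 500 == 0:  # commit periodically to bound memory use
            txn.commit()
            txn = env.begin(write=True)
    txn.commit()
    env.close()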

About training

In the training .yml file, what does "ft_tsa_only" mean?
When I don't use it, I get the warning "WARNING: Offset mean is XX, larger than 100",
and the loss is larger than when training with "ft_tsa_only". (A guess at its semantics is sketched below.)
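For what it's worth, flags named like ft_tsa_only usually gate a two-stage schedule in which only one module is fine-tuned first. If EDVR's flag works that way, only the TSA fusion parameters would be trainable during the first stage. This is an unconfirmed guess, sketched against the tsa_fusion module name that appears in the model dump further down:

    # Unconfirmed guess at "ft_tsa_only" (not verified against the repo):
    # during an initial stage, freeze everything except the TSA fusion module.
    for name, param in model.named_parameters():
        param.requires_grad = 'tsa_fusion' in name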

Questions about network details

Great work! After reading the model code and the paper, I have a few questions.

1. Is there a rule of thumb for choosing the activation layer? For example, most SR networks use ReLU, DBPN/RBPN use PReLU, and ESRGAN and EDVR use LeakyReLU. What is the reasoning behind this? Or, for future experiments, can I skip the question entirely and treat LeakyReLU as the best default?
2. In ResidualBlock_noBN, the gain from Kaiming initialization of conv1/conv2 also appeared in the ESRGAN supplementary material, if I remember correctly. Is this likewise a zero-cost performance improvement that can simply always be used?
3. To scale up the parameter count, one option is widening the channels, which is easy because every block is aligned to nf. What about adding blocks? In EDVR's arguments the numbers of front and back residual blocks are configurable, but the depths of the PCD and attention-fusion modules are hard-coded. If I increase depth to raise the parameter count, compute, and metrics, does that mean: (A) the depth of these two modules can be ignored because deepening them is not cost-effective; (B) if these two modules should also be deepened, which layers should grow; or (C) should I only deepen the back RBs and leave the front RBs and the two modules alone? Also, between the ablation configurations in Table 4 and the final competition configuration, did only the channel width and the depth of the post-fusion RBs change?
4. Compared with the self-ensemble commonly used in SISR (8x compute), EDVR's test script uses only 4x, omitting the 90-degree rotations. What is the reasoning? Is the gain so small that it is not worth another 2x compute?
5. In Table 5, EDVR takes first place in all four tracks by an overwhelming margin, which deserves real credit. The second-place method, from the paper "Adapting Image Super-Resolution State-of-the-arts and Learning Multi-model Ensemble for Video Super-Resolution" (findable on arXiv), has a very simple, typical leaderboard-chasing idea: first, feed neighboring frames into SISR models (RCAN, RDN) by simply widening the input channels of the first conv, which already gains 1 dB and beats DUF; second, ensemble three models for a further 0.12 dB (a small gain, probably less than self-ensemble would give and not cost-effective, so let's set it aside). I would like to see a GFLOPS comparison between DUF, standard RDN, RCAN, and EDVR's competition configuration.
6. Pre-upsampling has long been considered a poor practice in SISR, yet EDVR bilinearly upsamples the center LR frame and adds it to the output as a residual. Did you run an ablation without this step? I would expect the residual formulation to converge faster, but not necessarily to better accuracy. (Is its purpose similar to "add_mean" and "sub_mean" in EDSR-Pytorch? See the sketch after this list.)
7. On training details: this work uses the Charbonnier loss, which I believe first appeared in LapSRN, after which L1 loss became the common choice. What is the reasoning here? Also, you train a shallow network to initialize the deep one; could you give the details, i.e. which layers' weights are transferred, and which layers change depth between the shallow and deep variants?
8. About the PCD alignment module:
(1) In forward (lines 108 and 117), what is the purpose of the factor of 2 applied to the concatenated offsets in offset_conv2? (The later TSA layer seems to have a similar multiplication.)
(2) Why not exclude the center frame (t = t + i) and align only the 4 neighbors? Can the l1_fea input be used directly?

Thanks!
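Regarding question 6, the residual-over-bilinear-upsampling idea is only a few lines; a simplified sketch (x_center and trunk_out are placeholder names, not the repo's variables):

    import torch.nn.functional as F

    # Predict a residual on top of a bilinearly upsampled center frame,
    # so the network only has to learn the high-frequency detail.
    def add_global_residual(trunk_out, x_center, scale=4):
        base = F.interpolate(x_center, scale_factor=scale,
                             mode='bilinear', align_corners=False)
        return trunk_out + base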

..

Tianchi is currently running a video super-resolution competition; you could take part and pick up first place while you're at it.

Ask about the video input size

Hello, I tried your models on my own images but got an error; after I changed the input image size, it worked. So I want to know: must the input size be 1280 × 720 when using the models you provide?

An error occurred while running the test code

An error occurred while running the test code:
19-06-12 21:59:26.515 - INFO: Data: Vid4 - ../datasets/Vid4/BIx4/*
19-06-12 21:59:26.515 - INFO: Padding mode: new_info
19-06-12 21:59:26.515 - INFO: Model path: ../experiments/pretrained_models/EDVR_Vimeo90K_SR_L.pth
19-06-12 21:59:26.515 - INFO: Save images: True
19-06-12 21:59:26.515 - INFO: Flip Test: False
error in modulated_deformable_im2col_cuda: CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
......
File "/home/wxy//EDVR/codes/models/modules/DCNv2/dcn_v2.py", line 135, in forward
if offset_mean > 100:
RuntimeError: CUDA error: an illegal memory access was encountered

The error message claims my CUDA driver version is insufficient for the CUDA runtime version, but as far as I can tell my CUDA installation works properly.
My PyTorch version is 1.0.1, my CUDA version is 9.0, and make.sh ran successfully. Looking forward to your reply!
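One quick, generic way to check for a build/runtime mismatch (not specific to this repo) is to print what PyTorch itself was built against:

    import torch

    print(torch.__version__)          # installed PyTorch build
    print(torch.version.cuda)         # CUDA version PyTorch was compiled with
    print(torch.cuda.is_available())  # False often indicates a driver/runtime mismatch
    print(torch.cuda.get_device_name(0))  # fails loudly if the driver is unusable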

DUF Training

Hi, have you ever trained DUF from scratch? I found it difficult to train DUF from scratch on the Vimeo90K dataset.

Availability of output frames for REDS4 clips

Hi Xintao,

Would you be able to share the output frames generated by EDVR on the REDS4 clips for the video deblurring task? That would be very helpful: since you already have them, it would save me the time of running the inference myself.

Thanks,
Touqeer

_ZN2at19UndefinedTensorImpl10_singletonE

Hi, I'm on Ubuntu 18.04 with Python 3.6 and PyTorch 1.1.0. When I run python3 test_Vid4_REDS4_with_GT.py, I get the following error:

ImportError: /home/lichen/EDVR/codes/models/modules/DCNv2/_ext.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE

What can I do?

Test image size

Hi, when I test my own images, whose dimensions are not divisible by 4, the feature sizes in the PCD alignment module differ between the reference frame and the neighboring frames. How can I solve this problem? (A padding workaround is sketched below.)
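A common workaround (a generic sketch, not code from this repo) is to pad the input so its height and width are divisible by 4, run the model, then crop the upscaled output back to size:

    import torch.nn.functional as F

    def pad_to_multiple(x, multiple=4):
        # Pad an NCHW tensor on the right/bottom so H and W divide by `multiple`.
        _, _, h, w = x.shape
        pad_h = (multiple - h % multiple) % multiple
        pad_w = (multiple - w % multiple) % multiple
        return F.pad(x, (0, pad_w, 0, pad_h), mode='replicate'), (h, w)

    # Usage (model and lq are placeholders): pad, run, crop the 4x output.
    # lq, (h, w) = pad_to_multiple(lq)
    # out = model(lq)[..., :h * 4, :w * 4]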

About the loss function

Hi, when I train with CharbonnierLoss the loss value is extremely large, but when I train with L1 loss it is normal. What causes this? Could you give me some advice? (A minimal Charbonnier sketch follows.)
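For reference, the Charbonnier loss is a smooth approximation of L1; if an implementation sums over all pixels instead of averaging (the reduction is the usual suspect for huge loss values, though I haven't checked which this repo uses), its magnitude will dwarf an element-wise-mean L1. A minimal sketch:

    import torch

    def charbonnier_loss(pred, target, eps=1e-6, reduction='mean'):
        # sqrt(diff^2 + eps^2): differentiable everywhere, behaves like L1.
        loss = torch.sqrt((pred - target) ** 2 + eps * eps)
        return loss.sum() if reduction == 'sum' else loss.mean()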

Another problem in VSR

What about subtitle regions?
A subtitle region has no motion, yet it changes instantaneously between two frames.
[Example figure: average of 5 frames; the head region can be aligned, but the subtitle region cannot.]

We can't label subtitle regions, but the model should be robust to this kind of situation. (Another such case is anime videos (2D painting), where SISR methods are simply used for SR.)
Will subtitle regions included in the network's input patches influence training?

This problem stems from the gap between the experimental environment of academic research and practical application: idealized experimental datasets contain no subtitles.

Segmentation fault (core dumped) when training on 4 GPUs

[root@A01-R04-I221-51 codes]# python3 train.py -opt options/train/train_EDVR_woTSA_M_vimeo90k.yml
export CUDA_VISIBLE_DEVICES=0,1,2,3
Disabled distributed training.
Path already exists. Rename it to [/home/zzt/EDVR/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_vimeo90k_LrCAR4S_archived_190628-144447]
19-06-28 14:44:47.113 - INFO: name: 001_EDVRwoTSA_scratch_lr4e-4_600k_vimeo90k_LrCAR4S
use_tb_logger: True
model: VideoSR_base
distortion: sr
scale: 4
gpu_ids: [0, 1, 2, 3]
datasets:[
train:[
name: Vimeo90K
mode: Vimeo90K
interval_list: [1]
random_reverse: False
border_mode: False
dataroot_GT: /home/zzt/EDVR/datasets/vimeo90k_train_GT.lmdb
dataroot_LQ: /home/zzt/EDVR/datasets/vimeo90k_train_LR7frames.lmdb
cache_keys: Vimeo90K_train_keys.pkl
N_frames: 5
use_shuffle: True
n_workers: 3
batch_size: 32
GT_size: 256
LQ_size: 64
use_flip: True
use_rot: True
color: RGB
phase: train
scale: 4
data_type: lmdb
]
]
network_G:[
which_model_G: EDVR
nf: 64
nframes: 5
groups: 8
front_RBs: 5
back_RBs: 10
predeblur: False
HR_in: False
w_TSA: False
scale: 4
]
path:[
pretrain_model_G: None
strict_load: True
resume_state: None
root: /home/zzt/EDVR
experiments_root: /home/zzt/EDVR/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_vimeo90k_LrCAR4S
models: /home/zzt/EDVR/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_vimeo90k_LrCAR4S/models
training_state: /home/zzt/EDVR/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_vimeo90k_LrCAR4S/training_state
log: /home/zzt/EDVR/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_vimeo90k_LrCAR4S
val_images: /home/zzt/EDVR/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_vimeo90k_LrCAR4S/val_images
]
train:[
lr_G: 0.0004
lr_scheme: CosineAnnealingLR_Restart
beta1: 0.9
beta2: 0.99
niter: 600000
warmup_iter: -1
T_period: [150000, 150000, 150000, 150000]
restarts: [150000, 300000, 450000]
restart_weights: [1, 1, 1]
eta_min: 1e-07
pixel_criterion: cb
pixel_weight: 1.0
val_freq: 2000.0
manual_seed: 0
]
logger:[
print_freq: 1
save_checkpoint_freq: 2000.0
]
is_train: True
dist: False

19-06-28 14:44:50.625 - INFO: Random seed: 0
19-06-28 14:44:50.628 - INFO: Temporal augmentation interval list: [1], with random reverse is False.
19-06-28 14:44:50.629 - INFO: Using cache keys: Vimeo90K_train_keys.pkl
19-06-28 14:44:50.629 - INFO: Using cache keys - Vimeo90K_train_keys.pkl.
19-06-28 14:44:50.638 - INFO: Dataset [Vimeo90KDataset - Vimeo90K] is created.
19-06-28 14:44:50.638 - INFO: Number of train images: 64,612, iters: 2,020
19-06-28 14:44:50.638 - INFO: Total epochs needed: 298 for iters 600,000
19-06-28 14:44:53.798 - INFO: Network G structure: DataParallel - EDVR, with parameters: 2,996,259
19-06-28 14:44:53.799 - INFO: EDVR(
(conv_first): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(feature_extraction): Sequential(
(0): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(1): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(2): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(3): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(4): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
)
(fea_L2_conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(fea_L2_conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fea_L3_conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(fea_L3_conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pcd_align): PCD_Align(
(L3_offset_conv1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(L3_offset_conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(L3_dcnpack): DCN_sep(
(conv_offset_mask): Conv2d(64, 216, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(L2_offset_conv1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(L2_offset_conv2): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(L2_offset_conv3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(L2_dcnpack): DCN_sep(
(conv_offset_mask): Conv2d(64, 216, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(L2_fea_conv): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(L1_offset_conv1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(L1_offset_conv2): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(L1_offset_conv3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(L1_dcnpack): DCN_sep(
(conv_offset_mask): Conv2d(64, 216, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(L1_fea_conv): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(cas_offset_conv1): Conv2d(128, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(cas_offset_conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(cas_dcnpack): DCN_sep(
(conv_offset_mask): Conv2d(64, 216, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(lrelu): LeakyReLU(negative_slope=0.1, inplace)
)
(tsa_fusion): Conv2d(320, 64, kernel_size=(1, 1), stride=(1, 1))
(recon_trunk): Sequential(
(0): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(1): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(2): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(3): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(4): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(5): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(6): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(7): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(8): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(9): ResidualBlock_noBN(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
)
(upconv1): Conv2d(64, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(upconv2): Conv2d(64, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pixel_shuffle): PixelShuffle(upscale_factor=2)
(HRconv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv_last): Conv2d(64, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(lrelu): LeakyReLU(negative_slope=0.1, inplace)
)
19-06-28 14:44:53.801 - INFO: Model [VideoSRBaseModel] is created.
19-06-28 14:44:53.802 - INFO: Start training from epoch: 0, iter: 0
Segmentation fault (core dumped)

Has anyone met this error?

When I run the code I get this error; it may be a PyTorch version issue. Can anyone help me resolve it? Thank you very much.
I use Anaconda3; my PyTorch version is 1.0.1.
[screenshot of the error]

can anyone meet this error("段错误")

when I run test_Vid4_REDS4_with_GT.py, I got a error . Is anyone can help me to resolve this, thank you very much.
I use anaconda3, python 3.6.7 pytorch version is 1.1.0.

python ./test_Vid4_REDS4_with_GT.py
19-05-30 15:01:21.580 - INFO: Data: Vid4 - ../datasets/Vid4/BIx4/*
19-05-30 15:01:21.583 - INFO: Padding mode: new_info
19-05-30 15:01:21.583 - INFO: Model path: ../experiments/pretrained_models/EDVR_Vimeo90K_SR_L.pth
19-05-30 15:01:21.583 - INFO: Save images: True
19-05-30 15:01:21.584 - INFO: Flip Test: False
段错误 (Segmentation fault)

I found that line 139 ("model_output = model(imgs_in)") causes the error, but I don't know why.

JSON file

Hi, thanks for sharing the training code.
Where is the option JSON file referenced by this line?
parser.add_argument('-opt', type=str, help='Path to option JSON file.')

What does it look like?

IndexError: list index out of range

Hello
I'm trying to use the test_Vid4_REDS4_with_GT.py script with a folder containing 98 PNG files (named from 00000001.png to 00000099.png).
The PNG resolution is 320 × 240.

When I run the script, I get this error:

  File "test_Vid4_REDS4_with_GT.py", line 276, in <module>
    main()
  File "test_Vid4_REDS4_with_GT.py", line 217, in main
    GT = np.copy(img_GT_l[img_idx])
IndexError: list index out of range

Looking at the results, I can see it processed the first PNG file but not the others.

Thanks for helping ^^

Error on Vid4 dataset

This error occurs with the testing codes:

19-05-28 13:52:03.735 - INFO: Data: Vid4 - ../datasets/Vid4/BIx4/*
19-05-28 13:52:03.736 - INFO: Padding mode: new_info
19-05-28 13:52:03.736 - INFO: Model path: ../experiments/pretrained_models/EDVR_Vimeo90K_SR_L.pth
19-05-28 13:52:03.736 - INFO: Save images: True
19-05-28 13:52:03.736 - INFO: Flip Test: False
Traceback (most recent call last):
  File "test_Vid4_REDS4_with_GT.py", line 275, in <module>
    main()
  File "test_Vid4_REDS4_with_GT.py", line 228, in main
    crt_psnr = util.calculate_psnr(cropped_output * 255, cropped_GT * 255)
  File "EDVR-master\codes\utils\util.py", line 138, in calculate_psnr
    mse = np.mean((img1 - img2)**2)
ValueError: operands could not be broadcast together with shapes (576,720) (144,180)
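Worth noting: (576, 720) is exactly 4x (144, 180), so the output and the GT appear to be at different scales (e.g. LR frames loaded as GT). A hypothetical early check (not in the repo) would make this failure mode obvious:

    # Hypothetical guard: fail with a clear message when output and GT
    # resolutions disagree, e.g. when LR frames are used as GT by mistake.
    if cropped_output.shape != cropped_GT.shape:
        raise ValueError(
            f'shape mismatch: output {cropped_output.shape} vs GT {cropped_GT.shape}; '
            'check the GT/LQ dataset paths and the scale factor')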

Not converging, why?

19-07-01 09:44:55.725 - INFO: <epoch:221, iter: 448,100, lr:(2.583e-07,)>l_pix: 8.0168e+04
19-07-01 09:45:46.968 - INFO: <epoch:221, iter: 448,200, lr:(2.421e-07,)>l_pix: 6.3542e+04
19-07-01 09:46:39.644 - INFO: <epoch:222, iter: 448,300, lr:(2.267e-07,)>l_pix: 6.8591e+04
19-07-01 09:47:30.610 - INFO: <epoch:222, iter: 448,400, lr:(2.123e-07,)>l_pix: 6.2289e+04
19-07-01 09:48:21.469 - INFO: <epoch:222, iter: 448,500, lr:(1.987e-07,)>l_pix: 6.7909e+04
19-07-01 09:49:12.473 - INFO: <epoch:222, iter: 448,600, lr:(1.859e-07,)>l_pix: 5.4850e+04
19-07-01 09:50:03.302 - INFO: <epoch:222, iter: 448,700, lr:(1.741e-07,)>l_pix: 7.6995e+04
19-07-01 09:50:54.809 - INFO: <epoch:222, iter: 448,800, lr:(1.631e-07,)>l_pix: 6.6559e+04
19-07-01 09:51:45.599 - INFO: <epoch:222, iter: 448,900, lr:(1.531e-07,)>l_pix: 5.5888e+04
19-07-01 09:52:36.882 - INFO: <epoch:222, iter: 449,000, lr:(1.439e-07,)>l_pix: 5.7516e+04
19-07-01 09:53:27.801 - INFO: <epoch:222, iter: 449,100, lr:(1.355e-07,)>l_pix: 6.3166e+04
19-07-01 09:54:18.680 - INFO: <epoch:222, iter: 449,200, lr:(1.281e-07,)>l_pix: 5.9250e+04
19-07-01 09:55:09.492 - INFO: <epoch:222, iter: 449,300, lr:(1.215e-07,)>l_pix: 7.9755e+04
19-07-01 09:56:00.916 - INFO: <epoch:222, iter: 449,400, lr:(1.158e-07,)>l_pix: 7.5331e+04
19-07-01 09:56:51.746 - INFO: <epoch:222, iter: 449,500, lr:(1.110e-07,)>l_pix: 6.1268e+04
19-07-01 09:57:42.739 - INFO: <epoch:222, iter: 449,600, lr:(1.070e-07,)>l_pix: 5.8020e+04
19-07-01 09:58:34.047 - INFO: <epoch:222, iter: 449,700, lr:(1.039e-07,)>l_pix: 7.0684e+04
19-07-01 09:59:25.079 - INFO: <epoch:222, iter: 449,800, lr:(1.018e-07,)>l_pix: 6.1002e+04
19-07-01 10:00:16.479 - INFO: <epoch:222, iter: 449,900, lr:(1.004e-07,)>l_pix: 6.2216e+04
19-07-01 10:01:07.545 - INFO: <epoch:222, iter: 450,000, lr:(4.000e-04,)>l_pix: 6.2134e+04
19-07-01 10:01:07.546 - INFO: Saving models and training states.
19-07-01 10:01:58.558 - INFO: <epoch:222, iter: 450,100, lr:(4.000e-04,)>l_pix: 6.8700e+04
19-07-01 10:02:49.626 - INFO: <epoch:222, iter: 450,200, lr:(4.000e-04,)>l_pix: 5.7384e+04
19-07-01 10:03:42.350 - INFO: <epoch:223, iter: 450,300, lr:(4.000e-04,)>l_pix: 5.6992e+04
19-07-01 10:04:33.524 - INFO: <epoch:223, iter: 450,400, lr:(4.000e-04,)>l_pix: 5.5664e+04
19-07-01 10:05:24.229 - INFO: <epoch:223, iter: 450,500, lr:(4.000e-04,)>l_pix: 7.6057e+04
19-07-01 10:06:15.174 - INFO: <epoch:223, iter: 450,600, lr:(4.000e-04,)>l_pix: 7.2010e+04
19-07-01 10:07:06.118 - INFO: <epoch:223, iter: 450,700, lr:(4.000e-04,)>l_pix: 6.0641e+04
19-07-01 10:07:56.997 - INFO: <epoch:223, iter: 450,800, lr:(4.000e-04,)>l_pix: 5.9802e+04
19-07-01 10:08:47.783 - INFO: <epoch:223, iter: 450,900, lr:(4.000e-04,)>l_pix: 6.6285e+04
19-07-01 10:09:38.584 - INFO: <epoch:223, iter: 451,000, lr:(4.000e-04,)>l_pix: 6.3312e+04
19-07-01 10:10:29.622 - INFO: <epoch:223, iter: 451,100, lr:(3.999e-04,)>l_pix: 6.5559e+04
19-07-01 10:11:20.408 - INFO: <epoch:223, iter: 451,200, lr:(3.999e-04,)>l_pix: 7.3202e+04
19-07-01 10:12:11.144 - INFO: <epoch:223, iter: 451,300, lr:(3.999e-04,)>l_pix: 7.1605e+04
19-07-01 10:13:02.022 - INFO: <epoch:223, iter: 451,400, lr:(3.999e-04,)>l_pix: 6.4014e+04
19-07-01 10:13:53.078 - INFO: <epoch:223, iter: 451,500, lr:(3.999e-04,)>l_pix: 6.2185e+04
19-07-01 10:14:44.101 - INFO: <epoch:223, iter: 451,600, lr:(3.999e-04,)>l_pix: 5.7227e+04
19-07-01 10:15:34.804 - INFO: <epoch:223, iter: 451,700, lr:(3.999e-04,)>l_pix: 7.6518e+04
19-07-01 10:16:25.789 - INFO: <epoch:223, iter: 451,800, lr:(3.999e-04,)>l_pix: 6.0232e+04
19-07-01 10:17:16.691 - INFO: <epoch:223, iter: 451,900, lr:(3.998e-04,)>l_pix: 7.3397e+04

About the yml file

Hi, thanks for your code.
When I want to train, I can't find the .yml file mentioned in the code. Could you upload the YAML file or show what it contains? (A minimal sketch, inferred from the config dump above, follows.)
Thanks
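For illustration, the structure of such an option file can be read off from the config dump in the segmentation-fault issue above (the file referenced there is options/train/train_EDVR_woTSA_M_vimeo90k.yml). A minimal sketch with example values taken from that dump; the dataset paths are placeholders, and the real file may contain more fields:

    name: 001_EDVRwoTSA_scratch_lr4e-4_600k_vimeo90k_LrCAR4S
    model: VideoSR_base
    scale: 4
    gpu_ids: [0, 1, 2, 3]
    datasets:
      train:
        name: Vimeo90K
        mode: Vimeo90K
        dataroot_GT: /path/to/vimeo90k_train_GT.lmdb
        dataroot_LQ: /path/to/vimeo90k_train_LR7frames.lmdb
        N_frames: 5
        batch_size: 32
        GT_size: 256
        LQ_size: 64
    network_G:
      which_model_G: EDVR
      nf: 64
      nframes: 5
      front_RBs: 5
      back_RBs: 10
      w_TSA: false
    train:
      lr_G: 0.0004
      niter: 600000
      pixel_criterion: cb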

Error on start

(base) t:\EDVR\codes>python test_Vid4_REDS4_with_GT2.py
19-06-15 20:53:29.080 - INFO: Data: my - ../datasets/tammy/*
19-06-15 20:53:29.081 - INFO: Padding mode: replicate
19-06-15 20:53:29.082 - INFO: Model path: ../experiments/pretrained_models/EDVR_Vimeo90K_SR_L.pth
19-06-15 20:53:29.084 - INFO: Save images: True
19-06-15 20:53:29.085 - INFO: Flip Test: False
Traceback (most recent call last):
  File "test_Vid4_REDS4_with_GT2.py", line 284, in <module>
    main()
  File "test_Vid4_REDS4_with_GT2.py", line 177, in main
    imgs = read_seq_imgs(sub_folder)
  File "test_Vid4_REDS4_with_GT2.py", line 105, in read_seq_imgs
    imgs = np.stack(img_l, axis=0)
  File "C:\Anaconda\lib\site-packages\numpy\core\shape_base.py", line 349, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack

RuntimeError when training

Hello, I met this problem when training:

  File "train.py", line 22, in init_dist
    mp.set_start_method('spawn')
  File "/home/ai/anaconda3/lib/python3.7/multiprocessing/context.py", line 242, in set_start_method
    raise RuntimeError('context has already been set')
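This is standard Python multiprocessing behavior: set_start_method raises if the start method was already set. A generic workaround (not specific to this repo) is to guard the call or pass force=True:

    import torch.multiprocessing as mp

    try:
        mp.set_start_method('spawn')
    except RuntimeError:
        pass  # start method already set, e.g. init_dist was called twice

    # Alternatively: mp.set_start_method('spawn', force=True)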
