raywzy / bringing-old-films-back-to-life
Bringing Old Films Back to Life (CVPR 2022)
Home Page: http://raywzy.com/Old_Film/
As a result, I get two videos: the original and the converted one. I need only the converted video as output.
Hey,
does anybody know what temporal_length and temporal_stride mean?
Is there a temp dir for the pictures that are already finished, or do they stay in RAM?
Cheers, George
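From the option names, temporal_length is most likely the number of frames the network sees per clip and temporal_stride is how far the window advances between clips. A minimal sketch of how such a sliding window is usually built (the function name and logic are assumptions, not the repo's actual code):

```python
def sliding_windows(num_frames, temporal_length, temporal_stride):
    """Yield [start, end) frame-index ranges for each processed clip."""
    windows = []
    start = 0
    while start < num_frames:
        end = min(start + temporal_length, num_frames)
        windows.append((start, end))
        if end == num_frames:
            break
        start += temporal_stride
    return windows

# With 10 frames, temporal_length=4, temporal_stride=2 the clips overlap
# by 2 frames: [(0, 4), (2, 6), (4, 8), (6, 10)]
```

With stride < length, consecutive clips overlap, which helps keep the restoration temporally consistent across clip boundaries.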
Hello, Dr. Wan. When I run the code from your previous paper, "Bringing Old Photos Back to Life", I get this error:
Traceback (most recent call last):
File "train_domain_B.py", line 48, in
model = create_model(opt)
File "D:\pythonProject\6_18\BringOldBack\Global\models\models.py", line 17, in create_model
model.initialize(opt)
File "D:\pythonProject\6_18\BringOldBack\Global\models\pix2pixHD_model.py", line 40, in initialize
opt.n_blocks_local, opt.norm, gpu_ids=self.gpu_ids, opt=opt)
File "D:\pythonProject\6_18\BringOldBack\Global\models\networks.py", line 60, in define_G
netG = GlobalGenerator_v2(input_nc, output_nc, ngf, k_size, n_downsample_global, n_blocks_global, norm_layer, opt=opt)
NameError: name 'GlobalGenerator_v2' is not defined
How did you solve it? Could you please share? Thank you.
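A NameError like this usually means networks.py references a class that is not defined or imported in that file, i.e. the checked-out code does not match the options passed. A generic lookup guard that turns the bare NameError into a clearer message (illustrative only, not the repo's code; the namespace argument would be the module's globals()):

```python
def get_generator_class(name, namespace):
    """Look up a generator class by name; fail clearly if it is missing."""
    cls = namespace.get(name)
    if cls is None:
        raise ImportError(
            f"{name} is not defined in networks.py; make sure the file "
            "version matches the options you passed (e.g. GlobalGenerator_v2)."
        )
    return cls
```

In practice the fix is to pull a version of the repo where GlobalGenerator_v2 actually exists in Global/models/networks.py.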
The validation set in the config I'm looking at doesn't seem to use REDS but real film footage; is that correct?
is there any video tutorial? I think we need a tutorial.
I have found the checkpoint files in the directory, but I didn't notice the code or parameter that loads the checkpoints. Did I just miss it? Or maybe I'm asking a stupid question :)
Hi, this is such amazing work!
I am trying to train the model on other training data, and I am wondering: is there anywhere to get the test/validation sets for the dataset? I mean the ones with degradations added, used for the quantitative results in Table 1.
Thanks
This project is extremely impressive. We need the colorization code.
Hello!
After training, I got several model files in my OUTPUT folder, such as:
net_D_xxx.pth & net_G_xxx.pth & optimizer_xxx.pth
For the next step, how do I use these models to colorize/denoise my black & white footage?
(I've converted my footage to be png sequence frame images and put them into a folder already)
Any tips will be helpful!! Thanks!!
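Inference is normally driven by the repo's test script (VP_code/test.py), pointing --input_video_url at the PNG folder and selecting the model with --name/--model_name. The general pattern, sketched without the repo's actual model code (collect_frames is an assumed helper and restore_clip is a stand-in for the network forward pass on a clip of frames):

```python
import os

def collect_frames(folder):
    """Return PNG frame paths in playback order (zero-padded names sort correctly)."""
    names = sorted(n for n in os.listdir(folder) if n.lower().endswith(".png"))
    return [os.path.join(folder, n) for n in names]

def restore_folder(folder, restore_clip, temporal_length=8):
    """Run the (stand-in) network over the frame list, clip by clip."""
    frames = collect_frames(folder)
    outputs = []
    for start in range(0, len(frames), temporal_length):
        clip = frames[start:start + temporal_length]
        outputs.extend(restore_clip(clip))  # net_G weights would be loaded here
    return outputs
```

The net_G checkpoint is the generator used at test time; net_D and the optimizer states are only needed to resume training.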
Great work. Can you provide the train/test code for the colorization task?
Is there any Colab notebook available with an implementation of this GitHub repo, like the one for the photos?
Great work! Can you provide the test code and pretrained model? @raywzy
Upon launching :
set CUDA_VISIBLE_DEVICES=0 & python VP_code/main_gan.py --name RNN_Swin_4 --model_name RNN_Swin_4 --epoch 20 --nodes 1 --gpus 1 --discriminator_name discriminator_v2 --which_gan hinge
it tries to establish an unsolicited connection:
[W ..\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to [link.redgiant.com]:28172 (system error: 10049 - The requested address is not valid in its context.).
P.S. Since Windows 10 is incompatible with "nccl", I changed line 41 in main_gan.py from
backend = 'nccl' to backend = 'gloo'
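NCCL has no Windows support, so gloo is the right fallback there. Rather than editing the source, the backend choice could be made platform-dependent; a sketch of that refactor (an assumption, since main_gan.py hard-codes the backend):

```python
import platform

def pick_dist_backend():
    """NCCL is Linux-only; fall back to gloo elsewhere (e.g. Windows)."""
    return "nccl" if platform.system() == "Linux" else "gloo"

# Usage with torch.distributed:
#   dist.init_process_group(backend=pick_dist_backend(), ...)
```

Note that gloo is considerably slower for GPU collectives, so multi-GPU training on Windows will not match Linux performance.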
I have a nice RX7900XTX and would prefer to not have to go out and buy an nvidia as well. Not enough slots with these new cards.
I'm very interested in restoring some old family film that has been transferred to horrible interlaced 480i. Thanks.
Thanks for open-sourcing this. Does the model need to be trained with pixel loss first, and then should the pretrained model be loaded and GAN loss added to continue training?
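That two-stage schedule is what the options suggest (main_gan.py takes --which_gan hinge). The combined objective would look roughly like this; the weighting and function names are assumptions, not the repo's code:

```python
def total_loss(pixel_loss, gan_loss, use_gan, gan_weight=0.01):
    """Stage 1: pixel loss only (use_gan=False).
    Stage 2: resume from the stage-1 checkpoint and add a weighted
    adversarial (e.g. hinge GAN) term."""
    return pixel_loss + (gan_weight * gan_loss if use_gan else 0.0)
```

Pretraining with a pure pixel loss stabilizes the generator before the adversarial term is switched on; this is standard practice for restoration GANs.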
Hey, has anyone else had the same problem?
The software runs for a couple of folders, then stops with that error on one folder.
Cheers, George
The algorithm works very well, but how do I get the same output size as the input image? All output images are 480x368.
Thanks a lot and congratulations for this great work
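A fixed 480x368 output suggests frames are resized or padded to dimensions the network requires (Swin-style attention needs sides divisible by the window size). A common pattern is to pad the input up to the nearest valid size, run the model, then crop back; a sketch, where the divisibility factor of 8 is an assumption:

```python
def pad_to_multiple(h, w, factor=8):
    """Return (h, w) rounded up to the nearest multiple of `factor`."""
    pad_h = (factor - h % factor) % factor
    pad_w = (factor - w % factor) % factor
    return h + pad_h, w + pad_w

# Restore at the padded size, then crop the output back to the
# original (h, w), e.g. output = output[..., :h, :w]
```

If the script instead hard-resizes to 480x368, resizing the result back to the source resolution with any image library is the simplest workaround, at some cost in sharpness.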
How do I generate paired data via the Video Degradation Model? How do I use the relabeled scratch templates?
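The paper's degradation model composites scratch templates onto clean frames and adds noise-like artifacts to synthesize paired training data. A toy, framework-free sketch of that idea (all names and values are illustrative, not the repo's pipeline; frames are 2D lists of grayscale values):

```python
import random

def degrade_frame(frame, scratch_mask, noise_sigma=10, seed=0):
    """Toy degradation: blend a scratch template and add Gaussian noise.
    frame and scratch_mask are 2D lists; mask values in [0, 1]."""
    rng = random.Random(seed)
    out = []
    for row_f, row_m in zip(frame, scratch_mask):
        row = []
        for p, m in zip(row_f, row_m):
            v = p * (1 - m) + 255 * m       # scratch pixels go bright
            v += rng.gauss(0, noise_sigma)  # film-grain-like noise
            row.append(min(255, max(0, round(v))))
        out.append(row)
    return out
```

The real pipeline also applies blur, compression-like artifacts, and temporally varying scratch positions so the network learns to exploit neighboring frames.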
In your enclosed demo video I can see noticeable spatio-temporal inconsistency (flickering) in both the original and processed videos (restored and colorized); just look at the upper-right part (the sky). As I understand it, the restoration process should remove such defects.
Do you plan to resolve this issue?
Currently, processed videos are converted to black & white regardless of the color of the source. Is it possible to retain/enhance the original colors in the output, similar to the "Quality Restore" mode of your Bringing-Old-Photos-Back-to-Life?
It's been quite a while since the author created this repo. Is there an expected date for the code release?
Excellent work! When I use your pretrained model on the test movie you provided, input_url and gt_url both point at the same folder, but later I get an array out-of-bounds error. Can you give me some suggestions? Thank you!
My command: CUDA_VISIBLE_DEVICES=0 python VP_code/test.py --name RNN_Swin_4 --model_name RNN_Swin_4 --input_video_url ./test_data/data_1/003 --gt_video_url /test_data/data_1/003
Hi, congratulations for this work, I could not execute the scripts because I get this error:
FileNotFoundError: [Errno 2] No such file or directory: './configs/RNN_Swin_4.yaml'
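That relative path means the script expects to be launched from the repository root, where ./configs lives. A small check that turns the bare FileNotFoundError into an actionable message (a sketch, not the repo's code):

```python
import os

def load_config_path(name):
    """Resolve configs/<name>.yaml; fail helpfully if not run from the repo root."""
    path = os.path.join("configs", f"{name}.yaml")
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"{path} not found; run the script from the repository root. "
            f"Current directory: {os.getcwd()}"
        )
    return path
```

In short: `cd` into the cloned repo before running `python VP_code/main_gan.py ...`, so that `./configs/RNN_Swin_4.yaml` resolves.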
I am trying to train the model for the colorization task.
How did you train the model for the colorization task?
Did you use the pretrained model of Deep-Exemplar Video Colorization to prepare the colorization dataset?
Can you provide the pre-trained model of colorization?
Thank you
Hi, could you explain how you compute the backward and forward hidden states during training?
In the paper, you state that s(t-1) has aggregated all available information from frame 0 to frame t-1, but if I understood correctly, this means that for each example you need to precompute the hidden states over all the frames of the video.
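In bidirectional recurrent video models the hidden states are generally not precomputed per example: both directions are computed in a single sweep over the sampled clip inside the forward pass, so s(t-1) aggregates frames 0..t-1 of that clip, not the whole video. A framework-free sketch of the two sweeps (update_fn stands in for the recurrent cell; names are assumptions):

```python
def bidirectional_states(frames, update_fn, init_state):
    """Forward states s_f[t] aggregate frames 0..t;
    backward states s_b[t] aggregate frames t..T-1."""
    forward, state = [], init_state
    for f in frames:                 # left-to-right sweep
        state = update_fn(state, f)
        forward.append(state)
    backward, state = [], init_state
    for f in reversed(frames):       # right-to-left sweep
        state = update_fn(state, f)
        backward.append(state)
    backward.reverse()               # align index t with frame t
    return forward, backward
```

Each training clip is processed end to end, so the states cost one pass in each direction per clip rather than any offline precomputation.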