Comments (9)
That did it, I'm getting a large, high-quality video now.
It took 14 min locally vs. 2-3 min before (both CPU), and it also needed more RAM, up to 22-24 GB.
from 3d-photo-inpainting.
@victusfate I tried your way and the results are better! For others to follow, these are the steps I took:

1. Hardcoded frac in main.py to 1 (frac = 1). This makes sure the input and output sizes are the same.

2. Passed the original image width into run_depth in main.py:

image_width = image.shape[1]
run_depth(device, [sample['ref_img_fi']], config['src_folder'], config['depth_folder'],
          config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=image_width)

3. Changed run.py to scale by the passed-in width:

scale = target_w / max(img.shape[0], img.shape[1])

4. In the mesh folder, commented out the following Gaussian blur line:

# img = cv2.GaussianBlur(img, (int(init_factor//2 * 2 + 1), int(init_factor//2 * 2 + 1)), 0)

5. In the mesh folder, modified the output video settings to use libx264 with a higher bitrate:

clip.write_videofile(os.path.join(output_dir, output_name), fps=config['fps'], codec='libx264', bitrate='2000k')

6. Commented out the resize code in main.py:

# image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)

P.S. If I push the longer_side_len argument to a much larger resolution (4500), the code takes forever (I have one GPU). RAM usage shoots up to over 64 GB and then the process exits, presumably from lack of memory. Is there any way to work with high-resolution images on 64 GB of RAM, or do I need much more? If so, how much RAM do I need? This looks like a memory leak with large files!

Another issue: if you output the same-size image in the video, some pixels have no depth info when you zoom in or circle around the object. This does not happen when you resize the input image. However, if you output the same size as the input by following the steps above, the output video has "gray blocks" with no depth info. I don't know why; any ideas?
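The effect of hardcoding frac can be sketched as follows (the function name and signature here are mine for illustration; main.py's actual code differs):

```python
# Sketch of the output-size logic discussed above (illustrative, not the
# repo's exact code). By default the output is scaled so its longer side
# equals config['longer_side_len']; hardcoding frac = 1 keeps the output
# the same size as the input.

def output_dims(h, w, longer_side_len, keep_input_size=False):
    """Return (out_h, out_w) for the rendered video."""
    if keep_input_size:
        frac = 1.0  # the hardcoded change described above
    else:
        frac = longer_side_len / max(h, w)
    return int(h * frac), int(w * frac)

# Default behavior: a 1000x2000 image is downscaled so the longer side is 1000.
print(output_dims(1000, 2000, 1000))        # (500, 1000)
# With frac hardcoded to 1, the dimensions are unchanged.
print(output_dims(1000, 2000, 1000, True))  # (1000, 2000)
```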
from 3d-photo-inpainting.
Same problem here.
I tried changing some arguments, but the final result is blurry and low quality.
I tried it on Colab.
Any ideas?
from 3d-photo-inpainting.
I ended up setting the output video resolution to match the input:

# frac = config['longer_side_len'] / max(config['output_h'], config['output_w'])
# config['output_h'], config['output_w'] = int(config['output_h'] * frac), int(config['output_w'] * frac)
frac = 1
config['original_h'], config['original_w'] = config['output_h'], config['output_w']

I was going to look at the output video bitrate next, but any help on increasing the sharpness of the output video to more closely approximate the input image would be great.

I'm trying libx264 with a higher bitrate (2+ Mbps) on the output:

clip.write_videofile(os.path.join(output_dir, output_name), fps=config['fps'], codec='libx264', bitrate='2000k')

I'm also going to explore skipping the Gaussian blur (it may be required by the algorithm):

# img = cv2.GaussianBlur(img, (int(init_factor//2 * 2 + 1), int(init_factor//2 * 2 + 1)), 0)
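For reference, the kernel-size expression in that GaussianBlur call always produces an odd number, which OpenCV requires for kernel dimensions. A quick sketch (illustrative, not repo code):

```python
# OpenCV's GaussianBlur requires odd kernel dimensions; the expression
# int(init_factor // 2 * 2 + 1) maps init_factor to a nearby odd value.

def blur_kernel_size(init_factor):
    return int(init_factor // 2 * 2 + 1)

print([blur_kernel_size(f) for f in (1, 2, 3, 4, 5, 8)])  # [1, 3, 3, 5, 5, 9]
```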
from 3d-photo-inpainting.
I ended up passing the original image width into run_depth and using it instead of 640.
Testing it now to ensure the output resolution equals the input.
After that I'll circle back to the output video quality.

In main.py:

image_width = image.shape[1]
run_depth(device, [sample['ref_img_fi']], config['src_folder'], config['depth_folder'],
          config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=image_width)

In run.py's run_depth:

scale = target_w / max(img.shape[0], img.shape[1])
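The effect of that scale change can be sketched as a simplified model of the depth-input sizing (illustrative only; the real MiDaS preprocessing may also round dimensions, e.g. to multiples of 32, which would explain the torch.Size([1, 3, 352, 384]) seen in the logs):

```python
# Simplified model of the depth-input sizing in run_depth (not the actual
# MiDaS code). By default target_w=640 downscales the image before depth
# estimation; passing the original width keeps depth at input resolution.

def depth_input_dims(h, w, target_w=640):
    scale = target_w / max(h, w)
    return int(h * scale), int(w * scale)

print(depth_input_dims(960, 1280))                 # (480, 640): default downscale
print(depth_input_dims(960, 1280, target_w=1280))  # (960, 1280): full resolution
```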
from 3d-photo-inpainting.
This still seems to limit my output to a width of 960 px. Did you make any other changes?
from 3d-photo-inpainting.
I've also changed the frac variable, but now main.py won't complete:
running on device 0
0% 0/1 [00:00<?, ?it/s]
Current Source ==> 2mars
Running depth extraction at 1595581689.5904121
initialize
device: cpu
start processing
processing image/2mars.jpg (1/1)
torch.Size([1, 3, 352, 384])
finished
Start Running 3D_Photo ...
Loading edge model at 1595581910.257932
Loading depth model at 1595581919.6054657
Loading rgb model at 1595581925.925136
Writing depth ply (and basically doing everything) at 1595581932.251731
^C
from 3d-photo-inpainting.
Hi, I tried your code, but I get the following error. Have you encountered this problem?

Traceback (most recent call last):
  File "main.py", line 89, in <module>
    vis_photos, vis_depths = sparse_bilateral_filtering(depth.copy(), image.copy(), config, num_iter=config['sparse_iter'], spdb=False)
  File "/home/w/3d-photo-inpainting/bilateral_filtering.py", line 31, in sparse_bilateral_filtering
    vis_image[u_over > 0] = np.array([0, 0, 0])
IndexError: boolean index did not match indexed array along dimension 0; dimension is 2848 but corresponding boolean dimension is 425
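That IndexError looks like a shape mismatch: with the resize in main.py commented out, the image stays at full resolution while the depth map, and the masks derived from it such as u_over, are at the downscaled working size. A minimal numpy reproduction using the dimensions from the traceback (the mask width of 640 is my guess):

```python
import numpy as np

# Full-resolution image vs. a boolean mask derived from a downscaled depth
# map. Heights follow the traceback above; the mask width (640) is a guess.
vis_image = np.zeros((2848, 4272, 3), dtype=np.uint8)
u_over = np.zeros((425, 640), dtype=bool)

try:
    vis_image[u_over > 0] = np.array([0, 0, 0])
except IndexError as e:
    print(e)  # prints the dimension-mismatch message
```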
from 3d-photo-inpainting.
The problem is this step:

#image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)

Ensure that line has not been commented out in main.py:

image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)

This should fix your issue.
from 3d-photo-inpainting.