
Comments (9)

victusfate avatar victusfate commented on May 16, 2024 3

That did it: I'm getting a large, high-quality video now.
It took 14 min locally vs. 2-3 min before (both CPU), and also needed more RAM, up to 22-24 GB.

from 3d-photo-inpainting.

peymanrah avatar peymanrah commented on May 16, 2024 3

@victusfate I tried your way and the results are better! For others to follow, these are the steps I took:

  1. I hardcoded frac in main.py to 1 (frac = 1). This makes sure the input and output sizes are the same.

  2. In main.py, pass the image width to run_depth:

     image_width = image.shape[1]
     run_depth(device, [sample['ref_img_fi']], config['src_folder'], config['depth_folder'],
               config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=image_width)

  3. In run.py, the scale is changed to:
     scale = target_w / max(img.shape[0], img.shape[1])

  4. In the mesh folder I commented out the following Gaussian blur:
     # img = cv2.GaussianBlur(img, (int(init_factor//2 * 2 + 1), int(init_factor//2 * 2 + 1)), 0)

  5. In the mesh folder I also modified the output video settings:
     clip.write_videofile(os.path.join(output_dir, output_name), fps=config['fps'], codec='libx264', bitrate='2000k')

  6. I commented out the resize in main.py:
     # image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)
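
Taken together, steps 1 and 6 amount to one invariant: with frac hardcoded to 1, the configured output size already equals the input size, so the cv2.resize in main.py becomes a no-op and can safely be dropped. A minimal sketch of that invariant (the function and the bare config dict are illustrative stand-ins; the repo reads these values from argument.yml):

```python
def preserve_resolution(image_h, image_w, config):
    # Step 1: frac hardcoded to 1, so the output size equals the input size.
    frac = 1
    config['output_h'] = int(image_h * frac)
    config['output_w'] = int(image_w * frac)
    # Step 6: when output size == input size, main.py's resize would be a
    # no-op, which is why commenting it out is safe.
    needs_resize = (config['output_h'], config['output_w']) != (image_h, image_w)
    return needs_resize

config = {}
print(preserve_resolution(1080, 1920, config))  # False: no resize needed
```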

P.S. If I push the longer_side_len argument to a much larger resolution (4500), the code takes forever (I have one GPU). RAM usage shoots up to over 64 GB and then the process exits, presumably out of memory. Is there any way to work with high-resolution images on 64 GB of RAM, or do I need more? If so, how much? This looks like a memory leak for large files.

Another issue: if you output the same size image in the video, some pixels have no depth info when you zoom in or circle around the object. This does not happen when you resize the input image. However, if you output the same size as the input by following the steps above, the output video has "gray blocks" with no depth info. I don't know why; any ideas?


cless-zor avatar cless-zor commented on May 16, 2024

Same problem here.
I tried changing some arguments, but the final result is blurry and low quality.
I tried on Colab.
Any ideas?


victusfate avatar victusfate commented on May 16, 2024

I ended up setting the output video resolution to match the input:

    # frac = config['longer_side_len'] / max(config['output_h'], config['output_w'])
    # config['output_h'], config['output_w'] = int(config['output_h'] * frac), int(config['output_w'] * frac)
    frac = 1
    config['original_h'], config['original_w'] = config['output_h'], config['output_w']
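
For concreteness, here is what the commented-out frac logic did versus hardcoding frac = 1 (the numbers are illustrative; the real values come from argument.yml):

```python
# Original code: shrink so the longer side equals longer_side_len,
# which is why the output was capped (e.g. at 960 px wide).
longer_side_len = 960
output_h, output_w = 1080, 1920

frac = longer_side_len / max(output_h, output_w)         # 0.5
scaled = (int(output_h * frac), int(output_w * frac))    # (540, 960)

# With frac hardcoded to 1, the output keeps the input size.
frac = 1
full = (int(output_h * frac), int(output_w * frac))      # (1080, 1920)
```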

I was going to look at the output video bitrate next, but any help on increasing the sharpness of the output video to more closely approximate the input image would be great.

Trying libx264 with a higher bitrate (2+ Mbps) on the output:

        clip.write_videofile(os.path.join(output_dir, output_name), fps=config['fps'], codec='libx264',bitrate='2000k')

I'm also going to explore skipping the Gaussian blur (it may be required by the algorithm):

    # img = cv2.GaussianBlur(img,(int(init_factor//2 * 2 + 1), int(init_factor//2 * 2 + 1)), 0)
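
The odd-looking expression int(init_factor//2 * 2 + 1) just forces an odd kernel size, which cv2.GaussianBlur requires for its kernel dimensions. A quick check of what it produces (the init_factor values below are only illustrative):

```python
def blur_kernel_size(init_factor):
    # Floor-divide by 2 and double: the nearest even number at or below
    # init_factor; adding 1 makes it odd, as cv2.GaussianBlur requires.
    return int(init_factor // 2 * 2 + 1)

for f in (1, 2, 3, 4, 5):
    print(f, "->", blur_kernel_size(f))   # 1->1, 2->3, 3->3, 4->5, 5->5
```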


victusfate avatar victusfate commented on May 16, 2024

I ended up passing the original image width to run_depth and using it instead of the hardcoded 640.
Testing it now to ensure the output resolution equals the input.

After that I'll circle back to the output video quality.

in main.py

    image_width = image.shape[1]
    run_depth(device,[sample['ref_img_fi']], config['src_folder'], config['depth_folder'],
              config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=image_width)

in run.py run_depth

        scale = target_w / max(img.shape[0], img.shape[1])
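
To see what the changed scale does to the depth-network input, here is a hedged sketch. It assumes, as in MiDaS, that the network's input dimensions are rounded to multiples of 32; the exact rounding in the repo's resize_image may differ, and the function name below is hypothetical:

```python
def depth_input_dims(img_h, img_w, target_w, multiple=32):
    # The changed run.py scale: size relative to the image's own width
    # (passed as target_w) instead of the hardcoded 640.
    scale = target_w / max(img_h, img_w)
    # Assumption: round each dimension to the nearest multiple of 32,
    # as MiDaS-style depth networks expect.
    h = int(round(img_h * scale / multiple)) * multiple
    w = int(round(img_w * scale / multiple)) * multiple
    return h, w

# With the old default target_w=640, a 1920x1080 image is downscaled;
# passing target_w=image_width keeps (nearly) full resolution.
print(depth_input_dims(1080, 1920, 640))    # (352, 640)
print(depth_input_dims(1080, 1920, 1920))   # (1088, 1920)
```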


turkeyphant avatar turkeyphant commented on May 16, 2024

I ended up passing the original image width to run_depth and using it instead of the hardcoded 640.
Testing it now to ensure the output resolution equals the input.

After that I'll circle back to the output video quality.

in main.py

    image_width = image.shape[1]
    run_depth(device,[sample['ref_img_fi']], config['src_folder'], config['depth_folder'],
              config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=image_width)

in run.py run_depth

        scale = target_w / max(img.shape[0], img.shape[1])

This still seems to limit my output to a width of 960 px. Did you make any other changes?


turkeyphant avatar turkeyphant commented on May 16, 2024

I've also changed the frac variable, but now main.py won't complete:

    running on device 0
      0% 0/1 [00:00<?, ?it/s]Current Source ==>  2mars
    Running depth extraction at 1595581689.5904121
    initialize
    device: cpu
    start processing
      processing image/2mars.jpg (1/1)
    torch.Size([1, 3, 352, 384])
    finished
    Start Running 3D_Photo ...
    Loading edge model at 1595581910.257932
    Loading depth model at 1595581919.6054657
    Loading rgb model at 1595581925.925136
    Writing depth ply (and basically doing everything) at 1595581932.251731
    ^C


wux12 avatar wux12 commented on May 16, 2024

@victusfate I tried your way and the results are better! For others to follow, these are the steps I took:

  1. I hardcoded frac in main.py to 1 (frac = 1). This makes sure the input and output sizes are the same.

  2. In main.py, pass the image width to run_depth:

     image_width = image.shape[1]
     run_depth(device, [sample['ref_img_fi']], config['src_folder'], config['depth_folder'],
               config['MiDaS_model_ckpt'], MonoDepthNet, MiDaS_utils, target_w=image_width)

  3. In run.py, the scale is changed to:
     scale = target_w / max(img.shape[0], img.shape[1])

  4. In the mesh folder I commented out the following Gaussian blur:
     # img = cv2.GaussianBlur(img, (int(init_factor//2 * 2 + 1), int(init_factor//2 * 2 + 1)), 0)

  5. In the mesh folder I also modified the output video settings:
     clip.write_videofile(os.path.join(output_dir, output_name), fps=config['fps'], codec='libx264', bitrate='2000k')

  6. I commented out the resize in main.py:
     # image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)

P.S. If I push the longer_side_len argument to a much larger resolution (4500), the code takes forever (I have one GPU). RAM usage shoots up to over 64 GB and then the process exits, presumably out of memory. Is there any way to work with high-resolution images on 64 GB of RAM, or do I need more? If so, how much? This looks like a memory leak for large files.

Another issue: if you output the same size image in the video, some pixels have no depth info when you zoom in or circle around the object. This does not happen when you resize the input image. However, if you output the same size as the input by following the steps above, the output video has "gray blocks" with no depth info. I don't know why; any ideas?

Hi, I tried your code, but I get the following error. Have you encountered this problem?
    Traceback (most recent call last):
      File "main.py", line 89, in <module>
        vis_photos, vis_depths = sparse_bilateral_filtering(depth.copy(), image.copy(), config, num_iter=config['sparse_iter'], spdb=False)
      File "/home/w/3d-photo-inpainting/bilateral_filtering.py", line 31, in sparse_bilateral_filtering
        vis_image[u_over > 0] = np.array([0, 0, 0])
    IndexError: boolean index did not match indexed array along dimension 0; dimension is 2848 but corresponding boolean dimension is 425


psx2 avatar psx2 commented on May 16, 2024

6. #image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)

Ensure the following code has not been commented out in main.py.

image = cv2.resize(image, (config['output_w'], config['output_h']), interpolation=cv2.INTER_AREA)

This should fix your issue.
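
The IndexError above happens because sparse_bilateral_filtering masks the image with a boolean array built from the depth map, so the two must share height and width (the traceback shows 2848 vs 425). A minimal reproduction with small illustrative shapes:

```python
import numpy as np

# Depth computed at one size, image left at another: the boolean mask
# derived from the depth no longer lines up with the image.
image = np.zeros((8, 8, 3))
depth = np.zeros((4, 6))

mask = depth > 0                  # shape (4, 6), mismatched with image
try:
    image[mask] = np.array([0, 0, 0])
except IndexError:
    print("boolean index does not match the image shape")

# Fix: keep the cv2.resize in main.py so the image is brought to the
# same size the depth map was computed at; then the mask lines up.
image_matched = np.zeros((4, 6, 3))
image_matched[mask] = np.array([0, 0, 0])   # works
```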

