
depthnet's Introduction

DepthNet training on Still Box

This code can replicate the results of our paper published at UAVg-17. If you use this repo in your work, please cite us with the following BibTeX:

@Article{isprs-annals-IV-2-W3-67-2017,
AUTHOR = {Pinard, C. and Chevalley, L. and Manzanera, A. and Filliat, D.},
TITLE = {END-TO-END DEPTH FROM MOTION WITH STABILIZED MONOCULAR VIDEOS},
JOURNAL = {ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences},
VOLUME = {IV-2/W3},
YEAR = {2017},
PAGES = {67--74},
URL = {https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/IV-2-W3/67/2017/},
DOI = {10.5194/isprs-annals-IV-2-W3-67-2017}
}

depthnet

End-to-end depth from motion with stabilized monocular videos

  • This code shows how the purely translational movement of a camera can be leveraged to compute very precise depth maps, even at distances of more than 300 times the displacement.
  • Thus, for a camera movement of 30 cm (the nominal displacement used here), you can see as far as 100 m.

See our second paper for information about using this code on real videos with speed estimation:

Multi range Real-time depth inference from a monocular stabilized footage using a Fully Convolutional Neural Network

Click below for the video

youtube video

DepthNet

DepthNet is a network designed to infer a depth map directly from a pair of stabilized images.

  • No information is given about movement direction
  • DepthNet is Fully Convolutional, which means it is completely robust to optical center faults
  • This network only works for pinhole-like pictures

Still Box

stillbox

Still Box is a dataset created specifically for supervised training of depth map inference on stabilized aerial footage. It tries to mimic typical drone footage of static scenes, where depth is impossible to infer from a single image since shapes take all kinds of sizes and positions.

  • You can download it here
  • The dataset webpage also provides a tutorial on how to read the data

Training

Requirements

[sudo] pip3 install -r requirements.txt

If you want to log some outputs from the validation set with the --log-output option, you need the OpenCV Python bindings to convert depth to RGB with a rainbow colormap.

If you don't have OpenCV, grayscale images will be logged.
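
For reference, here is a minimal sketch of the kind of depth-to-color conversion this relies on (the helper name is illustrative, not the repository's exact code); with OpenCV it applies a rainbow colormap, otherwise it falls back to plain grayscale:

import numpy as np

try:
    import cv2
except ImportError:
    cv2 = None

def depth_to_image(depth, max_depth=100.0):
    # normalize depth to [0, 255]
    d = (255 * np.clip(depth / max_depth, 0, 1)).astype(np.uint8)
    if cv2 is not None:
        return cv2.applyColorMap(d, cv2.COLORMAP_RAINBOW)  # HxWx3 BGR rainbow image
    return d  # HxW grayscale fallback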

Usage

Best results can be obtained by training on Still Box 64 and then finetuning successively up to the resolution you target. Here are the parameters used for the paper (please note how learning rate and batch size change; training was done on a single GTX 980Ti).

python3 train.py -j8 --lr 0.01 /path/to/still_box/64/ --log-output --activation-function elu --bn
python3 train.py -j8 --lr 0.01 /path/to/still_box/128/ --log-output --activation-function elu --bn --pretrained /path/to/DepthNet64
python3 train.py -j8 --lr 0.001 /path/to/still_box/256/ --log-output --activation-function elu --bn -b64 --pretrained /path/to/DepthNet128
python3 train.py -j8 --lr 0.001 /path/to/still_box/512/ --log-output --activation-function elu --bn -b16 --pretrained /path/to/DepthNet256

Note: You can skip the 128 and 256 trainings if you don't have time; results will be only slightly worse. However, you need to do the 64 training first, as stated in our first paper. This might have something to do with either the size of the 64 dataset (in terms of number of scenes) or the fact that feature maps are reduced down to 1x1, making the last convolution equivalent to a fully connected layer.

Pretrained networks

Best results were obtained with ELU for the depth activation (not mentioned in the original paper), along with BatchNorm.

Name                          Training set   Error (m)   Download
DepthNet_elu_bn_64.pth.tar    64             4.65        Link
DepthNet_elu_bn_128.pth.tar   128            3.08        Link
DepthNet_elu_bn_256.pth.tar   256            2.29        Link
DepthNet_elu_bn_512.pth.tar   512            1.97        Link

All the networks have the same size and same structure.
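
For loading one of these checkpoints, a minimal sketch (assuming the repository's DepthNet model class; the constructor arguments are hypothetical and should be set to match the elu/bn variant you downloaded, and the 'state_dict' key follows the checkpoint format used by train.py):

import torch
from models import DepthNet  # assumed import path within this repository

model = DepthNet()  # hypothetical: configure ELU activation and BatchNorm to match the checkpoint
checkpoint = torch.load('DepthNet_elu_bn_512.pth.tar', map_location='cpu')
model.load_state_dict(checkpoint['state_dict'])
model.eval()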

Custom FOV and focal length

Every image in Still Box has a 90° FOV (field of view); the focal length (in pixels) is then respectively:

  • 32px for 64x64 images
  • 64px for 128x128 images
  • 128px for 256x256 images
  • 256px for 512x512 images

Training is not flexible with respect to focal length; for a custom focal length you will have to run a dedicated training.

If you need to use a custom focal length and FOV you can simply resize the pictures and crop them.

Say you have a picture of width w with an associated FOV fov. To get an equivalent input from one of the datasets, you can first crop the Still Box pictures (of original width stillbox_w, e.g. 512) so that their FOV matches fov (cropping doesn't affect focal length in pixels), and then resize them to w. Note that DepthNet can take rectangular pictures as input.

cropped_w = stillbox_w * tan(pi*fov/360)

We naturally recommend doing this operation offline; the metadata from metadata.json won't need to be altered.
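
As a rough illustration, a minimal offline crop-and-resize sketch with Pillow, assuming a 512x512 Still Box picture and hypothetical target values for w and fov (file paths are placeholders):

from math import pi, tan
from PIL import Image

w, fov = 320, 60.0   # hypothetical target width (px) and FOV (degrees)
stillbox_w = 512     # original Still Box image width (90° FOV)

# width of the centered crop whose FOV matches the target fov
cropped_w = round(stillbox_w * tan(pi * fov / 360))

img = Image.open('scene/0001.jpg')  # placeholder path
offset = (stillbox_w - cropped_w) // 2
img = img.crop((offset, offset, offset + cropped_w, offset + cropped_w))
img = img.resize((w, w), Image.BILINEAR)  # resizing preserves the FOV
img.save('scene_cropped/0001.jpg')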

With a pretrained DepthNet

If you can resize your test pictures, then thanks to its fully convolutional architecture DepthNet is flexible with respect to FOV, as long as it stays below 90° (or the maximum FOV encountered during training). Referring back to our width w and FOV fov, a network trained with a particular focal length f gives the following width to resize to:

resized_w = 2*f*tan(pi*fov/360)

That way, you won't have to run a dedicated training or even download the Still Box dataset.


/!\ These equations are only valid for pinhole-equivalent cameras. Be sure to correct distortion before using DepthNet.
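
For instance, a minimal sketch for resizing a test picture to match the pretrained DepthNet512 (trained with a focal length of 256 px), with a hypothetical input FOV and a placeholder file path:

from math import pi, tan
from PIL import Image

f = 256      # focal length (px) of the DepthNet512 training images
fov = 60.0   # hypothetical FOV (degrees) of your own camera
resized_w = round(2 * f * tan(pi * fov / 360))  # ≈ 296 px in this example

img = Image.open('my_frame.jpg')  # placeholder path
scale = resized_w / img.width
img = img.resize((resized_w, round(img.height * scale)), Image.BILINEAR)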

Testing Inference

The run_inference.py script lets you run inference on a folder of images and save the depth maps in different visualizations.

A simple still box scene of 512x512 pictures for testing can be downloaded here. Otherwise, any folder with a list of jpg images will do, provided you follow the guidelines above.

python3 run_inference.py --output-depth --no-resize --dataset-dir /path/to/stub_box --pretrained /path/to/DepthNet512 --frame-shift 3 --output-dir /path/to/save/outputs

Visualise training

Training can be visualized via TensorBoard by launching this command in another terminal:

tensorboard --logdir=/path/to/DepthNet/Results

You can then access the board from any computer on the local network by visiting machine_ip:6006 in a web browser, just as with a regular TensorBoard server. More info here.


depthnet's Issues

Can't download the still box torrent?

any other means?

Joe

magnet:?xt=urn:btih:4d3a60ad3c9ceac7662735ba8e90fb467b43a3aa&tr=http%3A%2F%2Facademictorrents.com%2Fannounce.php&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce

Pretrained weights

I have tried to download your pretrained weights but the link is broken.

How can I download those files?

Thank you

Error on loading pretrained weights

I am getting an error on the following.

depthnet = DepthNet()
weights = torch.load('DepthNet_elu_bn_512.pth.tar')
depthnet.load_state_dict(weights['state_dict'], strict=False)

RuntimeError: While copying the parameter named conv1.0.weight, whose dimensions in the model are torch.Size([32, 6, 3, 3]) and whose dimensions in the checkpoint are torch.Size([32, 8, 3, 3]).

It seems that DepthNet512 has 8 channels in the first conv layer, which is not reflected in the model class. Is that so?

How to use more than one graphics card?

Based on nvidia-smi, only one graphics card is used when I run

python3 train.py -j8 --lr 0.01 /path/to/still_box/64/ --log-output --activation-function elu --bn

My question is: how can I use another graphics card, say I have two 1080 Ti cards?

Joe

jc@marvel-001:~$ nvidia-smi
Sun Sep 2 16:26:40 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.44 Driver Version: 396.44 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 Off | 00000000:04:00.0 Off | N/A |
| 29% 40C P8 7W / 180W | 2892MiB / 8119MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 1080 Off | 00000000:08:00.0 Off | N/A |
| 27% 30C P8 6W / 180W | 10MiB / 8119MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX 1080 Off | 00000000:09:00.0 Off | N/A |
| 27% 27C P8 6W / 180W | 10MiB / 8119MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce GTX 1080 Off | 00000000:83:00.0 Off | N/A |
| 27% 29C P8 6W / 180W | 10MiB / 8119MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 4 GeForce GTX 1080 Off | 00000000:84:00.0 Off | N/A |
| 27% 31C P8 6W / 180W | 10MiB / 8119MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 5 GeForce GTX 1080 Off | 00000000:88:00.0 Off | N/A |
| 27% 29C P8 6W / 180W | 10MiB / 8119MiB | 0% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 9275 C python3 497MiB |
| 0 11041 C python3 2383MiB |
+-----------------------------------------------------------------------------+
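
For what it's worth, the standard PyTorch way to spread a model over several GPUs is torch.nn.DataParallel; a minimal sketch, assuming you wrap the model object built in train.py (this is generic PyTorch, not a flag exposed by this repository):

import torch
import torch.nn as nn

model = nn.Conv2d(6, 32, 3)  # placeholder module standing in for the DepthNet instance built in train.py
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=[0, 1])  # replicate the model across GPUs 0 and 1
model = model.cuda()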

ELU+1 Activation.

Hi, thank you very much for sharing your work. I was wondering if you could explain a bit why you are using ELU+1, and whether you have some thoughts about why it performs better.
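
For context, a minimal sketch of the ELU+1 activation itself: elu(x) + 1 equals x + 1 for x > 0 and exp(x) for x <= 0, so it is smooth and keeps the predicted depth strictly positive (this only shows what the activation computes, not why it trains better):

import torch
import torch.nn.functional as F

def elu_plus_one(x):
    # x + 1 for x > 0, exp(x) for x <= 0: smooth, strictly positive output
    return F.elu(x) + 1

print(elu_plus_one(torch.linspace(-5, 5, 5)))  # all values are > 0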

[bug] add_image

Pls help

train.py:222: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
losses.update(loss.data[0], target.size(0))
train.py:223: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
train_writer.add_scalar('train_loss', loss.data[0], n_iter)
train.py:225: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
depth2_metric_errors.update(depth2_metric_error.data[0], target.size(0))

Avg Loss : 8.032, Avg Depth error : 16.633, normalized : 1.154
100% (2813 of 2813) |########################################| Elapsed Time: 5:35:49 ETA: 00:00:00
depth2_normalized_errors.update(depth2_norm_error.data[0], target.size(0))
train.py:239: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
N/A% (0 of 219) | | Elapsed Time: 0:00:00 ETA: --:--:--
writer.writerow([loss.data[0], depth2_metric_error.data[0]])
train.py:269: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
input_var = torch.autograd.Variable(torch.cat(input_tensors, 1), volatile=True)
train.py:270: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead.
target_var = torch.autograd.Variable(target, volatile=True)
Traceback (most recent call last):
File "/home/jc/.local/lib/python3.6/site-packages/PIL/Image.py", line 2460, in fromarray
mode, rawmode = _fromarray_typemap[typekey]
KeyError: ((1, 1, 64), '|u1')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "train.py", line 301, in
main()
File "train.py", line 158, in main
depth_error, normalized = validate(val_loader, model, epoch, term_logger, output_writers)
File "train.py", line 276, in validate
output_writers[i].add_image('GroundTruth', util.tensor2array(target[0].cpu(), max_value=100), 0)
File "/usr/local/lib/python3.6/dist-packages/tensorboardX/writer.py", line 412, in add_image
File "/usr/local/lib/python3.6/dist-packages/tensorboardX/summary.py", line 205, in image
image = make_image(tensor, rescale=rescale)
image = Image.fromarray(tensor)
File "/home/jc/.local/lib/python3.6/site-packages/PIL/Image.py", line 2463, in fromarray
TypeError: Cannot handle this data type

Still box dataset

Great job! I tried to download the Still Box dataset through the torrent but it is really slow. Do you have a faster way for me to download it, or maybe just a part of it (like only the 512 set)? Thank you so much!
