
Lending Orientation to Neural Networks for Cross-view Geo-localization

This repository contains the ACT dataset and code for training the cross-view geo-localization method described in: Lending Orientation to Neural Networks for Cross-view Geo-localization, CVPR 2019.


Abstract

This paper studies the image-based geo-localization (IBL) problem using ground-to-aerial cross-view matching. The goal is to predict the spatial location of a ground-level query image by matching it against a large geotagged aerial image database (e.g., satellite imagery). This is a challenging task due to the drastic differences in viewpoint and visual appearance. Existing deep learning methods for this problem have focused on maximizing feature similarity between spatially close-by image pairs while minimizing it for image pairs that are far apart, using deep feature embeddings based on visual appearance in the ground and aerial images. However, in everyday life, humans commonly use orientation information as an important cue for spatial localization. Inspired by this insight, this paper proposes a novel method which endows deep neural networks with a commonsense of orientation. Given a ground-level spherical panoramic image as query input (and a large geo-referenced satellite image database), we design a Siamese network which explicitly encodes the orientation (i.e., spherical direction) of each pixel of the images. Our method significantly boosts the discriminative power of the learned deep features, leading to much higher recall and precision, outperforming all previous methods. Our network is also more compact, using only one-fifth the number of parameters of the previously best-performing network. To evaluate the generalization of our method, we also created a large-scale cross-view localization benchmark containing 100K geotagged ground-aerial pairs covering a geographic area of 300 square miles.

ACT dataset

Our ACT dataset is targeted at fine-grained, city-scale cross-view localization. The ground-view images are panoramas, and the overhead images are satellite images. The ACT dataset densely covers the city of Canberra; a sample cross-view pair is shown below.

[Figure: a sample cross-view pair from the ACT dataset, ground-view panorama and corresponding satellite image]

Our ACT dataset has two subsets (contact me for the dataset: [email protected]):

  1. ACT_small. A small-scale dataset for training and validation. Note that the number of training and validation cross-view image pairs is exactly the same as in the CVUSA dataset.

  2. ACT_test. A large-scale dataset for testing. Note that the number of testing cross-view image pairs is ten times larger than in the CVUSA dataset.

To download the dataset, I suggest using wget. For example: `wget --continue --progress=dot:mega --tries=0 THE_LINK_I_SEND_YOU`

Note that the downloaded archive is a gzip-compressed tar file (suffix tar.gz), even though it may be described as a zip file.

If you fail to extract the compressed files on Ubuntu, a convenient workaround is to use WinRAR on a Windows PC.
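The download-and-extract steps above can be sketched as a short shell session. `THE_LINK_I_SEND_YOU` is the placeholder for the link provided by the authors, and the local file name `ACT_small.tar.gz` is only illustrative:

```shell
# Resume-capable download; --tries=0 retries indefinitely on failure
wget --continue --progress=dot:mega --tries=0 "THE_LINK_I_SEND_YOU" -O ACT_small.tar.gz

# The archive is a gzipped tar regardless of any .zip suffix;
# verify its integrity before extracting to catch truncated downloads
gzip --test ACT_small.tar.gz

# Extract in place
tar -xzf ACT_small.tar.gz
```

If `gzip --test` reports an error, the download is incomplete or corrupted; re-run the wget command (with `--continue` it resumes where it left off) rather than extracting a damaged archive.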

Note that the dataset is permitted to be used ONLY for research purposes. Do not redistribute it.

Codes and Models

Overview

Our model is implemented in TensorFlow 1.4.0; other TensorFlow versions should also work. All our models are trained from scratch, so please run the training code to obtain the models.

For the pre-trained model on the CVUSA dataset, please download CVUSA_model

For the pre-trained model on the CVACT dataset, please download CVACT_model

The CVUSA_model and CVACT_model archives above also include the pre-extracted feature embeddings, in case you want to use them directly.

If you want to see how training performance improves with epochs, please refer to recalls_epoches_CVUSA and recalls_epoches_CVACT.

If you want to know how the cross-view orientations are defined, please refer to ground_view_orientations and satellite_view_orientations.
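As a rough illustration (not the authors' exact code; consult the linked orientation definitions for the precise convention), the per-pixel orientation channels described in the paper can be generated with NumPy. For the ground panorama, yaw varies along the image width and pitch along the height; for the satellite image, one channel holds the azimuth of each pixel relative to the image center and another its radial distance. The shapes and value ranges below are assumptions:

```python
import numpy as np

def ground_orientation_maps(h, w):
    """Two channels for a spherical panorama: yaw in [-pi, pi) along
    the width, pitch in [pi/2, -pi/2] down the height (assumed ranges)."""
    yaw = np.linspace(-np.pi, np.pi, w, endpoint=False)
    pitch = np.linspace(np.pi / 2, -np.pi / 2, h)
    yaw_map = np.tile(yaw, (h, 1))                 # same yaw in every row
    pitch_map = np.tile(pitch[:, None], (1, w))    # same pitch in every column
    return np.stack([yaw_map, pitch_map], axis=-1)  # shape (h, w, 2)

def satellite_orientation_maps(size):
    """Two channels for a square satellite image: azimuth of each pixel
    relative to the image center (0 = up/north, assumed), and the
    normalized radial distance from the center."""
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    azimuth = np.arctan2(xs - c, c - ys)           # clockwise from up
    radius = np.hypot(xs - c, ys - c) / c          # 0 at center
    return np.stack([azimuth, radius], axis=-1)    # shape (size, size, 2)
```

These maps are simply concatenated with the RGB channels before being fed to the Siamese network, which is what lets the network exploit orientation as an extra cue.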

Codes for CVUSA dataset

If you want to use the CVUSA dataset, first download it, then modify the img_root variable in input_data_rgb_ori_m1_1_augument.py (line 12).

For example:

img_root = '..../..../CVUSA/'

For training, run:

python train_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_augment.py

For testing, run:

python eval_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_augment.py

Recall@1% is automatically calculated by the evaluation script and is saved to the PreTrainModel folder.

To calculate the recall@N figures, use the extracted feature embeddings and run the MATLAB script RecallN.m. You also need to change the desc_path variable to point to your descriptor file.
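If you prefer Python over MATLAB, recall@N over the pre-extracted embeddings can be computed along these lines. This is a sketch, not the RecallN.m script: it assumes the ground and satellite descriptors are loaded as NumPy arrays with one row per image, with matching pairs sharing the same row index:

```python
import numpy as np

def recall_at_n(grd_desc, sat_desc, n):
    """Fraction of ground queries whose true satellite match (same row
    index) ranks within the top-n candidates by Euclidean distance."""
    # Squared Euclidean distances between all ground/satellite pairs
    d2 = (np.sum(grd_desc ** 2, axis=1, keepdims=True)
          - 2.0 * grd_desc @ sat_desc.T
          + np.sum(sat_desc ** 2, axis=1))
    idx = np.arange(len(grd_desc))
    correct = d2[idx, idx]                        # distance to the true match
    # Rank of the true match = number of strictly closer candidates
    rank = np.sum(d2 < correct[:, None], axis=1)
    return float(np.mean(rank < n))
```

For the full test sets the all-pairs distance matrix may not fit in memory at once; in that case the same computation can be done in chunks of query rows.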

Codes for CVACT dataset

Most of the steps for the ACT dataset are the same as for the CVUSA dataset. The differences are:

  1. input_data_ACT.py is used in train_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_ACT.py to train the CNNs. It uses the ACT_small dataset for fast training.

  2. To test geo-localization performance on the ACT_test dataset, use input_data_ACT_test.py in the evaluation script eval_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_ACT.py. That is, change the first line of the evaluation script to: from input_data_ACT_test import InputData

  3. To compute the geo-localization results on the ACT_test dataset, run the MATLAB script RecallGeo.m. You also need to change the desc_path variable to point to your descriptor file.

Publication

If you find this work useful, please cite our publication:

Liu Liu; Hongdong Li. Lending Orientation to Neural Networks for Cross-view Geo-localization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

@InProceedings{Liu_2019_CVPR,
  author    = {Liu, Liu and Li, Hongdong},
  title     = {Lending Orientation to Neural Networks for Cross-view Geo-localization},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}

and also the following prior works:

  1. Sixing Hu, Mengdan Feng, Rang M. H. Nguyen, Gim Hee Lee. "CVM-Net: Cross-View Matching Network for Image-Based Ground-to-Aerial Geo-Localization." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

  2. Menghua Zhai et al. "Predicting Ground-Level Scene Layout from Aerial Imagery." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

  3. Phillip Isola et al. "Image-to-Image Translation with Conditional Adversarial Networks." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Contact

If you have any questions, drop me an email ([email protected])


oricnn's Issues

Request the CVACT DataSet

Dear Sir or Madam,
Hello, my name is Zhao Hu, a graduate student who has just started learning cross-view image localization. I recently read your paper and found it very helpful. I hope you can provide me with a link to the CVACT dataset.
Looking forward to your reply.

about dataset

Hi, Liu!
I have sent you an e-mail and am also looking forward to your reply. Thanks!

dataset details

Hi, I recently received your dataset; the training set is 70 GB and the test set is 109 GB. I just wanted to confirm that these files are correct, as they seem very large.
Thank you for sharing this dataset

CVUSA ground image orientation

Hi,

I'm trying to replicate your experiment on the CVUSA dataset. Are the ground-view panoramas always oriented to the north in this dataset? That is what I understand from your code, since batch_grd_yawpitch[i,:,:,0] seems to be the same for every picture, but I couldn't find an external source confirming it.

Thank you

Usage in mobile application?

Hello
Interesting technique!
I am keeping an eye out for alternatives to the magnetometer in my navigation app. Do you think this model can outperform a magnetometer in some areas? I understood the azimuth to be connected to the compass heading?

Request for CVACT Dataset

Dear Sir or Madam,
Hello, my name is Wang Wei, a graduate student who has just started learning cross-view image localization. I recently read your paper and found it very helpful. I hope you can provide me with a link to the CVACT dataset.
I look forward to your response and hope for an opportunity to interact with you further in the future.

What is the ground resolution?

Hey, thanks for your great work!

The paper specifies the ground resolution to be 0.12 meters per pixel at zoom level 20, so at an image resolution of 1200x1200 the images would cover an area of 144x144 m². Take, for example, the image pXO99cAVXvovXTUqlcCO1A_satView_polish.jpg:

A screenshot of the same area in Google Maps, with a scale in meters and the CVACT image roughly aligned by hand, shows that this image covers an area of around 68x68 m².

That would mean the ground resolution is about 5.7 cm/pixel.

Maybe I missed something? Perhaps the zoom level is 21, which would put the ground resolution at ~6 cm/pixel, roughly what Google Maps shows?

Thanks for your help :)
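The discrepancy raised in this issue can be checked against the standard Web Mercator ground-resolution formula for 256-pixel tiles, resolution = 156543.03392 · cos(latitude) / 2^zoom meters per pixel. At Canberra's latitude (roughly -35.3°, an assumption for this sketch), zoom 20 gives about 0.12 m/px and zoom 21 about 0.06 m/px, which matches the ~6 cm/pixel measured above if the imagery is actually zoom 21:

```python
import math

def ground_resolution(lat_deg, zoom):
    """Web Mercator meters-per-pixel at a given latitude and zoom level
    (256x256 tiles; 156543.03392 m/px at zoom 0 on the equator)."""
    return 156543.03392 * math.cos(math.radians(lat_deg)) / (2 ** zoom)

# Canberra lies at roughly latitude -35.28
print(round(ground_resolution(-35.28, 20), 3))  # 0.122 m/px
print(round(ground_resolution(-35.28, 21), 3))  # 0.061 m/px
```

So both the paper's stated 0.12 m/px and the hand-measured ~6 cm/px are internally consistent with the formula; they simply correspond to adjacent zoom levels.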

Check MD5 code of CVACT Dataset

I wanted to bring to your attention an issue I encountered while extracting the dataset downloaded from OneDrive. Specifically, when uncompressing the ANU_data_test.tar.gz file (108 GB) with tar -zxvf, I got the following error:

gzip: stdin: invalid compressed data -- format violated
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

I got 88181 streetview photos and 88074 satview_polish photos, and the solutions I found on Google don't work, so I suspect the archive is corrupted.
Upon further investigation, I used the md5sum command in the terminal to verify the files; here are the results:

$ md5sum ANU_data_test.tar.gz
ebde964bd41e0addb1b9a00bba3af86f ANU_data_test.tar.gz

$ md5sum ANU_data_small.tar.gz
1a1a87750549e89edfdb3cc3bfa4f99b ANU_data_small.tar.gz

After downloading the ANU_data_test.tar.gz file four times to rule out a network transmission issue, I hope you can verify the MD5 checksums and include them in your reply. This will ensure that later researchers do not unknowingly use archives corrupted by network or transmission problems.

Thank you for your attention to this issue, and thanks again for your work; I look forward to your reply.
