
Repository name:

Liumouliu/OriCNN

Repository URL:

https://github.com/Liumouliu/OriCNN

Language:

Python 96.4%

Introduction:

Lending Orientation to Neural Networks for Cross-view Geo-localization

This repository contains the ACT dataset and the code for training the cross-view geo-localization method described in: Lending Orientation to Neural Networks for Cross-view Geo-localization, CVPR 2019.


Abstract

This paper studies the image-based geo-localization (IBL) problem using ground-to-aerial cross-view matching. The goal is to predict the spatial location of a ground-level query image by matching it to a large geotagged aerial image database (e.g., satellite imagery). This is a challenging task due to the drastic differences in viewpoint and visual appearance. Existing deep learning methods for this problem focus on maximizing the feature similarity between spatially close-by image pairs, while minimizing that of image pairs which are far apart. They do so by deep feature embedding based on the visual appearance of the ground and aerial images. However, in everyday life, humans commonly use orientation information as an important cue for spatial localization. Inspired by this insight, this paper proposes a novel method which endows deep neural networks with a commonsense of orientation. Given a ground-level spherical panoramic image as the query input (and a large geo-referenced satellite image database), we design a Siamese network which explicitly encodes the orientation (i.e., spherical direction) of each pixel of the images. Our method significantly boosts the discriminative power of the learned deep features, leading to much higher recall and precision, outperforming all previous methods. Our network is also more compact, using only one fifth of the parameters of the previously best-performing network. To evaluate the generalization of our method, we also created a large-scale cross-view localization benchmark containing 100K geotagged ground-aerial pairs covering a geographic area of 300 square miles.

ACT dataset

Our ACT dataset targets fine-grained, city-scale cross-view localization. The ground-view images are panoramas, and the overhead images are satellite images. The ACT dataset densely covers the city of Canberra; a sample cross-view pair is depicted below.


Our ACT dataset has two subsets (Contact me for the dataset, Liu.Liu@anu.edu.au):

  1. ACT_small. A small-scale dataset for training and validation. Note that the numbers of training and validation cross-view image pairs are exactly the same as in the CVUSA dataset.

  2. ACT_test. A large-scale dataset for testing. Note that the number of testing cross-view image pairs is 10x larger than in the CVUSA dataset.

To download the dataset, I suggest using wget. For example: wget --continue --progress=dot:mega --tries=0 THE_LINK_I_SEND_YOU

The downloaded zip file actually has a tar.gz suffix, i.e., it is a gzipped tar archive.

If you fail to extract the compressed files on Ubuntu, a convenient workaround is to use WinRAR on a Windows PC.
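Since the download is a gzipped tar despite any .zip-style naming, plain tar usually extracts it fine on Ubuntu. A runnable sketch with a dummy archive standing in for the real download (all file names here are placeholders, not the actual dataset archive):

```shell
# Create a dummy archive to stand in for the downloaded dataset file.
mkdir -p ACT_demo && echo "panorama" > ACT_demo/sample.txt
tar -czf ACT_small.zip ACT_demo

# Despite the .zip suffix, the file is a gzipped tar: rename and extract.
mv ACT_small.zip ACT_small.tar.gz
tar -xzf ACT_small.tar.gz -C .
ls ACT_demo
```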

Note that the dataset is permitted for research use ONLY. Please do not redistribute it.

Codes and Models

Overview

Our model is implemented in TensorFlow 1.4.0; other TensorFlow versions should also work. All our models are trained from scratch, so please run the training code to obtain the models.

For pre-trained model on CVUSA dataset, please download CVUSA_model

For pre-trained model on CVACT dataset, please download CVACT_model

In the above CVUSA_model and CVACT_model, we also include the pre-extracted feature embeddings, in case you want to use them directly.

If you want to know how the training performance improves over epochs, please refer to recalls_epoches_CVUSA and recalls_epoches_CVACT.

If you want to know how the cross-view orientations are defined, please refer to ground_view_orientations and satellite_view_orientations.
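As a rough illustration of per-pixel orientation encoding in the spirit of the paper (a sketch based on the abstract's description, not the authors' exact parameterisation): for an equirectangular panorama, each pixel column corresponds to an azimuth and each row to an altitude, which can be stacked as two extra input channels alongside RGB.

```python
import numpy as np

def panorama_orientation_maps(h, w):
    """Per-pixel azimuth/altitude maps for an equirectangular panorama.

    A hypothetical encoding for illustration; the repository's actual
    orientation maps may use a different range or layout.
    """
    azimuth = np.linspace(-np.pi, np.pi, w)           # left-right direction
    altitude = np.linspace(np.pi / 2, -np.pi / 2, h)  # up-down direction
    az_map = np.tile(azimuth, (h, 1))                 # (h, w)
    alt_map = np.tile(altitude[:, None], (1, w))      # (h, w)
    return np.stack([az_map, alt_map], axis=-1)       # (h, w, 2)

rgb = np.zeros((128, 512, 3), dtype=np.float32)       # dummy panorama
ori = panorama_orientation_maps(128, 512)
net_input = np.concatenate([rgb, ori], axis=-1)       # (128, 512, 5)
```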

Codes for CVUSA dataset

If you want to use CVUSA dataset, first download it, and then modify the img_root variable in input_data_rgb_ori_m1_1_augument.py (line 12)

For example:

img_root = '..../..../CVUSA/'

For training, run:

python train_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_augment.py

For testing, run:

python eval_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_augment.py

Recall@1% is automatically calculated after running the evaluation script and is saved to the PreTrainModel folder.

To calculate the recall@N figures, you need the extracted feature embeddings; run the Matlab script RecallN.m, changing the path (variable desc_path) to your descriptor file.
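For reference, the recall@N metric itself can be sketched in a few lines of Python (this is an illustration of the metric, not the authors' RecallN.m; it assumes matching ground/satellite descriptors share the same row index):

```python
import numpy as np

def recall_at_n(grd_desc, sat_desc, n):
    """Fraction of queries whose matching satellite descriptor is among
    the n nearest neighbours by L2 distance.

    grd_desc, sat_desc: (num_pairs, dim); row i of each is a matching pair.
    """
    dists = np.linalg.norm(grd_desc[:, None, :] - sat_desc[None, :, :], axis=2)
    correct = np.diag(dists)                       # distance to the true match
    ranks = (dists < correct[:, None]).sum(axis=1)  # wrong matches ranked closer
    return float((ranks < n).mean())

# Toy example: satellite descriptors plus small noise as ground descriptors,
# so every query should retrieve its match at rank 1.
rng = np.random.default_rng(0)
sat = rng.normal(size=(100, 8))
grd = sat + 0.01 * rng.normal(size=(100, 8))
print(recall_at_n(grd, sat, 1))
```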

Codes for CVACT dataset

Most of the steps for the ACT dataset are the same as for the CVUSA dataset. The differences are:

  1. input_data_ACT.py is used in train_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_ACT.py to train the CNNs. It uses the ACT_small dataset for fast training.

  2. To test geo-localization performance on the ACT_test dataset, you need to use input_data_ACT_test.py in the evaluation script eval_deep6_scratch_m1_1_concat3conv_rgb_ori_gem_ACT.py. That is, change its first line to

from input_data_ACT_test import InputData

  3. To evaluate geo-localization performance on the ACT_test dataset, run the Matlab script RecallGeo.m. You also need to change the path (variable desc_path) to your descriptor file.

Publication

If you find this work useful, please cite our following publication:

Liu Liu; Hongdong Li. Lending Orientation to Neural Networks for Cross-view Geo-localization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

@InProceedings{Liu_2019_CVPR,
  author    = {Liu, Liu and Li, Hongdong},
  title     = {Lending Orientation to Neural Networks for Cross-view Geo-localization},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}

and also the following prior works:

  1. Sixing Hu, Mengdan Feng, Rang M. H. Nguyen, Gim Hee Lee. CVM-Net: Cross-View Matching Network for Image-Based Ground-to-Aerial Geo-Localization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.

  2. Zhai, Menghua, et al. "Predicting ground-level scene layout from aerial imagery." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.

  3. Isola, Phillip, et al. "Image-to-image translation with conditional adversarial networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.

Contact

If you have any questions, drop me an email (u1013337@anu.edu.au).



