Create the AtLoc Conda environment: `conda env create -f environment.yml`.
Activate the environment: `conda activate py27pt04`.
Note that our code has been tested with PyTorch v0.4.1; the environment.yml file should take care of installing the appropriate version.
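In full:

```bash
# Create the AtLoc environment (PyTorch v0.4.1 is pinned via environment.yml)
conda env create -f environment.yml
# Activate it
conda activate py27pt04
```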
Data
We currently support the 7Scenes and Oxford RobotCar datasets. You can also write your own PyTorch dataloader for other datasets and place it in the `data` directory.
Special instructions for RobotCar:
Download this fork of the dataset SDK, and after editing the `ROBOTCAR_SDK_ROOT` variable in `robotcar_symlinks.sh` appropriately, run `cd data && ./robotcar_symlinks.sh`, as shown below.
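That is:

```bash
# Point ROBOTCAR_SDK_ROOT at your SDK checkout before running;
# the script sets up the RobotCar symlinks under data/
cd data && ./robotcar_symlinks.sh
```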
For each sequence, you need to download the `stereo_centre`, `vo`, and `gps` tar files from the dataset website. The directory for each 'scene' (e.g. `loop`) contains .txt files defining the train/test split.
To make training faster, we pre-process the images using `data/process_robotcar.py`. This script undistorts the images using the camera models provided by the dataset, and scales them so that the shortest side is 256 pixels.
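An invocation sketch (the script defines its own arguments, so treat the call below as illustrative and check its argument parser):

```bash
# Undistorts the raw images with the dataset's camera models and rescales
# them so the shortest side is 256 px. See the script for the arguments it expects.
python data/process_robotcar.py
```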
Pixel and pose statistics must be calculated before any training. Use `data/dataset_mean.py`, which also saves the statistics at the proper location. We provide pre-computed values for RobotCar and 7Scenes.
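Again as a sketch (argument handling is defined in the script itself):

```bash
# Computes the pixel and pose statistics for a dataset and saves them
# where the dataloaders expect to find them.
python data/dataset_mean.py
```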
The meanings of the various command-line parameters are documented in `train.py`, and the values of the various hyperparameters are defined in `tools/options.py`.
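As an illustration only, a training run might look like the following; the flag names here are assumptions, and the authoritative list lives in `train.py` and `tools/options.py`:

```bash
# Hypothetical flags for illustration — consult train.py / tools/options.py
python train.py --dataset 7Scenes --scene heads --model AtLoc --gpus 0
```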
Inference
Trained models for some of the experiments presented in the paper can be downloaded here. The inference script is `eval.py`. Below is an example, assuming the models have been downloaded into `logs`.
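The flag names and the checkpoint filename below are assumptions for illustration; only the `logs` location comes from the note above. Check `eval.py` and `tools/options.py` for the documented options.

```bash
# Hypothetical evaluation run — flag names and the weights path are illustrative
python eval.py --dataset 7Scenes --scene heads --model AtLoc \
    --weights logs/7Scenes_heads_AtLoc/models/epoch_300.pth.tar
```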
If you find this code useful for your research, please cite our paper:
@article{wang2019atloc,
  title={AtLoc: Attention Guided Camera Localization},
  author={Wang, Bing and Chen, Changhao and Lu, Chris Xiaoxuan and Zhao, Peijun and Trigoni, Niki and Markham, Andrew},
  journal={arXiv preprint arXiv:1909.03557},
  year={2019}
}