
python - Location of stored offline data for cartopy

Where does cartopy store its offline data? Is it in the data folder under site-packages? And is there any way to trigger downloading all of the available data? I would like to copy it over to a Linux machine that is not connected to the internet; I'm currently working from an internet-connected Windows machine, so I'm hoping to download the data from there. Thanks.



1 Reply


Take a look at the config docs at http://scitools.org.uk/cartopy/docs/latest/cartopy.html. Essentially, the data is downloaded to the directory named by the data_dir item in that config. For me that looks like:

>>> import cartopy
>>> cartopy.config['data_dir']
'/home/pelson/.local/share/cartopy'

Of course, that can be configured to point wherever you like. Because I also deploy cartopy across my organisation, we configure it to use centrally stored data as well; that location is determined by the pre_existing_data_dir config item.
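
Setting that item at runtime is just a dict assignment. Here is a minimal sketch, assuming a hypothetical shared directory /shared/cartopy_data that already holds the downloaded files:

>>> import cartopy
>>> # hypothetical read-only location holding pre-downloaded data
>>> cartopy.config['pre_existing_data_dir'] = '/shared/cartopy_data'

Anything cartopy cannot find there falls back to data_dir (and, where possible, a download).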

Finally, to batch download all of the data (which amounts to several GB), there is a script in the cartopy source at tools/feature_download.py. It takes one or more feature group names (see the help below), so downloading everything is simply:

$> python tools/feature_download.py cultural-extra cultural gshhs physical

Full help is available:

$> python tools/feature_download.py --help
usage: feature_download.py [-h] [--output OUTPUT] [--dry-run] [--ignore-repo-data] GROUP_NAME [GROUP_NAME ...]

Download feature datasets.

positional arguments:
  GROUP_NAME            Feature group name: cultural-extra, cultural, gshhs, physical

optional arguments:
  -h, --help            show this help message and exit
  --output OUTPUT, -o OUTPUT
                        save datasets in the specified directory (default: user cache directory)
  --dry-run             just print the URLs to download
  --ignore-repo-data    ignore existing repo data when downloading
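
Once the downloaded files have been copied across to the offline Linux machine, a quick sanity check is to resolve a feature path without touching the network. A minimal sketch, assuming the data was copied to a hypothetical /opt/cartopy_data:

>>> import cartopy
>>> import cartopy.io.shapereader as shpreader
>>> # hypothetical location of the copied data on the offline machine
>>> cartopy.config['pre_existing_data_dir'] = '/opt/cartopy_data'
>>> # returns a local path, with no download attempted, if the shapefile is present
>>> shpreader.natural_earth(resolution='110m', category='physical', name='coastline')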
