
Python 2.7: import performance

Currently, I am importing a bunch of .py files scattered across the file system via:

def do_import(path):
    import imp
    import os

    # imp.find_module() expects a bare module name plus a list of
    # directories to search, not a file path, so split the path first.
    dir_name, file_name = os.path.split(path)
    mod_name = os.path.splitext(file_name)[0]

    fp, pathname, description = imp.find_module(mod_name, [dir_name])
    with fp:
        return imp.load_module(mod_name, fp, pathname, description)

known_py_files = ['workingDir/import_one.py', 'anotherDir/import_two.py']  # and so forth
for py_file in known_py_files:
    do_import(py_file)

When I time the .py files themselves, as shown below, the durations are on the order of 1e-5 to 1e-6 seconds.

import_one.py

import time

# The module body does essentially nothing, so this measures
# only the cost of executing the file itself.
import_stime = time.time()
import_dur = time.time() - import_stime
print import_dur

However, a call to do_import() takes on the order of 1e-3 seconds. I am guessing this is because of the overhead of the import machinery itself.
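For reference, the per-call figure comes from wrapping the call itself, roughly like this (a minimal sketch of the measurement, not the production code):

call_stime = time.time()
do_import('workingDir/import_one.py')
call_dur = time.time() - call_stime
print call_dur  # on the order of 1e-3 seconds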

This is a problem for me because I'm importing many files serially, and the import time adds up.

Is there a faster way to import than the approach mentioned above?


1 Reply


If all of the files live under one root directory, you can create an empty __init__.py file at each level of the nested directories and import the modules through the name of the root directory, e.g. import root.workingDir.import_one, given a layout like the following:

/ root
    - __init__.py
    / workingDir
        - __init__.py
        - import_one.py
    / anotherDir
        - __init__.py
        - import_two.py

That structure is called a "package".
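With the layout above, the files can then be imported like this (a minimal sketch; it assumes the directory containing root/ is on sys.path, e.g. via PYTHONPATH or sys.path.append()):

# Assumes the parent directory of root/ is on sys.path.
from root.workingDir import import_one
from root.anotherDir import import_two

A regular import like this is also cached in sys.modules, so importing the same module a second time is essentially free.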

