
Python Multiprocessing with Shared Variables to speed up an Expensive Computation

I understand this is a slightly vague and open-ended question, but I need some help in this area, as a quick Google/Stack Overflow search hasn't yielded useful information.

The basic idea is to use multiple processes to speed up an expensive computation that currently gets executed sequentially in a loop. The caveat is that I have two significant data structures that are accessed by the expensive function:

  • one data structure will be read by all processes but is never modified by any process (so it could be copied to each process, assuming memory size isn't an issue, which, in this case, it isn't)
  • the other data structure will spend most of its time being read by processes, but will occasionally be written to, and that update needs to be propagated to all processes from that point onwards

Currently the program works very basically like so:

def do_all_the_things(self):
    read_only_obj = {...}
    read_write_obj = {...}

    output = []
    for i in range(4):
        for j in range(4):
            output.append(do_expensive_operation(read_only_obj, read_write_obj))

    return output

In a uniprocessor world this is fine, as any changes made to read_write_obj are applied and read sequentially.

What I am looking to do is to run each instance of do_expensive_operation in a separate process so that a multiprocessor can be fully utilised.

The two things I am looking to understand are:

  1. How does the whole multiprocessing thing work? I have seen Queues and Pools and don't understand which I should be using in this situation.
  2. I have a feeling sharing memory (read_only_obj and read_write_obj) is going to be complicated. Is this possible? Advisable? And how do I go about it?

Thank you for your time!



1 Reply


Disclaimer: I will help you and provide a working example, but I am not an expert in this topic.

Point 1 has been answered here to some extent.
Point 2 has been answered here to some extent.
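
On Point 1: a Pool is usually the simplest choice when you just want to map a function over a list of inputs and collect the results in order, while Queues give you more manual control over how work and results move between processes. As a rough sketch of the Pool approach (the function body and data here are made up for illustration):

from multiprocessing import Pool
import time

def do_expensive_operation(x):
    # stand-in for the real expensive computation
    time.sleep(1)
    return x * x

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5]
    # Pool.map farms the calls out to worker processes and returns
    # the results in the same order as the inputs
    with Pool(processes=4) as pool:
        results = pool.map(do_expensive_operation, data)
    print(results)  # [1, 4, 9, 16, 25]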

I have used different options in the past for CPU-bound tasks in Python, and here is one toy example for you to follow:

from multiprocessing import Process, Queue
import time

def do_something(n_order, x, queue):
    # simulate an expensive computation
    time.sleep(5)
    # tag the result with its input position so it can be re-ordered later
    queue.put((n_order, x))


def main():
    data = [1, 2, 3, 4, 5]
    queue = Queue()
    processes = [Process(target=do_something, args=(n, x, queue))
                 for n, x in enumerate(data)]

    for p in processes:
        p.start()

    for p in processes:
        p.join()

    # results arrive in whatever order the processes finish
    unsorted_result = [queue.get() for _ in processes]

    # restore the original input order and drop the index tag
    result = [i[1] for i in sorted(unsorted_result)]
    print(result)


if __name__ == "__main__":
    main()
You can write the same thing as a plain loop instead of using queues, compare the time consumed (in this silly case the cost is just the sleep, for testing purposes), and you will see that you shorten the time approximately by the number of processes that you run, as expected. In fact, these are the results on my computer for the exact script I provided (first the multiprocess run, then the loop):

[1, 2, 3, 4, 5]

real    0m5.240s
user    0m0.397s
sys 0m0.260s

[1, 4, 9, 16, 25]

real    0m25.104s
user    0m0.051s
sys 0m0.030s
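
The loop version I timed is not reproduced above; a minimal sequential counterpart of the toy script might look roughly like this (here do_something just sleeps and returns its input, so the printed values may differ from the timing run above):

import time

def do_something(x):
    # same stand-in cost as the multiprocess version
    time.sleep(5)
    return x

def main():
    data = [1, 2, 3, 4, 5]
    # each call blocks for 5 seconds, so the total runtime is roughly
    # 5 seconds multiplied by the number of items
    result = [do_something(x) for x in data]
    print(result)

if __name__ == "__main__":
    main()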

With respect to the read-only and read-write objects, I will need more information to provide help. What type of objects are they? Are they indexed?
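
As a general pointer, though: a purely read-only object can simply be passed as an argument, since each process gets its own copy, while the standard-library option for the occasionally-written object is a multiprocessing.Manager, which hosts the object in a server process so that updates become visible to every worker that reads it afterwards. A minimal sketch, assuming the objects are dict-like (the names mirror your question, but the update logic is invented for illustration):

from multiprocessing import Pool, Manager

def do_expensive_operation(args):
    i, read_only_obj, read_write_obj = args
    # reads go through the proxy and see the latest state held by the manager
    current = read_write_obj["threshold"]
    result = i + read_only_obj["offset"] + current
    # occasional write: visible to all processes that read from now on
    if i % 4 == 0:
        read_write_obj["threshold"] = current + 1
    return result

if __name__ == "__main__":
    read_only_obj = {"offset": 100}  # plain dict, copied into each worker
    with Manager() as manager:
        read_write_obj = manager.dict({"threshold": 0})  # shared, proxy-backed dict
        tasks = [(i, read_only_obj, read_write_obj) for i in range(16)]
        with Pool(processes=4) as pool:
            output = pool.map(do_expensive_operation, tasks)
        print(output)

Note that the read-then-write on "threshold" above is not atomic; if several processes can write at once you would guard that section with a manager.Lock(). Whether this is advisable depends on how often you write: every access to a managed object is a round trip to the manager process, so it is fine for occasional updates but slow if it sits on your hot path.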

