I have a large array of custom objects on which I need to perform independent (parallelizable) tasks, including modifying object parameters. I've tried using both a `Manager().dict` and the `sharedmem` module, but neither is working. For example:
import numpy as np
import multiprocessing as mp
import sharedmem as shm


class Tester:
    num = 0.0
    name = 'none'

    def __init__(self, tnum=num, tname=name):
        self.num = tnum
        self.name = tname

    def __str__(self):
        return '%f %s' % (self.num, self.name)


def mod(test, nn):
    test.num = np.random.randn()
    test.name = nn


if __name__ == '__main__':
    num = 10
    tests = np.empty(num, dtype=object)
    for it in range(num):
        tests[it] = Tester(tnum=it*1.0)

    sh_tests = shm.empty(num, dtype=object)
    for it in range(num):
        sh_tests[it] = tests[it]
        print sh_tests[it]

    print '\n'

    workers = [mp.Process(target=mod, args=(test, 'some')) for test in sh_tests]
    for work in workers:
        work.start()
    for work in workers:
        work.join()
    for test in sh_tests:
        print test
This prints out:
0.000000 none
1.000000 none
2.000000 none
3.000000 none
4.000000 none
5.000000 none
6.000000 none
7.000000 none
8.000000 none
9.000000 none
0.000000 none
1.000000 none
2.000000 none
3.000000 none
4.000000 none
5.000000 none
6.000000 none
7.000000 none
8.000000 none
9.000000 none
That is, the objects aren't modified.
How can I achieve the desired behavior?