
Why can't I get the right sum of a 1D array with Numba (CUDA Python)?

I am trying to use CUDA Python with Numba. The code below calculates the sum of a 1D array, but I don't know how to get a single-value result rather than three values.

Python 3.5 with Numba + CUDA 8.0

import os,sys,time
import pandas as pd
import numpy as np
from numba import cuda, float32

os.environ['NUMBAPRO_NVVM'] = r'D:\NVIDIA GPU Computing Toolkit\CUDA\v8.0\nvvm\bin\nvvm64_31_0.dll'
os.environ['NUMBAPRO_LIBDEVICE'] = r'D:\NVIDIA GPU Computing Toolkit\CUDA\v8.0\nvvm\libdevice'

bpg = (1, 1)  # blocks per grid
tpb = (1, 3)  # threads per block

@cuda.jit
def calcu_sum(D, T):
    ty = cuda.threadIdx.y
    bh = cuda.blockDim.y
    index_i = ty
    L = len(D)
    su = 0
    while index_i < L:
        su += D[index_i]      # strided partial sum for this thread
        index_i += bh
    print('su:', su)
    T[0, 0] = su              # every thread writes to the same element
    print('T:', T[0, 0])


D = np.array([ 0.42487645,0.41607881,0.42027071,0.43751907,0.43512794,0.43656972,
               0.43940639,0.43864551,0.43447691,0.43120232], dtype=np.float32)
T = np.empty([1,1])
print('D: ',D)

stream = cuda.stream()
with stream.auto_synchronize():
    dD = cuda.to_device(D, stream)
    dT = cuda.to_device(T, stream)
    calcu_sum[bpg, tpb, stream](dD, dT)

The output is:

D:  [ 0.42487645  0.41607881  0.42027071  0.43751907  0.43512794  0.43656972
  0.43940639  0.43864551  0.43447691  0.43120232]
su:  1.733004
su:  1.289852
su:  1.291317
T: 1.733004
T: 1.289852
T: 1.291317

Why do I get the three values "1.733004 1.289852 1.291317" rather than the single sum "4.31417383"? (1.733004 + 1.289852 + 1.291317 = 4.314173.)

I'm new to Numba and have read the Numba documentation, but I don't know how to do this. Can someone give me some advice?



1 Reply


The reason you don't get the sum you expect is that you haven't written any code to produce that sum.

The basic CUDA programming model (whether you use CUDA C, Fortran or Python as your language) is that you write kernel code which is executed by each thread. You have written code for each thread to read and sum part of the input array. You have not written any code for those threads to share and sum their individual partial sums into a final sum.

There is an extremely well-described algorithm for doing this: it is called a parallel reduction. You can find an introduction to the algorithm in the reduction PDF that ships with the examples in every version of the CUDA toolkit, and in the many presentations about it that are available online. There is also a more modern version of the algorithm that uses newer CUDA features (warp shuffle instructions and atomic operations).
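
To give a flavour of that atomic approach, here is a minimal sketch of my own (not code from that material): each thread accumulates a strided partial sum, then folds it into a single output element with cuda.atomic.add. I use a float64 output buffer so the atomic add matches the accumulator's type.

import numpy as np
from numba import cuda

@cuda.jit
def atomic_sum(D, T):
    i = cuda.grid(1)             # absolute thread index across the whole grid
    stride = cuda.gridsize(1)    # total number of threads launched
    su = 0.0
    while i < D.shape[0]:
        su += D[i]               # strided partial sum for this thread
        i += stride
    cuda.atomic.add(T, 0, su)    # atomically fold the partial sum into T[0]

D = np.arange(1, 11).astype(np.float32)
T = np.zeros(1, dtype=np.float64)
atomic_sum[1, 32](D, T)          # Numba copies the NumPy arrays to and from the device
print(T[0])                      # 55.0

This trades the explicit shared-memory step for serialised atomic additions, which is perfectly adequate when only a single small sum is needed.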

After you have studied the reduction algorithm, you will need to adapt the standard CUDA C kernel code into the Numba Python kernel dialect. At the bare minimum, something like this:

tpb = (1, 3)   # threads per block, also used as the shared-memory array shape

@cuda.jit
def calcu_sum(D, T):

    ty = cuda.threadIdx.y
    bh = cuda.blockDim.y
    index_i = ty
    sbuf = cuda.shared.array(tpb, float32)   # one shared slot per thread in the block

    L = len(D)
    su = 0
    while index_i < L:
        su += D[index_i]
        index_i += bh

    print('su:', su)

    sbuf[0, ty] = su          # each thread publishes its partial sum
    cuda.syncthreads()        # wait until all partial sums are in shared memory

    if ty == 0:
        T[0, 0] = 0
        for i in range(0, bh):
            T[0, 0] += sbuf[0, i]    # thread 0 accumulates the final sum
        print('T:', T[0, 0])

will probably do what you want, although it is still a long way from an optimal parallel shared-memory reduction, as you will see when you read the material mentioned above.
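
For completeness, here is a sketch of the host side to go with it, mirroring the question's setup, with a float32 output buffer and the result copied back explicitly before printing:

D = np.array([0.42487645, 0.41607881, 0.42027071, 0.43751907, 0.43512794,
              0.43656972, 0.43940639, 0.43864551, 0.43447691, 0.43120232],
             dtype=np.float32)
T = np.zeros((1, 1), dtype=np.float32)

stream = cuda.stream()
with stream.auto_synchronize():
    dD = cuda.to_device(D, stream)
    dT = cuda.to_device(T, stream)
    calcu_sum[(1, 1), tpb, stream](dD, dT)
    dT.copy_to_host(T, stream)   # bring the single result back to the host

print(T[0, 0])                   # ~4.314174, the sum of the whole array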

