
python - numpy faster than numba and cython, how to improve numba code

I have a simple example here to help me understand using numba and cython. I am new to both numba and cython. I've tried my best to incorporate all the tricks to make numba fast, and to some extent the same for cython, but my numpy code is almost 2x faster than numba (for float64), and more than 2x faster when using float32. Not sure what I am missing here.

I was thinking perhaps the problem isn't the code anymore but more about the compiler and such things, which I'm not very familiar with.

I've gone through a lot of StackOverflow posts about numpy, numba and cython and found no straight answer.

numpy version:

def py_expsum(x):
    return np.sum( np.exp(x) )

numba version:

@numba.jit( nopython=True)    
def nb_expsum(x):
    nx, ny = x.shape
    val = 0.0
    for ix in range(nx):
        for iy in range(ny):
            val += np.exp(x[ix, iy])
    return val

Cython version:

import numpy as np
import cython
from libc.math cimport exp

@cython.boundscheck(False) 
@cython.wraparound(False)
cpdef double cy_expsum2(double[:,:] x, int nx, int ny):
    cdef:
        double val = 0.0
        int ix, iy
    for ix in range(nx):
        for iy in range(ny):
            val += exp(x[ix, iy])
    return val

I play with an array of size 2000 x 1000 and loop over it 100 times. For numba, the first call (which triggers compilation) is not counted in the timing.
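
For reference, a minimal sketch of such a timing harness (assuming the functions above; cy_expsum2 additionally takes nx and ny):

import time
import numpy as np

x = np.random.rand(2000, 1000)
nb_expsum(x)  # warm-up: the first call compiles, so it is excluded from the timing

start = time.perf_counter()
for _ in range(100):
    py_expsum(x)  # swap in nb_expsum or cy_expsum2(x, 2000, 1000) to compare
print(time.perf_counter() - start, "sec")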

Using Python 3 (Anaconda distribution) on Windows 10:

               float64       /   float32
    1. numpy : 0.56 sec      /   0.23 sec
    2. numba : 0.93 sec      /   0.74 sec      
    3. cython: 0.83 sec

cython is close to numba. So the big question for me is: why can't numba beat numpy's runtime? What did I do wrong or miss here? How do other factors contribute, and how do I find out?


1 Reply


As we will see, the behavior depends on which numpy distribution is used.

This answer focuses on the Anaconda distribution with Intel's VML (vector math library); mileage may vary with other hardware and numpy versions.

It will also be shown how VML can be utilized via Cython or numexpr, in case one doesn't use the Anaconda distribution, which plugs in VML under the hood for some numpy operations.


I can reproduce your results for the following dimensions:

N,M=2*10**4, 10**3
a=np.random.rand(N, M)

I get:

%timeit py_expsum(a)  #   87ms
%timeit nb_expsum(a)  #  672ms
%timeit nb_expsum2(a)  #  412ms

The lion's share (about 90%) of the calculation time is spent evaluating the exp function, and as we will see, it is a CPU-intensive task.

A quick glance at the statistics from top shows that numpy's version is executed in parallel, but this is not the case for numba. However, on my VM with only two processors, the parallelization alone cannot explain the huge factor-7 difference (as shown by DavidW's version nb_expsum2).

Profiling the code via perf for both versions shows the following:

nb_expsum

Overhead  Command  Shared Object                                      Symbol                                                             
  62,56%  python   libm-2.23.so                                       [.] __ieee754_exp_avx
  16,16%  python   libm-2.23.so                                       [.] __GI___exp
   5,25%  python   perf-28936.map                                     [.] 0x00007f1658d53213
   2,21%  python   mtrand.cpython-37m-x86_64-linux-gnu.so             [.] rk_random

py_expsum

  31,84%  python   libmkl_vml_avx.so                                  [.] mkl_vml_kernel_dExp_E9HAynn                                   ?
   9,47%  python   libiomp5.so                                        [.] _INTERNAL_25_______src_kmp_barrier_cpp_38a91946::__kmp_wait_te?
   6,21%  python   [unknown]                                          [k] 0xffffffff8140290c                                            ?
   5,27%  python   mtrand.cpython-37m-x86_64-linux-gnu.so             [.] rk_random  

As one can see: numpy uses Intel's parallelized, vectorized mkl/vml version under the hood, which easily outperforms the version from the GNU math library (libm.so) used by numba (or by the parallel version of numba, or by cython for that matter). One could level the ground a little bit by using parallelization, but mkl's vectorized version would still outperform numba and cython.

However, looking at the performance for only one size isn't very enlightening, and in the case of exp (as for other transcendental functions) there are two dimensions to consider:

  • the number of elements in the array - cache effects and different algorithms for different sizes (not unheard of in numpy) can lead to different performance.
  • the value of x itself - the time needed to calculate exp(x) depends on it. Normally there are three types of input leading to different calculation times: very small, normal and very big (with non-finite results). A quick check of this second point follows this list.
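
A minimal sketch of such a check (the factors 0.0, 1.0 and 1e4 mirror those used in the appendix to select the three regimes):

import numpy as np

base = np.random.rand(10**6)
x_small  = 0.0 * base   # all results are exp(0.0) == 1.0
x_normal = 1.0 * base   # the "normal" range
x_big    = 1e4 * base   # exp overflows to inf

%timeit np.exp(x_small)
%timeit np.exp(x_normal)
%timeit np.exp(x_big)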

I'm using perfplot to visualize the results (see the code in the appendix). For the "normal" range we get the following performance:

[Plot: runtime vs. number of elements for the "normal" input range]

and while the performance for 0.0 is similar, we can see that Intel's VML takes quite a hit as soon as the results become infinite:

[Plot: runtime vs. number of elements for inputs whose exp becomes infinite]

However, there are other things to observe:

  • For vector sizes <= 8192 = 2^13, numpy uses the non-parallelized glibc version of exp (the same one numba and cython use); this threshold can be probed directly, see the sketch after this list.
  • The Anaconda distribution, which I use, overrides numpy's functionality and plugs in Intel's VML library for sizes > 8192, which is vectorized and parallelized - this explains the drop in running times for sizes around 10^4.
  • For smaller sizes, numba easily beats the usual glibc version (too much overhead for numpy), but (if numpy did not switch to VML) there would not be much difference for bigger arrays.
  • It seems to be a CPU-bound task - we cannot see cache boundaries anywhere.
  • The parallelized numba version only makes sense if there are more than 500 elements.
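
A sketch of how to probe the 8192-element threshold (the exact cutoff and behavior are specific to the Anaconda/VML setup measured here):

import numpy as np

below = np.random.rand(2**13)      # 8192 elements: the plain glibc exp is expected
above = np.random.rand(2**13 + 1)  # one element more: VML may take over

%timeit np.exp(below)
%timeit np.exp(above)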

So what are the consequences?

  1. If there are no more than 8192 elements, the numba version should be used.
  2. Otherwise use the numpy version (even if no VML plugin is available, it will not lose much); a simple dispatching wrapper is sketched below.
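
Folded into code, these two rules give a simple dispatching wrapper (a sketch; the 8192 cutoff is specific to the setup measured here):

def expsum(x):
    # small arrays: numba's compiled loop has less overhead;
    # larger arrays: numpy may dispatch to the parallel VML exp
    return nb_expsum(x) if x.size <= 8192 else py_expsum(x)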

NB: numba cannot automatically use vdExp from Intel's VML (as partly suggested in the comments), because it calculates exp(x) element-wise, while VML operates on a whole array.
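
For completeness, vdExp can also be called from pure Python via ctypes - a hypothetical sketch, assuming an MKL runtime named libmkl_rt.so on the library path and its default LP64 interface (32-bit MKL_INT):

import ctypes
import numpy as np

# hypothetical: the library name/path depends on the installation,
# and the default libmkl_rt interface is LP64 (32-bit MKL_INT)
mkl = ctypes.CDLL("libmkl_rt.so")

def vml_exp(x):
    # vdExp(n, a, y): element-wise exp of n doubles from a into y
    x = np.ascontiguousarray(x, dtype=np.float64)
    y = np.empty_like(x)
    ptr_t = ctypes.POINTER(ctypes.c_double)
    mkl.vdExp(ctypes.c_int(x.size),
              x.ctypes.data_as(ptr_t),
              y.ctypes.data_as(ptr_t))
    return y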


One could reduce the cache misses of the numpy version (which writes the whole exp result to memory and then loads it again for the sum) by using the following algorithm:

  1. Perform VML's vdExp on a part of the data which fits the cache, but which is also not too small (to avoid overhead).
  2. Sum up the resulting working array.
  3. Perform 1.+2. for the next part of the data, until all of the data is processed.

However, I would not expect to gain more than 10% compared to numpy's version (but maybe I'm wrong), as 90% of the computation time is spent in VML anyway.

Nevertheless, here is a possible quick&dirty implementation in Cython:

%%cython -L=<path_mkl_libs> --link-args=-Wl,-rpath=<path_mkl_libs> --link-args=-Wl,--no-as-needed -l=mkl_intel_ilp64 -l=mkl_core -l=mkl_gnu_thread -l=iomp5
# path to mkl can be found via np.show_config()
# which libraries needed: https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor

# another option would be to wrap mkl.h:
cdef extern from *:
    """
    // MKL_INT is 64bit integer for mkl-ilp64
    // see https://software.intel.com/en-us/mkl-developer-reference-c-c-datatypes-specific-to-intel-mkl
    #define MKL_INT long long int
    void  vdExp(MKL_INT n, const double *x, double *y);
    """
    void vdExp(long long int n, const double *x, double *y)

def cy_expsum(const double[:, ::1] v):  # require C-contiguous input: the pointer arithmetic below relies on it
    cdef:
        double[1024] w       # working buffer, small enough to stay in cache
        int n = v.size
        int current = 0
        double res = 0.0
        int size = 0
        int i = 0
    while current < n:
        size = n - current
        if size > 1024:
            size = 1024
        vdExp(size, &v[0,0] + current, w)  # exp of the next chunk into the buffer
        for i in range(size):
            res += w[i]
        current += size
    return res
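
A quick sanity check and timing against the numpy version (a usage sketch, assuming the 2*10**4 x 10**3 array from the timings above):

a = np.random.rand(2*10**4, 10**3)
assert np.isclose(cy_expsum(a), py_expsum(a))
%timeit cy_expsum(a)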

However, this is exactly what numexpr does, which also uses Intel's VML as backend (and evaluates the expression in cache-sized chunks):

import numexpr as ne

def ne_expsum(x):
    return ne.evaluate("sum(exp(x))")

As for timings, we see the following:

[Plot: runtime comparison of the numpy, numexpr and cython versions]

with the following noteworthy details:

  • The numpy, numexpr and cython versions have almost the same performance for bigger arrays - which is not surprising, because they use the same VML functionality.
  • Of these three, the cython version has the least overhead and numexpr the most.
  • The numexpr version is probably the easiest to write (given that not every numpy distribution plugs in VML functionality).

Listings:

Plots:

import numpy as np
def py_expsum(x):
    return np.sum(np.exp(x))

import numba as nb
@nb.jit( nopython=True)    
def nb_expsum(x):
    nx, ny = x.shape
    val = 0.0
    for ix in range(nx):
        for iy in range(ny):
            val += np.exp( x[ix, iy] )
    return val

@nb.jit( nopython=True, parallel=True)    
def nb_expsum2(x):
    nx, ny = x.shape
    val = 0.0
    for ix in range(nx):
        for iy in nb.prange(ny):
            val += np.exp( x[ix, iy]   )
    return val

import perfplot
factor = 1.0 # 0.0 or 1e4
perfplot.show(
    setup=lambda n: factor*np.random.rand(1,n),
    n_range=[2**k for k in range(0,27)],
    kernels=[
        py_expsum, 
        nb_expsum,
        nb_expsum2, 
        ],
    logx=True,
    logy=True,
    xlabel='len(x)'
    )
