Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
309 views
in Technique[技术] by (71.8m points)

python - Implementing numpy.tensordot code in C++ for arrays having different dimensions

I'm trying to implement a numpy.tensordot-like sum of products for two different arrays in C++. Although I understand the implementation for arrays of the same dimensions, I'm unable to figure out how to multiply a 2-D array of shape 3 × 3 with an array of shape 3 × 600 × 600. The resulting array should have shape 3 × 600 × 600.

To build intuition, I tried to work through 3 × 3 × 3 and 3 × 3 arrays on pen and paper, but it led to inconsistent results.

A sample numpy version of my code is as below:

import numpy as np
R = np.arange(9).reshape(3,3)
XYZ = np.arange(3*600*600).reshape(3, 600, 600)

result = np.tensordot(R, XYZ, axes = 1)   

For reference, I'm attaching a link to the numpy.tensordot documentation.

question from:https://stackoverflow.com/questions/65840904/implementing-numpy-tensordot-code-in-c-for-arrays-having-different-dimensions


1 Reply

0 votes
by (71.8m points)
In [129]: R = np.arange(9).reshape(3,3)
     ...: XYZ = np.arange(3*600*600).reshape(3, 600, 600)

tensordot has 2 styles of axes, the scalar and the tuple:

In [130]: result = np.tensordot(R, XYZ, axes=1)
In [131]: result.shape
Out[131]: (3, 600, 600)

The einsum equivalent is:

In [132]: res1 = np.einsum('ij,jkl->ikl',R, XYZ)
In [133]: res1.shape
Out[133]: (3, 600, 600)
In [134]: np.allclose(result, res1)
Out[134]: True

and the tuple axes equivalent:

In [135]: res2 = np.tensordot(R, XYZ, axes=(1,0))
In [136]: res2.shape
Out[136]: (3, 600, 600)
In [137]: np.allclose(result, res2)
Out[137]: True

I think the tuple axes style was the original, and the integer axes were added on top of that. Some time ago I worked out how the integer cases are translated into the tuple inputs, but I've forgotten the details.

In any case, the sum-of-products dimension is the last axis of R and the first axis of XYZ.
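Written out as explicit loops, that contraction translates line-for-line into the C++ the question is after. A sketch (shapes shrunk from 600 × 600 to 4 × 5 purely for readability; the indexing is identical):

```python
import numpy as np

R = np.arange(9).reshape(3, 3)
XYZ = np.arange(3 * 4 * 5).reshape(3, 4, 5)  # stand-in for the 3 x 600 x 600 array

# result[i, k, l] = sum over j of R[i, j] * XYZ[j, k, l]
result = np.zeros((3, 4, 5))
for i in range(3):
    for k in range(4):
        for l in range(5):
            s = 0
            for j in range(3):          # j is the contracted (sum-of-products) axis
                s += R[i, j] * XYZ[j, k, l]
            result[i, k, l] = s

assert np.allclose(result, np.tensordot(R, XYZ, axes=1))
```

In C++ the four nested loops look the same; only the array accesses change to whatever container holds the data.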

Another way is to multiply element-wise with broadcasting, and then summing:

In [139]: res4=(R[:,:,None,None]*XYZ[None,:,:,:]).sum(axis=1)
In [140]: np.allclose(result, res4)
Out[140]: True

By collapsing the last two dimensions of XYZ into one, we get a conventional matrix product, where the sum-of-products dimension is the last axis of the first argument and the second-to-last axis of the second:

In [141]: res5 = (R@XYZ.reshape(3,-1)).reshape(3,600,600)
In [142]: np.allclose(result, res5)
Out[142]: True
In [143]: res6 = (R.dot(XYZ.reshape(3,-1))).reshape(3,600,600)
In [144]: np.allclose(result, res6)
Out[144]: True
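Since the whole operation reduces to this matrix product over a flattened view, a C++ port only needs flat row-major buffers and offset arithmetic. A sketch of that indexing, using Python lists as stand-ins for C++ arrays (shapes shrunk for readability; I, J, K, L are hypothetical names for the dimensions):

```python
import numpy as np

I, J = 3, 3          # shape of R
K, L = 4, 5          # trailing shape of XYZ (600 x 600 in the question)

R = list(range(I * J))            # flat row-major buffers, like C++ arrays
XYZ = list(range(J * K * L))
out = [0] * (I * K * L)

# Plain matrix multiply on the flattened view:
# out[i*K*L + n] = sum over j of R[i*J + j] * XYZ[j*K*L + n]
KL = K * L
for i in range(I):
    for n in range(KL):           # n enumerates the flattened (k, l) pairs
        s = 0
        for j in range(J):
            s += R[i * J + j] * XYZ[j * KL + n]
        out[i * KL + n] = s

expected = np.tensordot(np.arange(I * J).reshape(I, J),
                        np.arange(J * K * L).reshape(J, K, L), axes=1)
assert out == expected.ravel().tolist()
```

Because both the input and output use row-major (C-order) layouts, the flat result can be reinterpreted as 3 × 600 × 600 without any copying.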

I believe tensordot is effectively doing In [143].

