
Python - matrix determinant differentiation in TensorFlow

I am interested in computing the derivative of a matrix determinant using TensorFlow. I can see from experimentation that TensorFlow has not implemented a method of differentiating through a determinant:

LookupError: No gradient defined for operation 'MatrixDeterminant' 
(op type: MatrixDeterminant)

A little further investigation revealed that it is actually possible to compute the derivative; see, for example, Jacobi's formula. I determined that, in order to implement this way of differentiating through a determinant, I need to use the function decorator,

@tf.RegisterGradient("MatrixDeterminant")
def _sub_grad(op, grad):
    ...

However, I am not familiar enough with TensorFlow to understand how this can be accomplished. Does anyone have any insight on this matter?
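For reference, Jacobi's formula mentioned above gives the derivative of the determinant in closed form for an invertible matrix A:

    ∂ det(A) / ∂A_ij = det(A) · (A^{-1})_ji,   i.e.   ∂ det(A) / ∂A = det(A) · A^{-T}

so the gradient only requires the determinant itself and the matrix inverse, both of which TensorFlow already computes.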

Here's an example where I run into this issue:

import tensorflow as tf

x = tf.Variable(tf.ones(shape=[1]))
y = tf.Variable(tf.ones(shape=[1]))

A = tf.reshape(
    tf.pack([tf.sin(x), tf.zeros([1, ]), tf.zeros([1, ]), tf.cos(y)]), (2, 2)
)
loss = tf.square(tf.matrix_determinant(A))

optimizer = tf.train.GradientDescentOptimizer(0.001)
train = optimizer.minimize(loss)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

for step in range(100):
    sess.run(train)
    print(sess.run(x))


1 Reply


Please check the "Implement Gradient in Python" section here.

In particular, you can implement it as follows:

import tensorflow as tf
from tensorflow.python.framework import ops

@ops.RegisterGradient("MatrixDeterminant")
def _MatrixDeterminantGrad(op, grad):
  """Gradient for MatrixDeterminant.

  Uses formula 2.2.4 from "An extended collection of matrix derivative results
  for forward and reverse mode algorithmic differentiation" by Mike Giles
  -- http://eprints.maths.ox.ac.uk/1079/1/NA-08-01.pdf
  """
  A = op.inputs[0]       # the input matrix
  C = op.outputs[0]      # its determinant
  Ainv = tf.matrix_inverse(A)
  return grad * C * tf.transpose(Ainv)
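
Before wiring this into an optimizer, you can sanity-check the registered gradient against Jacobi's formula evaluated directly in NumPy (a minimal sketch; it assumes the registration above has already run, and the variable names here are just for illustration):

import numpy as np

a_np = np.array([[1., 2.], [3., 4.]], dtype=np.float32)

# Build a tiny graph whose gradient flows through MatrixDeterminant.
a_check = tf.constant(a_np)
det = tf.matrix_determinant(a_check)
grad_to_input = tf.gradients(det, a_check)[0]

with tf.Session() as check_sess:
  analytic = check_sess.run(grad_to_input)

# Jacobi's formula evaluated directly in NumPy for comparison.
expected = np.linalg.det(a_np) * np.linalg.inv(a_np).T
print(analytic)
print(expected)  # the two should agree up to floating-point error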

Then a simple training loop to check that it works:

import numpy as np

a0 = np.array([[1, 2], [3, 4]]).astype(np.float32)
a = tf.Variable(a0)
b = tf.square(tf.matrix_determinant(a))
init_op = tf.initialize_all_variables()
sess = tf.InteractiveSession()
init_op.run()

minimization_steps = 50
learning_rate = 0.001
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(b)

losses = []
for i in range(minimization_steps):
  train_op.run()
  losses.append(b.eval())

Then you can visualize your loss over time:

import matplotlib.pyplot as plt

plt.ylabel("Determinant Squared")
plt.xlabel("Iterations")
plt.plot(losses)
plt.show()

You should see something like this: [loss plot]

