python - How to compute all second derivatives (only the diagonal of the Hessian matrix) in Tensorflow?

I have a loss value/function and I would like to compute all the second derivatives with respect to a tensor f (of size n). I managed to use tf.gradients twice, but when applying it the second time, it sums the derivatives across the components of its first argument (see second_derivatives in my code).

Also, I managed to retrieve the full Hessian matrix, but I would like to compute only its diagonal to avoid extra computation.

import tensorflow as tf
import numpy as np

f = tf.Variable(np.array([[1., 2., 0.]]).T)
loss = tf.reduce_prod(f ** 2 - 3 * f + 1)

# First derivatives d loss / d f_i, shape (3, 1).
first_derivatives = tf.gradients(loss, f)[0]

# Second call: this sums the second derivatives over the components of
# first_derivatives instead of returning one per component.
second_derivatives = tf.gradients(first_derivatives, f)[0]

# Full Hessian, built one row at a time.
hessian = [tf.gradients(first_derivatives[i, 0], f)[0][:, 0] for i in range(3)]

model = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(model)
    print "\nloss\n", sess.run(loss)
    print "\nloss'\n", sess.run(first_derivatives)
    print "\nloss''\n", sess.run(second_derivatives)
    hessian_value = np.array(map(list, sess.run(hessian)))
    print "\nHessian\n", hessian_value

My thinking was that tf.gradients(first_derivatives, f[0, 0])[0] would retrieve, for instance, the second derivative with respect to f_0, but it seems that TensorFlow does not allow differentiating with respect to a slice of a tensor.
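A minimal check of that failure mode (my own sketch, consistent with the behavior described in the answer below): asking for the gradient against a fresh slice simply returns None.

# f[0, 0] creates a new Slice node that is not an ancestor of loss,
# so tf.gradients has no path to differentiate through and returns [None].
print tf.gradients(first_derivatives, f[0, 0])  # -> [None]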

1 Reply

tf.gradients([f1, f2, f3], ...) computes the gradient of f = f1 + f2 + f3, which is why differentiating the full gradient a second time sums the second derivatives.

Also, differentiating with respect to x[0] is problematic because x[0] refers to a new Slice node which is not an ancestor of your loss, so the derivative with respect to it will be None. You could get around this by using pack to glue x[0], x[1], ... together into xx and have your loss depend on xx instead of x (a sketch of this pack approach is shown after the example below). An alternative is to use separate variables for the individual components, in which case computing the Hessian would look something like this:

def replace_none_with_zero(l):
    # tf.gradients returns None for unconnected inputs; substitute a
    # zero so the rows can still be packed into a dense matrix.
    return [0. if i is None else i for i in l]

tf.reset_default_graph()

x = tf.Variable(1.)
y = tf.Variable(1.)
loss = tf.square(x) + tf.square(y)

grads = tf.gradients([loss], [x, y])
# Each row of the Hessian is the gradient of one first derivative.
hess0 = replace_none_with_zero(tf.gradients([grads[0]], [x, y]))
hess1 = replace_none_with_zero(tf.gradients([grads[1]], [x, y]))
hessian = tf.pack([tf.pack(hess0), tf.pack(hess1)])

sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
print hessian.eval()

You'll see

[[ 2.  0.]
 [ 0.  2.]]
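For the pack-based workaround mentioned above, here is a minimal sketch under the same TF 0.x API (tf.unpack/tf.pack; the names components, xx and hess_diag are mine, not from the original answer). As a bonus, it computes only the diagonal of the Hessian, which is what the question asked for:

import tensorflow as tf
import numpy as np

tf.reset_default_graph()

x = tf.Variable(np.array([1., 2., 0.]))
components = tf.unpack(x)     # the scalar slices x[0], x[1], x[2]
xx = tf.pack(components)      # glued back together
loss = tf.reduce_prod(xx ** 2 - 3 * xx + 1)  # loss now depends on the slices

grads = tf.gradients(loss, components)  # one first derivative per component
# Diagonal of the Hessian only: differentiate the i-th partial
# with respect to the i-th component.
hess_diag = [tf.gradients(grads[i], components[i])[0] for i in range(3)]

sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
print sess.run(hess_diag)  # should print [-2., -2., 2.] for this loss

Since each component is now an ancestor of the loss in its own right, only n extra gradient calls are needed for the diagonal instead of materializing the full n x n Hessian.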
