conv neural network - Visualizing output of convolutional layer in tensorflow

I'm trying to visualize the output of a convolutional layer in TensorFlow using the function tf.image_summary. I'm already using it successfully in other instances (e.g. visualizing the input image), but I'm having some difficulty reshaping the output correctly here. I have the following conv layer:

img_size = 256
x_image = tf.reshape(x, [-1, img_size, img_size, 1], "sketch_image")  # NHWC, single input channel

W_conv1 = weight_variable([5, 5, 1, 32])  # 5x5 kernels, 1 input channel, 32 filters
b_conv1 = bias_variable([32])

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
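(weight_variable, bias_variable and conv2d are the usual MNIST-tutorial-style helpers, roughly:)

def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variable(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')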

So the output of h_conv1 would have the shape [-1, img_size, img_size, 32]. Just using tf.image_summary("first_conv", tf.reshape(h_conv1, [-1, img_size, img_size, 1])) doesn't account for the 32 different kernels, so I'm basically slicing through different feature maps here.
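One obvious workaround would be to move the channel axis into the batch axis, so that each feature map of the first input image gets logged as its own grayscale image (a sketch; the summary tag and max_images value are arbitrary), but that gives 32 separate images rather than one combined view:

maps = tf.slice(h_conv1, (0, 0, 0, 0), (1, -1, -1, -1))   # first image only: 1 x 256 x 256 x 32
maps = tf.transpose(maps, (3, 1, 2, 0))                   # 32 x 256 x 256 x 1
tf.image_summary("first_conv_maps", maps, max_images=32)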

How can I reshape them correctly? Or is there another helper function I could use for including this output in the summary?



1 Reply


I don't know of a helper function, but if you want to see all the filters you can pack them into one image with some fancy use of tf.transpose.

So if you have a tensor that's images x iy x ix x channels (TensorFlow's NHWC layout):

>>> V = h_conv1  # or any tensor that's batch x iy x ix x channels
>>> print V.get_shape()

TensorShape([Dimension(None), Dimension(256), Dimension(256), Dimension(32)])

So in this example ix = 256, iy = 256, channels = 32.
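To make the snippets below run as written, define those sizes up front and start from the question's layer output (these assignments are mine):

V = h_conv1        # shape (?, 256, 256, 32)
ix = iy = 256
channels = 32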

First slice off one image, and remove the image (batch) dimension:

V = tf.slice(V, (0, 0, 0, 0), (1, -1, -1, -1))  # V[0, ...]
V = tf.reshape(V, (iy, ix, channels))

Next, add a couple of pixels of zero padding around the image:

ix += 4
iy += 4
V = tf.image.resize_image_with_crop_or_pad(V, iy, ix)

Then reshape so that instead of 32 channels you have 4x8 channels; let's call them cy = 4 and cx = 8:

V = tf.reshape(V, (iy, ix, cy, cx))

Now the tricky part. tf seems to return results in C-order, numpy's default.

The current order, if flattened, would list all the channels for the first pixel (iterating over cx and cy) before listing the channels of the second pixel (incrementing ix), going across a row of pixels (ix) before incrementing to the next row (iy).

We want the order that would lay out the images in a grid. So you go across a row of an image (ix) before stepping along the row of channels (cx); when you hit the end of the row of channels you step to the next row in the image (iy), and when you run out of rows in the image you increment to the next row of channels (cy). So:

V = tf.transpose(V, (2, 0, 3, 1))  # cy, iy, cx, ix
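A quick way to convince yourself the ordering is right is to replay the same reshape/transpose on a tiny numpy array in which every pixel of channel c holds the value c (just a sanity check, not part of the graph; it reuses the same names, so run it on its own):

import numpy as np

iy, ix, cy, cx = 2, 3, 4, 8                    # tiny sizes, easy to inspect
channels = cy * cx
V = np.tile(np.arange(channels), (iy, ix, 1))  # V[y, x, c] == c for every pixel

V = V.reshape(iy, ix, cy, cx)
V = V.transpose(2, 0, 3, 1)                    # cy, iy, cx, ix
grid = V.reshape(cy * iy, cx * ix)

# Tile (row r, column c) of the grid is filled with channel number r*cx + c,
# i.e. the channels are laid out left to right, top to bottom.
assert grid[0, 0] == 0
assert grid[1 * iy, 2 * ix] == 1 * cx + 2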

Personally I prefer np.einsum for fancy transposes, for readability, but it's not in tf yet.

newtensor = np.einsum('yxYX->YyXx',oldtensor)

Anyway, now that the pixels are in the right order, we can safely flatten it into a single 2-D image (wrapped back into the 4-D shape that image_summary expects):

# image_summary needs 4-D input
V = tf.reshape(V, (1, cy * iy, cx * ix, 1))

Try tf.image_summary on that, and you should get a grid of little images.
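Putting the steps together, a sketch of a reusable helper might look like this (the function name is mine, it assumes cy*cx equals the number of channels, and on TensorFlow 1.x and later you would call tf.summary.image instead of tf.image_summary):

def feature_maps_to_grid(V, iy, ix, channels, cy, cx, pad=2):
    # Pack the feature maps of the first image in the batch
    # into a single (cy*iy) x (cx*ix) grayscale grid image.
    V = tf.slice(V, (0, 0, 0, 0), (1, -1, -1, -1))       # first image only
    V = tf.reshape(V, (iy, ix, channels))

    iy += 2 * pad                                        # zero padding so the
    ix += 2 * pad                                        # tiles are separated
    V = tf.image.resize_image_with_crop_or_pad(V, iy, ix)

    V = tf.reshape(V, (iy, ix, cy, cx))
    V = tf.transpose(V, (2, 0, 3, 1))                    # cy, iy, cx, ix
    return tf.reshape(V, (1, cy * iy, cx * ix, 1))       # 4-D, as image_summary expects

grid = feature_maps_to_grid(h_conv1, iy=256, ix=256, channels=32, cy=4, cx=8)
tf.image_summary("first_conv_grid", grid)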

Below is an image of what one gets after following all the steps here.

[Image: the resulting grid of first-layer feature maps]

