tensorflow - Dropout with densely connected layer

I am using a DenseNet model for one of my projects and am having some difficulties with regularization.

Without any regularization, both validation and training loss (MSE) decrease. The training loss drops faster though, resulting in some overfitting of the final model.

So I decided to use dropout to avoid overfitting. With dropout, both validation and training loss drop to about 0.13 during the first epoch and then stay roughly constant for about 10 epochs.

After that, both losses decrease in the same way as without dropout, and I end up overfitting again. The final loss value is in about the same range as without dropout.

So for me it seems like dropout is not really working.

If I switch to L2 regularization though, I am able to avoid overfitting, but I would rather use dropout as the regularizer.
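
For context, the L2 variant is just the usual TF1 pattern of adding a weight penalty on the kernels to the MSE loss; a minimal sketch (not my exact code, the stand-in layer and the coefficient are only illustrative):

import tensorflow as tf  # TF1-style graph code, matching the snippets below

x = tf.placeholder(tf.float32, [None, 16])
y = tf.placeholder(tf.float32, [None, 1])

# Stand-in output head, just so there are trainable kernels to regularize
pred = tf.layers.dense(x, 1, name='head')

mse_loss = tf.losses.mean_squared_error(labels=y, predictions=pred)

# L2 penalty over the trainable kernel variables, scaled by an assumed coefficient
l2_lambda = 1e-4
kernels = [v for v in tf.trainable_variables() if 'kernel' in v.name]
l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in kernels])

total_loss = mse_loss + l2_lambda * l2_loss
train_op = tf.train.AdamOptimizer(1e-3).minimize(total_loss)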

Now I am wondering: has anyone else experienced this kind of behaviour?

I use dropout in both the dense block (bottleneck layer) and in the transition block (dropout rate = 0.5):

def bottleneck_layer(self, x, scope):
    # DenseNet-B bottleneck: BN -> ReLU -> 1x1 conv -> dropout,
    # then BN -> ReLU -> 3x3 conv -> dropout
    with tf.name_scope(scope):
        x = Batch_Normalization(x, training=self.training, scope=scope + '_batch1')
        x = Relu(x)
        x = conv_layer(x, filter=4 * self.filters, kernel=[1, 1], layer_name=scope + '_conv1')
        x = Drop_out(x, rate=dropout_rate, training=self.training)

        x = Batch_Normalization(x, training=self.training, scope=scope + '_batch2')
        x = Relu(x)
        x = conv_layer(x, filter=self.filters, kernel=[3, 3], layer_name=scope + '_conv2')
        x = Drop_out(x, rate=dropout_rate, training=self.training)

        return x

def transition_layer(self, x, scope):
    # Transition block: BN -> ReLU -> 1x1 conv -> dropout -> 2x2 average pooling
    with tf.name_scope(scope):
        x = Batch_Normalization(x, training=self.training, scope=scope + '_batch1')
        x = Relu(x)
        x = conv_layer(x, filter=self.filters, kernel=[1, 1], layer_name=scope + '_conv1')
        x = Drop_out(x, rate=dropout_rate, training=self.training)
        x = Average_pooling(x, pool_size=[2, 2], stride=2)

        return x
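
The helpers are thin wrappers around the tf.layers API; roughly like this (sketched from memory, the exact wrappers may differ slightly):

import tensorflow as tf

def Drop_out(x, rate, training):
    # Dropout is only active while the training flag is True;
    # at evaluation time the layer acts as an identity
    return tf.layers.dropout(inputs=x, rate=rate, training=training)

def Average_pooling(x, pool_size=[2, 2], stride=2, padding='VALID'):
    return tf.layers.average_pooling2d(inputs=x, pool_size=pool_size,
                                       strides=stride, padding=padding)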

1 Reply


Without any regularization, both validation and training loss (MSE) decrease. The training loss drops faster though, resulting in some overfitting of the final model.

This is not overfitting.

Overfitting starts when your validation loss starts increasing, while your training loss continues decreasing; here is its telltale signature:

[Figure: training error keeps decreasing while validation error reaches a minimum and then starts to rise.]

The image is adapted from the Wikipedia entry on overfitting; different things may lie on the horizontal axis, e.g. the depth or number of boosted trees, the number of neural network training iterations, etc.
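
To make the signature concrete, here is a toy sketch (made-up numbers, nothing to do with your model) of how that turning point shows up in a loss history:

train_loss = [0.30, 0.22, 0.17, 0.14, 0.12, 0.10, 0.09]   # keeps decreasing
val_loss   = [0.32, 0.25, 0.21, 0.19, 0.20, 0.22, 0.24]   # turns upward

overfit_start = None
for epoch in range(1, len(val_loss)):
    # overfitting: validation loss rises while training loss still falls
    if val_loss[epoch] > val_loss[epoch - 1] and train_loss[epoch] < train_loss[epoch - 1]:
        overfit_start = epoch
        break

print(overfit_start)  # 4: from here on, more training only widens the gap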

The (generally expected) difference between training and validation loss is something completely different, called the generalization gap:

An important concept for understanding generalization is the generalization gap, i.e., the difference between a model’s performance on training data and its performance on unseen data drawn from the same distribution.

where, practically speaking, the validation data is indeed unseen data.
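
In code terms the gap is nothing more than a difference of two numbers (toy values, purely for illustration):

train_mse = 0.08   # loss on the data the model was fitted on
val_mse = 0.13     # loss on unseen data from the same distribution
generalization_gap = val_mse - train_mse   # 0.05: expected, and not overfitting by itself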

So for me it seems like dropout is not really working.

That can very well be the case: dropout is not expected to work for every model and every problem.

