Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share

tensorflow - Unstable training in CNN

I am currently working on an image classification task, training a CNN on up to 81,000 images. The validation loss and validation accuracy fluctuate heavily between epochs, even though I have applied data augmentation to the training set. Below are the loss and accuracy results; the same pattern continues all the way through 100 epochs. Should I change my regularization method or the architecture of my network?

model.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 128, 128, 32)      896       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 64, 32)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 64, 64, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 64, 64, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 32, 32, 64)        0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 32, 32, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 32, 32, 64)        36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64)        0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 16, 16, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 16, 16, 128)       73856     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 8, 8, 128)         0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 8, 8, 128)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 8192)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               4194816   
_________________________________________________________________
dropout_5 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 162)               83106     
=================================================================
Total params: 4,408,098
Trainable params: 4,408,098
Non-trainable params: 0



Epoch 1/100
2430/2430 [==============================] - 2817s 1s/step - loss: 4.0652 - accuracy: 0.1137 - val_loss: 2.5681 - val_accuracy: 0.3385
Epoch 2/100
2430/2430 [==============================] - 2263s 931ms/step - loss: 2.1216 - accuracy: 0.4463 - val_loss: 2.1476 - val_accuracy: 0.5516
Epoch 3/100
2430/2430 [==============================] - 2250s 926ms/step - loss: 1.4840 - accuracy: 0.5960 - val_loss: 1.4907 - val_accuracy: 0.5631
Epoch 4/100
2430/2430 [==============================] - 2705s 1s/step - loss: 1.1820 - accuracy: 0.6690 - val_loss: 1.0003 - val_accuracy: 0.6717
Epoch 5/100
2430/2430 [==============================] - 2470s 1s/step - loss: 0.9978 - accuracy: 0.7211 - val_loss: 0.7172 - val_accuracy: 0.7038
Epoch 6/100
2430/2430 [==============================] - 2850s 1s/step - loss: 0.8731 - accuracy: 0.7522 - val_loss: 0.7637 - val_accuracy: 0.7460
Epoch 7/100
2430/2430 [==============================] - 2819s 1s/step - loss: 0.7883 - accuracy: 0.7748 - val_loss: 0.7909 - val_accuracy: 0.7278
Epoch 8/100
2430/2430 [==============================] - 2725s 1s/step - loss: 0.7235 - accuracy: 0.7939 - val_loss: 0.7154 - val_accuracy: 0.7369
Epoch 9/100
2430/2430 [==============================] - 2642s 1s/step - loss: 0.6703 - accuracy: 0.8062 - val_loss: 0.6727 - val_accuracy: 0.7158
Epoch 10/100
2430/2430 [==============================] - 2673s 1s/step - loss: 0.6331 - accuracy: 0.8163 - val_loss: 0.9074 - val_accuracy: 0.7794
Epoch 11/100
2430/2430 [==============================] - 2517s 1s/step - loss: 0.5998 - accuracy: 0.8283 - val_loss: 0.3628 - val_accuracy: 0.8017
Epoch 12/100
2430/2430 [==============================] - 2537s 1s/step - loss: 0.5726 - accuracy: 0.8366 - val_loss: 0.3375 - val_accuracy: 0.7677
Epoch 13/100
2430/2430 [==============================] - 2788s 1s/step - loss: 0.5540 - accuracy: 0.8380 - val_loss: 0.9867 - val_accuracy: 0.7475
Epoch 14/100
2430/2430 [==============================] - 2575s 1s/step - loss: 0.5289 - accuracy: 0.8467 - val_loss: 1.2910 - val_accuracy: 0.7871
Epoch 15/100
2430/2430 [==============================] - 2720s 1s/step - loss: 0.5085 - accuracy: 0.8522 - val_loss: 0.4738 - val_accuracy: 0.8069
Epoch 16/100
2430/2430 [==============================] - 2880s 1s/step - loss: 0.4929 - accuracy: 0.8563 - val_loss: 0.3417 - val_accuracy: 0.8237
Epoch 17/100
2430/2430 [==============================] - 2587s 1s/step - loss: 0.4900 - accuracy: 0.8571 - val_loss: 0.3708 - val_accuracy: 0.8212
Epoch 18/100
2430/2430 [==============================] - 2603s 1s/step - loss: 0.4826 - accuracy: 0.8600 - val_loss: 0.9994 - val_accuracy: 0.7801
Epoch 19/100
2430/2430 [==============================] - 2792s 1s/step - loss: 0.4728 - accuracy: 0.8630 - val_loss: 0.4388 - val_accuracy: 0.8108
Epoch 20/100
2430/2430 [==============================] - 2450s 1s/step - loss: 0.4510 - accuracy: 0.8700 - val_loss: 0.6080 - val_accuracy: 0.7988
Epoch 21/100
2430/2430 [==============================] - 2516s 1s/step - loss: 0.4571 - accuracy: 0.8666 - val_loss: 0.4918 - val_accuracy: 0.7780


1 Reply


This looks to me like fairly normal CNN training. Your model is settling into the neighborhood of a (local) minimum of the loss.

The fluctuation means the optimizer is bouncing back and forth across the optimal parameter values without ever landing on them exactly, which usually indicates that your learning rate is too high.

If you plan on training for a long time, you can simply lower the learning rate and increase the number of epochs, or, better, use a learning rate scheduler.

Exponential decay in particular could work well here.
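As a rough sketch of what exponential decay computes (the function name and parameter values below are made up for illustration, not taken from the question):

```python
def exponential_decay(initial_lr, decay_rate, decay_steps, step):
    """Learning rate after `step` optimizer steps:
    lr(step) = initial_lr * decay_rate ** (step / decay_steps)."""
    return initial_lr * decay_rate ** (step / decay_steps)

# With 2430 steps per epoch (as in the logs above), decay_steps=2430
# shrinks the rate by 10% each epoch:
print(exponential_decay(0.01, 0.9, 2430, 0))      # 0.01
print(exponential_decay(0.01, 0.9, 2430, 2430))   # ~0.009
```

Early epochs still take large steps, while later epochs take progressively smaller ones, which damps exactly the kind of bouncing seen in the validation metrics.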

If you are using TensorFlow/Keras, take a look at tf.keras.optimizers.schedules.ExponentialDecay.

You can define the schedule (the values below are illustrative) and pass it to the optimizer when you compile your model:

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01,  # illustrative starting rate
    decay_steps=10000, decay_rate=0.9)

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr_schedule),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
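An alternative to a fixed schedule is to cut the learning rate only when the validation loss stops improving; Keras ships this as the ReduceLROnPlateau callback. The plain-Python sketch below mimics its core logic (the function name and loss values are made up for illustration):

```python
def reduce_lr_on_plateau(val_losses, initial_lr, factor=0.5, patience=3, min_lr=1e-6):
    """Return the learning rate after scanning a sequence of per-epoch
    validation losses.  The rate is multiplied by `factor` whenever the
    best loss seen so far has not improved for `patience` consecutive
    epochs, mirroring Keras's ReduceLROnPlateau callback."""
    lr, best, wait = initial_lr, float('inf'), 0
    for loss in val_losses:
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lr

# Loss plateaus after epoch 2, so the rate is halved once:
print(reduce_lr_on_plateau([1.0, 0.9, 0.95, 0.96, 0.97], 0.01))  # 0.005
```

This reacts to the actual training curve instead of decaying on a fixed timetable, which can be more forgiving when you don't know good decay parameters in advance.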
