
python - Why is accuracy from fit_generator different to that from evaluate_generator in Keras?

What I do:

  • I am training a pre-trained CNN with Keras fit_generator(). This produces evaluation metrics (loss, acc, val_loss, val_acc) after each epoch. After training the model, I produce evaluation metrics (loss, acc) with evaluate_generator().

What I expect:

  • If I train the model for one epoch, I would expect that the metrics obtained with fit_generator() and evaluate_generator() are the same. They both should derive the metrics based on the entire dataset.
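
    Note that both calls compute steps with integer division, so the trailing partial batch is dropped; strictly speaking, neither call sees the entire dataset unless the sample count is an exact multiple of the batch size. A minimal sketch with hypothetical sizes:

    samples, nBatches = 1000, 32        # hypothetical dataset size and batch size
    steps = samples // nBatches         # 31 full steps
    print(steps * nBatches)             # 992 -> 8 of the 1000 samples are never seen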

What I observe:

  • Both loss and acc differ between fit_generator() and evaluate_generator() (screenshot of the differing metric values omitted).

What I don't understand:

  • Why the accuracy obtained with fit_generator() differs from that obtained with evaluate_generator()

My code:

def generate_data(path, imagesize, nBatches):
    datagen = ImageDataGenerator(rescale=1./255)
    generator = datagen.flow_from_directory(
        directory=path,                                     # path to the target directory
        target_size=(imagesize,imagesize),                  # dimensions to which all images found will be resized
        color_mode='rgb',                                   # whether the images will be converted to have 1, 3, or 4 channels
        classes=None,                                       # optional list of class subdirectories
        class_mode='categorical',                           # type of label arrays that are returned
        batch_size=nBatches,                                # size of the batches of data
        shuffle=True)                                       # whether to shuffle the data
    return generator
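
Since shuffle=True and no seed is set here, each pass over the generator reshuffles the data. A small sketch (hypothetical path and sizes) of the behaviour I would expect from flow_from_directory():

    gen = generate_data('data/train', 224, 32)   # hypothetical path and sizes
    x1, y1 = next(gen)                           # first batch of a pass
    gen.reset()                                  # rewind to the start of the data
    x2, y2 = next(gen)                           # shuffle=True reshuffles, so this batch generally differs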

[...]

def train_model(model, nBatches, nEpochs, trainGenerator, valGenerator, resultPath):
    history = model.fit_generator(generator=trainGenerator,
                                  steps_per_epoch=trainGenerator.samples//nBatches,     # total number of steps (batches of samples)
                                  epochs=nEpochs,                   # number of epochs to train the model
                                  verbose=2,                        # verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch
                                  callbacks=None,                   # keras.callbacks.Callback instances to apply during training
                                  validation_data=valGenerator,     # generator or tuple on which to evaluate the loss and any model metrics at the end of each epoch
                                  validation_steps=valGenerator.samples//nBatches,   # number of steps (batches of samples) to yield from validation_data generator before stopping at the end of every epoch
                                  class_weight=None,                # optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function
                                  max_queue_size=10,                # maximum size for the generator queue
                                  workers=32,                       # maximum number of processes to spin up when using process-based threading
                                  use_multiprocessing=True,         # whether to use process-based threading
                                  shuffle=False,                     # whether to shuffle the order of the batches at the beginning of each epoch
                                  initial_epoch=0)                  # epoch at which to start training
    print("%s: Model trained." % datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))

    # Save model
    modelPath = os.path.join(resultPath, datetime.now().strftime('%Y-%m-%d_%H-%M-%S') + '_modelArchitecture.h5')
    weightsPath = os.path.join(resultPath, datetime.now().strftime('%Y-%m-%d_%H-%M-%S') + '_modelWeights.h5')
    model.save(modelPath)
    model.save_weights(weightsPath)
    print("%s: Model saved." % datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
    return history, model
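
For completeness, the saved files can be restored later; a sketch with a hypothetical timestamped path:

    from keras.models import load_model
    modelRestored = load_model('results/<timestamp>_modelArchitecture.h5')  # hypothetical path; fill in the actual timestamp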

[...]

def evaluate_model(model, generator):
    score = model.evaluate_generator(generator=generator,               # generator yielding tuples
                                     steps=generator.samples//nBatches) # number of steps (batches of samples) to yield from generator before stopping; nBatches is taken from the enclosing script here

    print("%s: Model evaluated:"
          "\n Loss: %.3f"
          "\n Accuracy: %.3f" %
          (datetime.now().strftime('%Y-%m-%d_%H-%M-%S'),
           score[0], score[1]))

[...]

def main():
    # Create model
    modelUntrained = create_model(imagesize, nBands, nClasses)

    # Prepare training and validation data
    trainGenerator = generate_data(imagePathTraining, imagesize, nBatches)
    valGenerator = generate_data(imagePathValidation, imagesize, nBatches)

    # Train and save model
    history, modelTrained = train_model(modelUntrained, nBatches, nEpochs, trainGenerator, valGenerator, resultPath)

    # Evaluate on validation data
    print("%s: Model evaluation (valX, valY):" % datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
    evaluate_model(modelTrained, valGenerator)

    # Evaluate on training data
    print("%s: Model evaluation (trainX, trainY):" % datetime.now().strftime('%Y-%m-%d_%H-%M-%S'))
    evaluate_model(modelTrained, trainGenerator)

Update

I found some sites that report on this issue.

I tried following some of their suggested solutions, so far without success: acc and loss still differ between fit_generator() and evaluate_generator(), even when using the exact same data, generated with the same generator, for training and validation. Here is what I tried:

  • statically setting the learning_phase for the entire script or before adding new layers to the pre-trained ones
    K.set_learning_phase(0) # testing  
    K.set_learning_phase(1) # training
  • unfreezing all batch normalization layers from the pre-trained model (a verification sketch follows after this list)
    for i in range(len(model.layers)):
        if str.startswith(model.layers[i].name, 'bn'):
            model.layers[i].trainable=True
  • not adding dropout or batch normalization as untrained layers
    # Create pre-trained base model
    basemodel = ResNet50(include_top=False,                     # exclude final pooling and fully connected layer in the original model
                         weights='imagenet',                    # pre-training on ImageNet
                         input_tensor=None,                     # optional tensor to use as image input for the model
                         input_shape=(imagesize,                # shape tuple
                                      imagesize,
                                      nBands),
                         pooling=None,                          # output of the model will be the 4D tensor output of the last convolutional layer
                         classes=nClasses)                      # number of classes to classify images into

    # Create new untrained layers
    x = basemodel.output
    x = GlobalAveragePooling2D()(x)                             # global spatial average pooling layer
    x = Dense(1024, activation='relu')(x)                       # fully-connected layer
    y = Dense(nClasses, activation='softmax')(x)                # softmax layer ensuring the class probabilities sum to 1

    # Create model combining pre-trained base model and new untrained layers
    model = Model(inputs=basemodel.input,
                  outputs=y)

    # Freeze weights on pre-trained layers
    for layer in basemodel.layers:
        layer.trainable = False

    # Define learning optimizer
    learningRate = 0.01
    optimizerSGD = optimizers.SGD(lr=learningRate,              # learning rate.
                                  momentum=0.9,                 # parameter that accelerates SGD in the relevant direction and dampens oscillations
                                  decay=learningRate/nEpochs,   # learning rate decay over each update
                                  nesterov=True)                # whether to apply Nesterov momentum
    # Compile model
    model.compile(optimizer=optimizerSGD,                       # stochastic gradient descent optimizer
                  loss='categorical_crossentropy',              # objective function
                  metrics=['accuracy'],                         # metrics to be evaluated by the model during training and testing
                  loss_weights=None,                            # scalar coefficients to weight the loss contributions of different model outputs
                  sample_weight_mode=None,                      # sample-wise weights
                  weighted_metrics=None,                        # metrics to be evaluated and weighted by sample_weight or class_weight during training and testing
                  target_tensors=None)                          # tensors to use as the model's targets, fed with the target data during training
  • using different pre-trained CNNs as base model (VGG19, InceptionV3, InceptionResNetV2, Xception)
    from keras.applications.vgg19 import VGG19

    basemodel = VGG19(include_top=False,                    # exclude final pooling and fully connected layer in the original model
                      weights='imagenet',                   # pre-training on ImageNet
                      input_tensor=None,                    # optional tensor to use as image input for the model
                      input_shape=(imagesize,               # shape tuple
                                   imagesize,
                                   nBands),
                      pooling=None,                         # output of the model will be the 4D tensor output of the last convolutional layer
                      classes=nClasses)                     # number of classes to classify images into
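
    As a sanity check for the batch normalization point above, a type check is more robust than matching layer names, since the naming scheme differs between Keras versions; a minimal sketch:

    from keras.layers import BatchNormalization

    for layer in model.layers:
        if isinstance(layer, BatchNormalization):    # type check is robust to layer naming schemes
            print(layer.name, 'trainable =', layer.trainable)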

Please let me know if you have any ideas what might cause this difference.


1 Reply


I now managed to get the same evaluation metrics. I changed the following:

  • I set a seed in flow_from_directory(), as suggested by @Anakin:
def generate_data(path, imagesize, nBatches):
    datagen = ImageDataGenerator(rescale=1./255)
    generator = datagen.flow_from_directory(
        directory=path,                                     # path to the target directory
        target_size=(imagesize,imagesize),                  # dimensions to which all images found will be resized
        color_mode='rgb',                                   # whether the images will be converted to have 1, 3, or 4 channels
        classes=None,                                       # optional list of class subdirectories
        class_mode='categorical',                           # type of label arrays that are returned
        batch_size=nBatches,                                # size of the batches of data
        shuffle=True,                                       # whether to shuffle the data
        seed=42)                                            # random seed for shuffling and transformations
    return generator

  • I set use_multiprocessing=False in fit_generator(), following the warning that use_multiprocessing=True combined with multiple workers may duplicate your data:
history = model.fit_generator(generator=trainGenerator,
                                  steps_per_epoch=trainGenerator.samples//nBatches,     # total number of steps (batches of samples)
                                  epochs=nEpochs,                   # number of epochs to train the model
                                  verbose=2,                        # verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch
                                  callbacks=callback,               # keras.callbacks.Callback instances to apply during training
                                  validation_data=valGenerator,     # generator or tuple on which to evaluate the loss and any model metrics at the end of each epoch
                                  validation_steps=valGenerator.samples//nBatches,   # number of steps (batches of samples) to yield from validation_data generator before stopping at the end of every epoch
                                  class_weight=None,                # optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function
                                  max_queue_size=10,                # maximum size for the generator queue
                                  workers=1,                        # maximum number of processes to spin up when using process-based threading
                                  use_multiprocessing=False,        # whether to use process-based threading
                                  shuffle=False,                    # whether to shuffle the order of the batches at the beginning of each epoch
                                  initial_epoch=0)                  # epoch at which to start training

  • I unified my Python setup, as suggested in the Keras documentation on how to obtain reproducible results with Keras during development:
import numpy as np
import tensorflow as tf
import random as rn
from keras import backend as K

np.random.seed(42)
rn.seed(12345)
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                              inter_op_parallelism_threads=1)
tf.set_random_seed(1234)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)

  • Instead of rescaling input images with datagen = ImageDataGenerator(rescale=1./255), I now generate my data with:
from keras.applications.resnet50 import preprocess_input
datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
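
For reference, a rough sketch of what ResNet50's preprocess_input does (assuming the 'caffe'-style preprocessing this model family uses): it converts RGB to BGR and subtracts the per-channel ImageNet means, without rescaling to [0, 1], which is why dropping the rescale=1./255 step matters:

    import numpy as np

    def preprocess_like_resnet50(x):                 # hypothetical re-implementation, not the library code
        x = x[..., ::-1].astype('float32')           # RGB -> BGR
        x -= np.array([103.939, 116.779, 123.68])    # subtract per-channel ImageNet means (BGR order)
        return x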

With this, I managed to get similar accuracy and loss from fit_generator() and evaluate_generator(). Also, using the same data for training and testing now results in similar metrics. Reasons for the remaining differences are provided in the Keras documentation.
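
The main remaining reason, as the Keras FAQ explains, is that the training metrics reported by fit_generator() are a running average over batches computed while the weights are still changing, whereas evaluate_generator() runs every batch with the final weights. A worked toy example:

    batch_accs = [0.52, 0.61, 0.68, 0.74]               # hypothetical per-batch accuracies logged during one epoch
    fit_epoch_acc = sum(batch_accs) / len(batch_accs)   # 0.6375 -- roughly what fit_generator() reports
    # evaluate_generator() scores every batch with the final weights,
    # so its accuracy lands near the later batches (~0.74), not the mean.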

