I am trying to train two models in TensorFlow/Keras:
model_r
model_t
on different datasets, then combine their losses as
total_loss = loss_r + 0.02 * loss_t
and backpropagate total_loss through both models, but unfortunately I am getting the error
ValueError: No gradients provided for any variable
My code is:
'''
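# (For completeness: simplified stand-ins for my real models, losses,
#  optimizer, and data -- the actual models are bigger, but the structure
#  and the resulting error are the same.)
import tensorflow as tf

model_r = tf.keras.Sequential([tf.keras.layers.Dense(8, activation='relu'),
                               tf.keras.layers.Dense(1)])
model_t = tf.keras.Sequential([tf.keras.layers.Dense(8, activation='relu'),
                               tf.keras.layers.Dense(1)])
loss_fun_ref = tf.keras.losses.MeanSquaredError()
loss_fun_t = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam()
lambdaa = 0.02

x_ref_batch = tf.random.normal((16, 4))
y_ref_batch = tf.random.normal((16, 1))
x_target_batch = tf.random.normal((16, 4))
y_target_batch = tf.random.normal((16, 1))
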
# loss calculation for model_r
with tf.GradientTape() as tape:
    logits_ref = model_r(x_ref_batch, training=True)
    loss_value_ref = loss_fun_ref(y_ref_batch, logits_ref)
    print(f'loss_value_ref {loss_value_ref}')

# loss calculation for model_t
with tf.GradientTape() as tape1:
    logits_t = model_t(x_target_batch, training=True)
    loss_value_t = loss_fun_t(y_target_batch, logits_t)
    print(f'loss_value_t {loss_value_t}')

# combined loss
loss_total = loss_value_ref + lambdaa * loss_value_t
print(f'loss_total {loss_total}')

# gradients for model_r
grads_r = tape.gradient(loss_total, model_r.trainable_weights)
# weight update for model_r
optimizer.apply_gradients(zip(grads_r, model_r.trainable_weights))

# gradients for model_t
grads_t = tape1.gradient(loss_total, model_t.trainable_variables)
# weight update for model_t
optimizer.apply_gradients(zip(grads_t, model_t.trainable_variables))
'''
and the full error message is:
'''
ValueError: No gradients provided for any variable: ['dense_2/kernel:0', 'dense_2/bias:0', 'dense_3/kernel:0', 'dense_3/bias:0', 'dense_4/kernel:0', 'dense_4/bias:0', 'predictions/kernel:0', 'predictions/bias:0'].
'''
Does anyone have an idea how to take the losses of two different models, add them, and then backpropagate through the total loss in TensorFlow 2.x?
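My current guess is that the problem is that loss_total is built outside both tape contexts, so neither tape ever records the addition. Would recording everything on a single persistent tape, as in the untested sketch below, be the right approach? (The variable names are the same as above; this is just my idea, not a confirmed solution.)
'''
# Untested idea: run both forward passes AND the loss combination inside
# one persistent tape, so loss_total is differentiable w.r.t. both models.
with tf.GradientTape(persistent=True) as tape:
    logits_ref = model_r(x_ref_batch, training=True)
    loss_value_ref = loss_fun_ref(y_ref_batch, logits_ref)
    logits_t = model_t(x_target_batch, training=True)
    loss_value_t = loss_fun_t(y_target_batch, logits_t)
    loss_total = loss_value_ref + lambdaa * loss_value_t  # now on the tape

# persistent=True allows calling tape.gradient() more than once
grads_r = tape.gradient(loss_total, model_r.trainable_variables)
grads_t = tape.gradient(loss_total, model_t.trainable_variables)
optimizer.apply_gradients(zip(grads_r, model_r.trainable_variables))
optimizer.apply_gradients(zip(grads_t, model_t.trainable_variables))
del tape  # a persistent tape should be released manually
'''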
Question from:
https://stackoverflow.com/questions/65889976/tensorflow-adding-losses-of-two-models-and-then-apply-gradient-method