Hello sir, thanks for a very useful repository about GANs.
Would you mind clarifying something about the GAN loss? It's about the sign returned by `d_loss_fn` and `g_loss_fn` in this snippet:
```python
import tensorflow as tf

def get_loss_fn():
    # Note: despite the "_logits" names, tf.math.log is applied directly,
    # so these expect sigmoid probabilities; the 1e-10 avoids log(0).
    def d_loss_fn(real_logits, fake_logits):
        return -tf.reduce_mean(tf.math.log(real_logits + 1e-10) + tf.math.log(1. - fake_logits + 1e-10))

    def g_loss_fn(fake_logits):
        return -tf.reduce_mean(tf.math.log(fake_logits + 1e-10))

    return d_loss_fn, g_loss_fn
```
Is this just binary cross-entropy? If it is, can I use the loss defined in DCGAN instead?
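For example, here's a rough sketch (my own assumption, not taken from your code) of how I imagine the same losses would look with `tf.keras.losses.BinaryCrossentropy`, along the lines of the TensorFlow DCGAN tutorial, except with `from_logits=False` since your functions take sigmoid probabilities:

```python
import tensorflow as tf

# Sketch under my assumptions: inputs are sigmoid probabilities, hence from_logits=False.
bce = tf.keras.losses.BinaryCrossentropy(from_logits=False)

def d_loss_bce(real_probs, fake_probs):
    real_loss = bce(tf.ones_like(real_probs), real_probs)    # -mean(log(D(x)))
    fake_loss = bce(tf.zeros_like(fake_probs), fake_probs)   # -mean(log(1 - D(G(z))))
    return real_loss + fake_loss

def g_loss_bce(fake_probs):
    return bce(tf.ones_like(fake_probs), fake_probs)          # -mean(log(D(G(z))))
```

If I'm right, `d_loss_bce` and your `d_loss_fn` should compute the same value up to the `1e-10` term versus Keras' internal epsilon.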
And you put the minus sign there to make the return value positive, am I correct?
I find your implementation to be the closest to the ones in the papers, but it differs slightly in the sign of the value.
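To spell out what I mean by the sign (my own rendering of the original GAN objective, so please correct me if I have it wrong): the paper has the discriminator maximizing

$$
\max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],
$$

so negating it gives losses to minimize, which seems to be what your code does:

$$
L_D = -\Big(\mathbb{E}\big[\log D(x)\big] + \mathbb{E}\big[\log\big(1 - D(G(z))\big)\big]\Big),
\qquad
L_G = -\,\mathbb{E}\big[\log D(G(z))\big].
$$

(I realize $L_G$ here is the non-saturating form rather than the $+\mathbb{E}[\log(1 - D(G(z)))]$ term from the minimax game.)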