Working of GANs #7
Comments
As far as I understand, the correct labels are needed for training the generator too: in order to update its weights in a meaningful way, the generator needs a loss that can be backpropagated through the GAN. That loss is obtained by feeding the generator's output into the discriminator; the discriminator's output then gives you the loss you backpropagate to update the generator's weights. To do that, we need the correct labels. This is what I've understood of the whole training process.
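A minimal Keras-style sketch of the mechanism described above (not the repo's actual code; the toy layer sizes, `latent_dim`, and optimizer choices are assumptions). It shows how the generator's gradient signal comes from the discriminator's output on generated samples:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

latent_dim = 100  # assumed size of the generator's noise input

# Hypothetical toy models standing in for the notebook's generator/discriminator.
generator = Sequential([
    Dense(256, activation='relu', input_dim=latent_dim),
    Dense(784, activation='tanh'),
])
discriminator = Sequential([
    Dense(256, activation='relu', input_dim=784),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(loss='binary_crossentropy', optimizer='adam')

# Stacked model: noise -> generator -> discriminator -> probability "real".
# The discriminator is frozen here so that training this combined model
# only updates the generator's weights.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy', optimizer='adam')

# The binary cross-entropy on the discriminator's output is the loss that
# gets backpropagated through the whole stack into the generator.
noise = np.random.normal(0, 1, (32, latent_dim))
gan.train_on_batch(noise, np.ones((32, 1)))
```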
@engharat The discriminator should become good at telling the original images apart from the fake images, whereas the generator should learn to fool the discriminator, i.e. make it produce wrong outputs. When we train the discriminator alone, we use the correct labels, but when we train the generator (with the discriminator weights frozen), I think we should ideally train it with the opposite labels, since that pushes it to modify its weights so that, given the current discriminator, it generates the samples the discriminator is most wrong about (i.e. it gets fooled well).
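A hedged sketch of one training iteration under the two-phase scheme described in this comment, reusing `generator`, `discriminator`, and `gan` from the previous snippet (where `gan` was compiled with the discriminator frozen); `real_batch` is a placeholder for a batch of real images:

```python
import numpy as np

latent_dim = 100
batch_size = 32
real_batch = np.random.uniform(-1, 1, (batch_size, 784))  # stand-in for a real (e.g. MNIST) batch

noise = np.random.normal(0, 1, (batch_size, latent_dim))
fake_batch = generator.predict(noise)

# Phase 1: the discriminator is trained with the correct labels
# (real images -> 1, generated images -> 0).
discriminator.train_on_batch(real_batch, np.ones((batch_size, 1)))
discriminator.train_on_batch(fake_batch, np.zeros((batch_size, 1)))

# Phase 2: the generator is trained through the stacked model with the
# "opposite" label (generated samples labelled as real), so its weights
# move in the direction that best fools the current discriminator.
gan.train_on_batch(noise, np.ones((batch_size, 1)))
```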
I'm a little new to Generative Adversarial Networks and was wondering why the samples from iPython notebook 2 are worse than those from iPython notebook 1. Another question I had: when we train the generator, shouldn't we train it in the opposite manner to how we train the discriminator (i.e. use the opposite output labels when training the generator, but the correct labels when training the discriminator)? Thanks!