Working of GANs #7

Open
erilyth opened this issue Dec 20, 2016 · 3 comments

erilyth commented Dec 20, 2016

I'm a little new to Generative Adversarial Networks and was wondering why the samples from iPython notebook2 are worse than those from iPython notebook1. Another question I had: when we are training the generator, shouldn't we train it in the opposite manner from how we train the discriminator (i.e., use opposite output labels when training the generator, but the correct labels when training the discriminator)? Thanks!

@engharat

As far as I understand, the correct labels are needed for training the generator too: to update the generator's weights in a meaningful way, you need a loss that can be backpropagated through the whole GAN. That loss is obtained by feeding the generator's output into the discriminator's input; the discriminator then produces an output value, which gives you the loss you can backpropagate. In order to do that, we need correct labels! This is what I've understood of the whole training process.
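
For reference, here is a minimal sketch of the stacked-model setup described above, written against the Keras Sequential API (the layer sizes and variable names are illustrative, not taken from this repo's notebooks):

```python
import numpy as np
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Dense

latent_dim, img_dim = 100, 784  # illustrative sizes, not from the notebooks

# Toy generator: noise vector -> flattened "image"
generator = Sequential([
    Input(shape=(latent_dim,)),
    Dense(128, activation='relu'),
    Dense(img_dim, activation='tanh'),
])

# Toy discriminator: image -> probability that it is real
discriminator = Sequential([
    Input(shape=(img_dim,)),
    Dense(128, activation='relu'),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Freeze the discriminator inside the stacked model so that only the
# generator's weights are updated when the combined GAN is trained.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')

noise = np.random.normal(size=(32, latent_dim))
# The loss is computed at the discriminator's output and backpropagated
# through the (frozen) discriminator into the generator.
loss = gan.train_on_batch(noise, np.ones((32, 1)))
```

Freezing the discriminator before compiling the stacked model is what makes this work: the loss still flows backward through the discriminator's layers, but only the generator's weights change.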


erilyth commented Jan 30, 2017

@engharat The discriminator should become good at telling the original images from the fake images, whereas the generator should learn to fool the discriminator, i.e., make it output the wrong answer. When we train the discriminator alone, we use the right labels, but when we train the generator (with the discriminator weights frozen), I think ideally we would train it with the opposite labels, since that pushes its weights in the direction where, given the current discriminator, it generates the samples the discriminator gets most wrong (i.e., the discriminator is fooled well).
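
Putting the two phases side by side, a hedged sketch of one training iteration under this scheme might look like the following (it assumes Keras-style models like those in the earlier sketch; all names are illustrative):

```python
import numpy as np

def train_step(real_images, generator, discriminator, gan, latent_dim=100):
    batch = real_images.shape[0]
    noise = np.random.normal(size=(batch, latent_dim))
    fake_images = generator.predict(noise)

    # Phase 1: train the discriminator with the *correct* labels
    # (real images -> 1, generated images -> 0).
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch, 1)))

    # Phase 2: train the generator through the stacked model, in which the
    # discriminator's weights are frozen, using the *opposite* labels:
    # the fakes are labelled 1 ("real"), so the generator's gradients
    # point toward fooling the discriminator.
    g_loss = gan.train_on_batch(noise, np.ones((batch, 1)))
    return d_loss_real, d_loss_fake, g_loss
```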

@vforvinay

@erilyth See this line. Here, before training the GAN as a whole, we assign the output labels as all 1s in the 1 column, which labels everything as real. So yes, when training the GAN, we use the opposite label.
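
The linked line isn't quoted in this thread, but the label assignment being described would typically look like this, assuming two-column one-hot labels where column 1 means "real" (the notebook's actual code may differ):

```python
import numpy as np

batch_size = 32
y = np.zeros((batch_size, 2))  # two-column one-hot labels: [fake, real]
y[:, 1] = 1                    # all 1s in the "1" column, i.e. all "real"
# gan.train_on_batch(noise, y) # the generator is trained against "real" labels
```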
