make_trainable() does not freeze weights #10
Confirmed that:

```
>>> discriminator.predict(X)
array([[ 0.52295244,  0.47704756],
       [ 0.54938567,  0.45061436]], dtype=float32)
>>> make_trainable(discriminator, False)
>>> discriminator.train_on_batch(X, y)
>>> discriminator.predict(X)
array([[ 0.4992643 ,  0.50073564],
       [ 0.64071965,  0.35928035]], dtype=float32)
```
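A small helper (hypothetical, not part of the thread) makes this kind of check explicit: snapshot the weights before `train_on_batch`, then compare afterwards instead of eyeballing `predict()` outputs.

```python
import numpy as np

def weights_changed(before, after):
    """True if any corresponding weight arrays differ between two snapshots.

    Intended usage with a Keras-like model (assumption, not run here):
        before = [w.copy() for w in model.get_weights()]
        model.train_on_batch(X, y)
        changed = weights_changed(before, model.get_weights())
    """
    return any(not np.array_equal(a, b) for a, b in zip(before, after))
```

If the layers were actually frozen, `weights_changed` would return `False` after the training step.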
I think you are right. The re-compilation is what actually freezes the weights.
Yes, I make a call to
One thing I noticed, but am not totally sure of, is that adding this line makes the discriminator part of the GAN untrainable before the GAN is compiled. That way the discriminator model itself stays trainable, while the discriminator inside the GAN is not, which is exactly what we want.
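If I understand that right, the intended ordering can be sketched like this (toy classes, not Keras; names like `disc_layers` are illustrative). The point is that each model snapshots its trainable set at its own compile time:

```python
class Layer:
    def __init__(self):
        self.trainable = True

class Model:
    def __init__(self, layers):
        self.layers = layers

    def compile(self):
        # Trainable set is fixed at compile time, as in Keras.
        self._trainable = [l for l in self.layers if l.trainable]

disc_layers = [Layer(), Layer()]
discriminator = Model(disc_layers)
discriminator.compile()          # its snapshot includes both layers

for l in disc_layers:
    l.trainable = False          # freeze BEFORE compiling the stacked GAN

gen_layer = Layer()
gan = Model([gen_layer] + disc_layers)
gan.compile()                    # GAN snapshot excludes the frozen layers

print(len(discriminator._trainable), len(gan._trainable))  # 2 1
```

So the discriminator trains all its layers when called on its own, while the stacked GAN only updates the generator layer, without any recompiling during the training loop.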
With Keras 2.0.4 I tried
You define a function `make_trainable()` which sets every layer's `trainable` attribute to either `True` or `False`, and you call this repeatedly during training. However, setting `keras.layers.Layer.trainable` doesn't have any effect unless you follow it up by recompiling the model. So I'm pretty sure that your layers are unfrozen during the entire training process, since you only compile once. I'll take a stab at verifying this shortly.
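For what it's worth, the mechanism is easy to model with a toy sketch (plain Python, not Keras; the snapshot-at-compile behavior is the assumption being illustrated):

```python
class Layer:
    def __init__(self, w=0.0):
        self.w = w
        self.trainable = True

class Model:
    def __init__(self, layers):
        self.layers = layers
        self._trainable_weights = None

    def compile(self):
        # Like Keras, the trainable set is snapshotted at compile time;
        # later changes to layer.trainable are ignored until recompile.
        self._trainable_weights = [l for l in self.layers if l.trainable]

    def train_on_batch(self):
        for l in self._trainable_weights:
            l.w -= 1.0  # stand-in for a gradient step

def make_trainable(model, flag):
    for l in model.layers:
        l.trainable = flag

model = Model([Layer(), Layer()])
model.compile()
make_trainable(model, False)  # too late: snapshot already taken
model.train_on_batch()
print([l.w for l in model.layers])  # [-1.0, -1.0] -- weights still moved

model.compile()               # recompile AFTER freezing
model.train_on_batch()
print([l.w for l in model.layers])  # [-1.0, -1.0] -- now actually frozen
```

The first training step still updates the weights even though `trainable` is already `False`; only after recompiling does the flag take effect.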