Layer weights change despite trainable flag set to false #201

I have a network where some of the layers have the trainable flag set to false. I check the weight values before and after calling fit, and the weight values change. Even when regularization is disabled, they change. Could this be a bug, or am I missing something? Thanks

Jeff
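For reference, a minimal sketch of the kind of before/after check described above; the names `net`, `X`, `y`, and the layer name `'output'` are hypothetical and assume an initialized nolearn NeuralNet:

```python
import numpy as np
import lasagne

# snapshot all parameter values before training
# ('output' stands in for your actual output layer's name)
before = lasagne.layers.get_all_param_values(net.layers_['output'])

net.fit(X, y)

# if trainable=False is respected, frozen layers should come back unchanged
after = lasagne.layers.get_all_param_values(net.layers_['output'])
for i, (b, a) in enumerate(zip(before, after)):
    print(i, 'unchanged' if np.allclose(b, a) else 'CHANGED')
```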
Also, the message that is displayed when training the network, "Neural Network with 458656 learnable parameters", shows the total number of parameters, not the number of parameters that are actually being trained (it prints the same number even when trainable=False is set on some layers). This is the fix; I'm not sure whether the current behavior is intended or a bug:

```python
import operator
from functools import reduce  # needed on Python 3

@staticmethod
def _get_greeting(nn):
    # count only the parameters tagged as trainable
    shapes = [param.get_value().shape for param in
              nn.get_all_params(trainable=True)]
    nparams = reduce(operator.add, [reduce(operator.mul, shape)
                                    for shape in shapes])
    message = ("# Neural Network with {} learnable parameters"
               "\n".format(nparams))
    return message
```
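To illustrate the counting logic with made-up shapes (not taken from the issue): a dense layer's weight matrix of shape (100, 50) plus its bias of shape (50,) contributes 100 * 50 + 50 = 5050 parameters:

```python
import operator
from functools import reduce

shapes = [(100, 50), (50,)]  # hypothetical W and b shapes
nparams = reduce(operator.add,
                 [reduce(operator.mul, shape) for shape in shapes])
print(nparams)  # 5050
```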
About your first problem, I don't know why the weights change; nolearn should not interfere with that. Have you tried what happens if you use the same layers but train without nolearn? Regarding the second problem, I guess it depends on your definition of "learnable". I believe the main distinction is "learnable" parameters vs. hyper-parameters such as the learning rate. Maybe there should be a second sentence with the number of trainable parameters, as you suggested? You could make a pull request and see what @dnouri thinks about it.
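A rough sketch of what training the same layers without nolearn could look like, to isolate the problem; `output_layer`, `X`, and `y` are hypothetical, and the squared-error loss and Nesterov momentum updates are just placeholder choices:

```python
import theano
import theano.tensor as T
import lasagne

# symbolic inputs for a simple regression-style objective
input_var = T.matrix('inputs')
target_var = T.matrix('targets')

prediction = lasagne.layers.get_output(output_layer, input_var)
loss = lasagne.objectives.squared_error(prediction, target_var).mean()

# only parameters tagged 'trainable' end up in the update rule,
# so frozen layers should stay untouched here
params = lasagne.layers.get_all_params(output_layer, trainable=True)
updates = lasagne.updates.nesterov_momentum(
    loss, params, learning_rate=0.01, momentum=0.9)

train_fn = theano.function([input_var, target_var], loss, updates=updates)
train_fn(X, y)
```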
I updated the 'greeting' in the way that @caleytown suggested. I think it's what people expect.
@caleytown If you've already initialized your network, e.g. you've trained it with fit, you need to re-initialize it for the trainable change to take effect, since the compiled train function bakes in the list of trainable parameters. Something like this should do:

```python
# after some training, set some params to be not trainable:
mylayer = ae.layers_['mylayer']
mylayer.params[mylayer.W].remove('trainable')
mylayer.params[mylayer.b].remove('trainable')

# now call initialize to recompile the optimizer, then fit:
ae._initialized = False
ae.initialize()
ae.fit(X, X_out)
```