Gradients not going backward #2
Hi, could you please provide your entire code? Thanks.
Sure:
```python
model2 = nn.Sequential(QuaternionLinear(784, 128), ...)

optimizer = optim.SGD(model.parameters(), lr=0.003, momentum=0.9)

for e in range(epochs):
    ...

print("\nTraining Time (in minutes) =", (time() - time0) / 60)
```
(part of the snippet was clipped out of the view)
So the first thing to do is to use autograd by calling QuaternionLinearAutograd instead of QuaternionLinear. Then, even though it should behave the same, I would suggest updating everything from the Pytorch Quaternion Neural Networks repository (which also contains more examples). QuaternionLinear is a layer optimized for memory usage and is about two times slower than QuaternionLinearAutograd, but it should still work. Let me know if QuaternionLinearAutograd solves the problem (please also update to the latest repository).
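For reference, a minimal sketch of the suggested swap. The import path is an assumption based on the layout of the Pytorch Quaternion Neural Networks repository; adjust it to wherever quaternion_layers.py lives in your checkout.

```python
import torch.nn as nn

# Assumed import path; adjust to where quaternion_layers.py sits in your copy of
# the Pytorch Quaternion Neural Networks repository.
from core_qnn.quaternion_layers import QuaternionLinearAutograd

# Same architecture as in the thread, but built from the autograd-based layer so
# that PyTorch's automatic differentiation handles the backward pass.
model = nn.Sequential(
    QuaternionLinearAutograd(784, 128),
    nn.LeakyReLU(),
    QuaternionLinearAutograd(128, 10),
    nn.LogSoftmax(dim=1),
)

# Quaternion layers treat features in groups of four (r, i, j, k), so sizes that
# are multiples of 4 (784, 128) map cleanly onto quaternions; it is worth checking
# how the layer handles the 10-way output, which is not a multiple of 4.
```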
Hello,
I am trying a simple MNIST network:
```python
model = nn.Sequential(QuaternionLinear(784, 128),
                      nn.LeakyReLU(),
                      QuaternionLinear(128, 10),
                      nn.LogSoftmax(dim=1))
```
However, the loss never decreases. I could not find any equations for the backpropagation of the gradients, but assuming the gradients are not going backward, is this expected? If not, how can I debug it? It would be really helpful if you could provide an example on a simple task (MNIST). Thanks!
Using optim.SGD with lr=0.003 and momentum=0.9, trained for 10 epochs.
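As a quick way to check whether gradients are actually flowing, here is a generic PyTorch sketch that runs a single batch through the model defined above and prints the gradient norm of every parameter. It assumes `model` is the Sequential network from the issue, uses a random batch in place of real MNIST images, and pairs the LogSoftmax output with NLLLoss as usual.

```python
import torch
import torch.nn as nn

# Standard loss to pair with nn.LogSoftmax; a random batch stands in for MNIST
# (64 flattened 28x28 digits with labels in 0..9).
criterion = nn.NLLLoss()
images = torch.randn(64, 784)
labels = torch.randint(0, 10, (64,))

model.zero_grad()                        # model is the network defined above
loss = criterion(model(images), labels)
loss.backward()

# If backpropagation reaches the quaternion layers, every parameter should have a
# non-None gradient with a non-zero norm; None (or all zeros) points to a broken
# graph, e.g. a layer whose backward pass is never invoked.
for name, param in model.named_parameters():
    if param.grad is None:
        print(f"{name}: no gradient")
    else:
        print(f"{name}: grad norm = {param.grad.norm().item():.6f}")
```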