OpenMMException when adding TorchForce to the system #134
Comments
Please let me know if you require any more information.
I believe this is an error with your model, in particular because of lines like this:

```python
pos = positions[::4].to("cpu")
...
y = self.encoder(x)[:,1].sum()
```

PyTorch does not like it when you run backwards only on a subset of the output. You can check the gradients outside of OpenMM:

```python
import torch

pos = torch.rand(10, 3)
box = torch.eye(3) * 10
model = torch.jit.load("model.pt")
pos.requires_grad_()
y = model(pos, box)
y.backward()  # compute gradients
print(pos.grad)
```

As a side note, the box is already passed to your model as a 3x3 PyTorch tensor; you should not need to convert it. You can extract its diagonal with `box.diag()`.
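For reference, here is a minimal sketch of how `pos.grad` can end up as `None` (my assumption about the failure mode): if the autograd graph from the returned energy back to the input positions is severed, for example by `.detach()`, a NumPy round trip, or rebuilding a tensor, then `backward()` never reaches the positions. The `w` parameter below is only there so `backward()` has something to differentiate and does not raise:

```python
import torch

pos = torch.rand(10, 3, requires_grad=True)
w = torch.ones((), requires_grad=True)  # stand-in for a model parameter

# .detach() severs the autograd graph between the energy and pos;
# converting to numpy or wrapping in torch.tensor(...) would do the same.
energy = w * (pos.detach() ** 2).sum()
energy.backward()

print(pos.grad)  # None: no gradient ever reaches pos
print(w.grad)    # the parameter still receives a gradient
```

TorchForce needs `pos.grad` to be populated in order to compute forces, so a `None` here is exactly the symptom the check above reveals.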
Thanks! I got your point. It does return a `NoneType`; that makes sense. I'll try to modify the model to operate on the whole system. But is there any trick to do certain operations on only a subset of the positions? Say I want only the oxygen atoms of my system!
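One possible trick, sketched here under the assumption that the model still receives the positions of the whole system: index the subset inside the model but keep the slice in the autograd graph. `backward()` then fills `positions.grad` for every atom, with zero rows for atoms the CV does not touch. The "oxygen" indices below are purely illustrative:

```python
import torch

torch.manual_seed(0)

# Full-system positions, as TorchForce would pass them to the model.
positions = torch.rand(9, 3, requires_grad=True)

# Hypothetical oxygen indices (every 3rd atom, for illustration only).
oxygen_idx = torch.arange(0, positions.shape[0], 3)
oxygens = positions[oxygen_idx]  # differentiable slice: graph stays intact

# Toy CV: sum of the unique pairwise O-O distances.
i, j = torch.triu_indices(len(oxygen_idx), len(oxygen_idx), offset=1)
cv = (oxygens[i] - oxygens[j]).norm(dim=1).sum()

cv.backward()
print(positions.grad[1])  # zero row: atom 1 is not a selected atom
print(positions.grad[0])  # non-zero: atom 0 contributes to the CV
```

Because the gradient exists for the whole positions tensor, the resulting force is simply zero on the unselected atoms, which is what TorchForce expects.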
Also, I did notice the
You want your TorchForce to act only on a subset of the system?

Yes. Thanks! I'll try that.
I am trying to use TorchForce to bias a simulation (a box full of water). The torch model that calculates the CV looks for the nearest neighbors of a reference water molecule (within a cutoff), then calculates the pairwise distances between them; this is my feature. When adding this jitted model to the OpenMM system, it throws an OpenMMException. The issue is probably related to the grad of the tensor I'm returning from my TorchForce model.
The model used in TorchForce:
The openmm simulation(with MetaD):
The error it throws:
My conda environment:
conda list:
I used mamba to install openmm-torch.

I suppose this is a question and not likely a bug. It would be really helpful if you could find where I am making a mistake!