Commit 2270256

Fix automatic CUDA graphing not working when requiring backwards (#120)
* Pass posTensor as an input argument to energyTensor.backward(). This instructs torch to compute gradients only with respect to the positions.
* Add comments
1 parent a0b51a9 commit 2270256
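
A minimal, self-contained sketch of the call this commit introduces, using the libtorch C++ API; the toy quadratic energy and tensor shapes are illustrative stand-ins, not the plugin's code:

    #include <torch/torch.h>
    #include <iostream>

    int main() {
        // Stand-ins for the plugin's posTensor and energyTensor.
        torch::Tensor pos = torch::randn({10, 3}, torch::requires_grad());
        torch::Tensor energy = pos.pow(2).sum();
        // backward(gradient, retain_graph, create_graph, inputs): an undefined
        // gradient tensor is fine because energy is a scalar, and passing pos
        // as `inputs` restricts gradient computation to the positions.
        energy.backward(torch::Tensor(), /*retain_graph=*/false, /*create_graph=*/false, /*inputs=*/pos);
        // The gradient is minus the forces; the plugin flips the sign later on.
        torch::Tensor forces = -pos.grad();
        std::cout << forces.sizes() << std::endl;
        return 0;
    }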

File tree

1 file changed: +5 -1 lines changed


platforms/cuda/src/CudaTorchKernels.cpp

Lines changed: 5 additions & 1 deletion
@@ -188,7 +188,11 @@ static void executeGraph(bool outputsForces, bool includeForces, torch::jit::scr
     energyTensor = module.forward(inputs).toTensor();
     // Compute force by backpropagating the PyTorch model
     if (includeForces) {
-        energyTensor.backward();
+        // CUDA graph capture sometimes fails if backwards is not explicitly requested w.r.t positions
+        // See https://github.com/openmm/openmm-torch/pull/120/
+        auto none = torch::Tensor();
+        energyTensor.backward(none, false, false, posTensor);
+        // This is minus the forces, we change the sign later on
         forceTensor = posTensor.grad().clone();
         // Zero the gradient to avoid accumulating it
         posTensor.grad().zero_();
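
For context, a hedged sketch of where that call sits during automatic CUDA graph capture, loosely modeled on executeGraph(); the stream handling and the captureEnergyGraph name are assumptions for illustration, not the plugin's exact code:

    #include <torch/script.h>
    #include <ATen/cuda/CUDAGraph.h>
    #include <c10/cuda/CUDAStream.h>
    #include <c10/cuda/CUDAGuard.h>

    // Capture a forward+backward pass into a CUDA graph (illustrative sketch;
    // posTensor is assumed to be a CUDA tensor with requires_grad set).
    void captureEnergyGraph(torch::jit::script::Module& module, torch::Tensor& posTensor) {
        // Graph capture must run on a non-default stream.
        c10::cuda::CUDAStream stream = c10::cuda::getStreamFromPool();
        c10::cuda::CUDAStreamGuard guard(stream);
        at::cuda::CUDAGraph graph;
        graph.capture_begin();
        torch::Tensor energyTensor = module.forward({posTensor}).toTensor();
        // Without the explicit `inputs` argument, autograd may schedule work
        // outside the capturing stream and abort the capture (see PR #120).
        energyTensor.backward(torch::Tensor(), false, false, posTensor);
        graph.capture_end();
        graph.replay(); // later replays re-run the recorded kernels
    }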

0 commit comments
