
Explicit gradients for an atomistic model #18

Open
@PicoCentauri

Description


When running a metatensor.atomistic.Model in a simulation engine, the model predicts the energy and returns it. Metatensor then calls backward on these energies with respect to the positions to obtain the forces. However, some models may want to provide explicit forces, in which case metatensor does not need to call backward. This applies to non-conservative force models, to models that have to break the computational graph, or simply to models that compute forces the old-fashioned way.

I see two ways to implement this:

  1. The model developer decides: the model adds a TensorBlock as a position gradient to the returned TensorMap. The metatensor engine interface recognizes this and skips the backward call.
  2. The user decides: we add an explicit_gradient option to ModelOutput. If the user sets this to true, the model knows that it should compute the gradients itself.
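From the engine's side, option 1 boils down to one check: if the energy block already carries a "positions" gradient, use it directly; otherwise fall back to backward. Here is a minimal sketch of that dispatch logic. The `TensorBlock` class and `forces_from_energy` function below are hypothetical stand-ins to illustrate the idea, not the real metatensor API.

```python
from dataclasses import dataclass, field


@dataclass
class TensorBlock:
    """Hypothetical stand-in for a metatensor block: values plus
    optional named gradients (e.g. "positions")."""
    values: list
    gradients: dict = field(default_factory=dict)


def forces_from_energy(energy: TensorBlock, backward):
    """Return forces, preferring an explicit gradient over backward."""
    if "positions" in energy.gradients:
        # The model provided explicit position gradients:
        # forces = -dE/dr, no backward call needed.
        grad = energy.gradients["positions"]
        return [-g for g in grad.values]
    # Conservative model: the engine computes gradients via backward.
    return backward(energy)


# Usage: a model that attached explicit position gradients.
energy = TensorBlock(values=[1.0])
energy.gradients["positions"] = TensorBlock(values=[0.5, -0.25])

forces = forces_from_energy(energy, backward=lambda e: None)
print(forces)  # [-0.5, 0.25]
```

Option 2 would move this decision earlier: the engine sets the flag in ModelOutput, and the model branches on it when building its output, so the engine never has to inspect the returned TensorMap.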
