Merge pull request #159 from luigibonati/committor_update
Committor tutorials update
EnricoTrizio authored Nov 7, 2024
2 parents 0364447 + d01260c commit 6576f08
Showing 5 changed files with 48 additions and 5 deletions.
6 changes: 4 additions & 2 deletions docs/notebooks/examples/ex_committor.ipynb
@@ -6,7 +6,9 @@
"source": [
"# Learning the committor for Alanine with distances as inputs\n",
"Reference paper: \n",
"_Kang, Trizio and Parrinello, [Nat Comput Sci](https://doi.org/10.1038/s43588-024-00645-0) (2024), [ArXiv](https://arxiv.org/abs/2401.05279)_\n",
"_Kang, Trizio and Parrinello, [Nat Comput Sci](https://doi.org/10.1038/s43588-024-00645-0) (2024), [ArXiv](https://arxiv.org/abs/2410.17029) (2024)_\n",
"\n",
"Prerequisite: Committor tutorial\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/luigibonati/mlcolvar/blob/committor/docs/notebooks/examples/ex_committor.ipynb)"
]
@@ -1158,7 +1160,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.0"
"version": "3.10.14"
}
},
"nbformat": 4,
30 changes: 28 additions & 2 deletions docs/notebooks/tutorials/cvs_committor.ipynb
@@ -7,7 +7,7 @@
"source": [
"# Learning the committor\n",
"Reference paper: \n",
"_Kang, Trizio and Parrinello, [Nat Comput Sci](https://doi.org/10.1038/s43588-024-00645-0) (2024), [ArXiv](https://arxiv.org/abs/2401.05279)_\n",
"_Kang, Trizio and Parrinello, [Nat Comput Sci](https://doi.org/10.1038/s43588-024-00645-0) (2024), [ArXiv](https://arxiv.org/abs/2410.17029) (2024)_\n",
"\n",
"[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/luigibonati/mlcolvar/blob/committor/docs/notebooks/tutorials/cvs_committor.ipynb)\n"
]
@@ -20,12 +20,38 @@
"\n",
"Given a system presenting two metastable states $A$ and $B$, the committor $q(\\mathbf{x})$ is the function that, for each configuration $\\mathbf{x}$, gives the probability that it will evolve to $B$ before passing through $A$.\n",
"\n",
"#### Learning the committor\n",
"One way to learn the committor function is to leverage the variational principle introduced by Kolmogorov: subject to the boundary conditions\n",
"$$ q(\\mathbf{x}_A) = 0 \\quad \\text{and} \\quad q(\\mathbf{x}_B) = 1$$\n",
"one minimizes the functional $K[q(\\mathbf{x})]$ of the committor\n",
"$$ K[q(\\mathbf{x})] \\quad=\\quad \\frac{1}{Z} \\int | \\nabla_u q(\\mathbf{x})|^2 e^{-\\beta U (\\mathbf{x})} d \\mathbf{x} \\quad=\\quad \\langle | \\nabla_u q(\\mathbf{x})|^2 \\rangle_{U (\\mathbf{x})}$$\n",
"where $\\nabla_u$ denotes the gradient with respect to the mass-scaled Cartesian coordinates, $Z$ is the partition function associated to the potential $U(\\mathbf{x})$, and the last term represents the ensemble average over the corresponding Boltzmann ensemble.\n",
" \n",
"\n",
"In practice, we parametrize the committor as a neural network (NN) $q_\\theta(\\mathbf{x})$ and minimize the variational functional to optimize it.\n",
"More in detail, we use some physical descriptors $\\mathbf{d}(\\mathbf{x})$ as inputs of the NN, obtain an output $z(\\mathbf{x})=NN(\\mathbf{d}(\\mathbf{x}))$, and apply to it a sigmoid-like activation function $\\sigma$ to impose the right functional form on the final committor $q(\\mathbf{x})=\\sigma(z(\\mathbf{x}))$.\n",
"\n",
"#### Kolmogorov bias potential\n",
"As most of the contribution to $K[q(\\mathbf{x})]$ comes from the transition state (TS) region, which is hard to sample in conventional MD runs, we introduced the TS-oriented **Kolmogorov bias potential** \n",
"$$ V_K = -\\frac{\\lambda}{\\beta} \\log(|\\nabla q(\\mathbf{x})|^2) $$\n",
"which allows sampling the TS region extensively, thus enabling the use of the variational principle. \n",
"\n",
"#### Effective committor-based CV\n",
"Even if the committor provides an *ideal* reaction coordinate, as it describes the reactive process from state A to state B, it is not a suitable *collective variable* for enhanced sampling.\n",
"This is because all the configurations in the metastable basins are degenerate along the committor, except for very tiny numerical differences, which makes it impossible to use with enhanced sampling algorithms such as OPES or Metadynamics.\n",
"\n",
"A solution to this issue is to use as a CV not the committor $q$ itself, but the *non-activated* function $z$, which encodes the same information but is suitable for an enhanced sampling setup.\n",
"\n",
"<center><img src=\"images/committor_cv.png\" width=\"800\" /></center>\n",
"\n",
"#### Extensive sampling along the committor-based CV\n",
"As we now have a good CV, we can combine the $V_K$ bias, which stabilizes the TS region and promotes its sampling, with a CV-based algorithm like OPES, which fills the basins and promotes transitions between them. In this way we can cover the whole phase space, sampling both the transition and metastable states extensively.\n",
"\n",
"<center><img src=\"images/OPES_VK.png\" width=\"800\" /></center>\n",
" "
]
},
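The parametrization and losses described in this cell can be sketched in a few lines of PyTorch. This is an illustrative sketch, not the mlcolvar API: the network architecture, the descriptors `d`, and the values of `lam` and `beta` are hypothetical, and for simplicity the gradient is taken with respect to the descriptors rather than the mass-scaled Cartesian coordinates.

```python
import torch

torch.manual_seed(0)

# z(x) = NN(d(x)): the "non-activated" output, usable as a CV
nn_z = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)

# hypothetical batch of physical descriptors d(x)
d = torch.rand(32, 4, requires_grad=True)
z = nn_z(d)
q = torch.sigmoid(z)  # committor estimate, bounded in [0, 1]

# per-configuration squared gradient norm |grad q|^2
grad_q = torch.autograd.grad(q.sum(), d, create_graph=True)[0]
grad_sq = grad_q.pow(2).sum(dim=1)

# variational functional K[q] estimated as an ensemble average
K = grad_sq.mean()

# Kolmogorov bias potential V_K = -(lambda/beta) * log(|grad q|^2)
lam, beta = 1.0, 1.0
V_K = -(lam / beta) * torch.log(grad_sq + 1e-12)
```

In a full training loop one would add penalty terms enforcing the boundary conditions $q(\mathbf{x}_A)=0$ and $q(\mathbf{x}_B)=1$ on labeled configurations before backpropagating through `K`.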
Binary file added docs/notebooks/tutorials/images/OPES_VK.png
Binary file added docs/notebooks/tutorials/images/committor_cv.png
17 changes: 16 additions & 1 deletion mlcolvar/core/transform/descriptors/coordination_numbers.py
@@ -60,6 +60,7 @@ def __init__(self,
self._group_A_size = len(group_A)
self.group_B = group_B
self._group_B_size = len(group_B)
self._n_used_atoms = self._group_A_size + self._group_B_size
self._reordering = np.concatenate((self.group_A, self.group_B))
self.cutoff = cutoff
self.n_atoms = n_atoms
@@ -74,7 +75,7 @@ def compute_coordination_number(self, pos):
pos, batch_size = sanitize_positions_shape(pos, self.n_atoms)
pos = pos[:, self._reordering, :]
dist = compute_distances_matrix(pos=pos,
n_atoms=self.n_atoms,
n_atoms=self._n_used_atoms,
PBC=self.PBC,
cell=self.cell,
scaled_coords=self.scaled_coords)
@@ -195,6 +196,20 @@ def test_coordination_number():
out_2.sum().backward()
assert(torch.allclose(out, out_2))

# check using only subset of atoms
model = CoordinationNumbers(group_A=[2, 3],
group_B=[0, 1, 4, 5, 6],
cutoff=cutoff,
n_atoms=n_atoms,
PBC=True,
cell=cell,
mode='continuous',
scaled_coords=False,
switching_function=switching_function)

out = model(pos)
out.sum().backward()

# TODO add reference value for check

if __name__ == "__main__":
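The `n_atoms` → `_n_used_atoms` fix in this file matters because, once the positions are restricted to the atoms in `group_A` and `group_B`, the distance matrix has one row and column per *used* atom, not per atom in the system. A minimal standalone sketch of that shape logic (plain PyTorch, no PBC, hypothetical atom indices):

```python
import torch

pos = torch.rand(1, 10, 3)             # batch of 1, 10 atoms total
group_A = [2, 3]                       # hypothetical groups: only 5 of the
group_B = [0, 1, 4]                    # 10 atoms are actually used
reordering = group_A + group_B

# restrict positions to the used atoms, as done via self._reordering
sub = pos[:, reordering, :]            # shape (1, 5, 3)

# pairwise distance matrix over the used atoms only
diff = sub.unsqueeze(2) - sub.unsqueeze(1)
dist = diff.norm(dim=-1)               # shape (1, 5, 5), symmetric
```

Passing the full `n_atoms` here instead of the number of used atoms would make any shape-dependent logic downstream inconsistent with the `(1, 5, 3)` tensor actually being processed.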
