Simulating a large system with a DPA fine-tuned model when a GPU out-of-memory error is encountered #4993
mhkcjjmt-debug started this conversation in General
Replies: 1 comment
-
For large-scale MD simulations, it is recommended to distill DPA-2/3 models into the compressible DPA-1 model. Please refer to https://www.nature.com/articles/s41524-024-01493-2 for the method and to https://www.bohrium.com/notebooks/76262686918 for an example.
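A minimal command-line sketch of such a distill-then-compress workflow, assuming DeePMD-kit v3 with the PyTorch backend; the file names (distill_input.json, student.pth, student_compressed.pth) are placeholders, the student model settings come from the linked paper and notebook, and the exact flags may differ between DeePMD-kit versions:

```shell
# 1. Build a teacher-labeled training set for the student, e.g. by running MD with the
#    fine-tuned DPA-2/3 teacher and labeling the sampled frames (see the linked paper).

# 2. Train a compressible DPA-1 student model on the teacher-labeled data;
#    distill_input.json is a placeholder training script.
dp --pt train distill_input.json

# 3. Freeze the trained student model.
dp --pt freeze -o student.pth

# 4. Compress the frozen student model so it is cheap enough to evaluate in LAMMPS.
dp --pt compress -i student.pth -o student_compressed.pth
```

The distillation data-set construction and the student settings follow the linked paper and notebook; the commands above only show where compression enters the pipeline.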
-
I want to know whether DPA (1.0, 2.0, or 3.0) models support compression. Running "dp --pt compress" results in an error. After the fine-tuning process, it seems difficult to run a DPA fine-tuned model with LAMMPS on a GPU (ca. 10 GB memory) when the system has 400+ atoms. Is there any method to simulate a large system with a DPA model?
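For reference, a minimal sketch of how a compressed DeePMD model is loaded from LAMMPS via the standard deepmd pair style; system.data, student_compressed.pth, the lmp binary name, and the thermostat settings are placeholders, not a recommended setup:

```shell
# Write a minimal LAMMPS input that loads the (compressed) DeePMD model.
cat > in.deepmd <<'EOF'
units        metal
boundary     p p p
atom_style   atomic
read_data    system.data

pair_style   deepmd student_compressed.pth
pair_coeff   * *

timestep     0.001
fix          1 all nvt temp 300.0 300.0 0.1
run          1000
EOF

# Run with a DeePMD-enabled LAMMPS build (the binary name may differ on your system).
lmp -in in.deepmd
```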