
Commit 4e1e0fc

remove duplicated docs (#221)
1 parent 2728b39 commit 4e1e0fc

File tree

3 files changed: +2 −163 lines changed


docs/uenv-gromacs.md

Lines changed: 1 addition & 129 deletions
@@ -1,132 +1,4 @@
# GROMACS

GROMACS (GROningen Machine for Chemical Simulations) is a versatile and widely used open-source package for performing molecular dynamics, i.e. simulating the Newtonian equations of motion for systems with hundreds to millions of particles.
It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have many complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (which usually dominate simulations), many groups also use it for research on non-biological systems, e.g. polymers.

!!! info "Licensing Terms and Conditions"

    GROMACS is a joint effort, with contributions from developers around the world: users agree to acknowledge use of GROMACS in any reports or publications of results obtained with the Software (see the [GROMACS Homepage] for details).

## ALPS (GH200)

### Setup

On ALPS, we provide pre-built user environments containing GROMACS alongside all the required dependencies for the GH200 hardware setup. To access the `gmx_mpi` executable, we do the following:

```bash
# list the gromacs images and versions available on the system
uenv image find gromacs

uenv image pull gromacs/2024:v1
uenv start --view=gromacs gromacs/2024:v1  # start the uenv with the gromacs view

gmx_mpi --version  # check GROMACS version
```

The images also provide two alternative views, namely `plumed` and `develop`.
After starting the pulled image using `uenv start ...`, one may do the following to see the available views.

```bash
$ uenv status
/user-environment:gromacs-gh200
  GPU-optimised GROMACS with and without PLUMED, and the toolchain to build your own GROMACS.
  modules: no modules available
  views:
    develop
    gromacs
    plumed
```

The `develop` view has all the required dependencies for GROMACS without the program itself. It is meant for users who want to build a customized variant of GROMACS from source for their simulations. The view provides the required compilers (GCC 12), CMake, CUDA, hwloc and Cray MPICH, among many others, which a custom GROMACS build can use during compilation and installation. Users must enable this view each time they want to use their custom GROMACS installation.

The `plumed` view contains GROMACS 2022.5 (an older version) with PLUMED 2.9.0, due to the compatibility requirements of PLUMED. CSCS will periodically update these user environment images to feature newer versions as they become available.

The `gromacs` view contains the newest GROMACS 2024.1, which has been configured and tested for the highest performance on the Grace-Hopper nodes.
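
As an illustration, a custom build inside the `develop` view might look like the following. This is a minimal sketch, not an officially supported recipe: the GROMACS version, download URL, CMake options and install prefix are placeholders to adapt to your own needs.

```bash
# Start the uenv with the build toolchain (GCC, CMake, CUDA, Cray MPICH) in PATH
uenv start --view=develop gromacs/2024:v1

# Fetch and unpack the GROMACS sources (version and URL are illustrative)
wget https://ftp.gromacs.org/gromacs/gromacs-2024.1.tar.gz
tar xf gromacs-2024.1.tar.gz
cd gromacs-2024.1

# Configure a GPU-enabled MPI build against the dependencies provided by the view
mkdir build && cd build
cmake .. -DGMX_MPI=ON -DGMX_GPU=CUDA \
         -DCMAKE_INSTALL_PREFIX=$HOME/sw/gromacs-custom

make -j 16
make install
```

Re-enable the `develop` view and add the chosen install prefix's `bin` directory to `PATH` whenever the custom build is used.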

### How to Run

To start a job, two bash scripts are required: a standard SLURM submission script, and a wrapper to start the CUDA MPS daemon (in order to have multiple MPI ranks per GPU).

The CUDA MPS wrapper is shown here:
```bash
#!/bin/bash
# Example mps-wrapper.sh usage:
# > srun [...] mps-wrapper.sh -- <cmd>

TEMP=$(getopt -o '' -- "$@")
eval set -- "$TEMP"

# Now go through all the options
while true; do
    case "$1" in
        --)
            shift
            break
            ;;
        *)
            echo "Internal error! $1"
            exit 1
            ;;
    esac
done

set -u

export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log

# Launch MPS from a single rank per node
if [ $SLURM_LOCALID -eq 0 ]; then
    CUDA_VISIBLE_DEVICES=0,1,2,3 nvidia-cuda-mps-control -d
fi

# Wait for MPS to start
sleep 5

exec "$@"
```

The wrapper script above can be made executable with `chmod +x mps-wrapper.sh`.
The SLURM submission script can be adapted from the template below to launch the application through `mps-wrapper.sh`.

```bash
#!/bin/bash

#SBATCH --job-name="JOB NAME"
#SBATCH --nodes=1               # number of GH200 nodes, each with 4 CPUs and 4 GPUs
#SBATCH --ntasks-per-node=8     # 8 MPI ranks per node
#SBATCH --cpus-per-task=32      # 32 OMP threads per MPI rank
#SBATCH --account=ACCOUNT
#SBATCH --hint=nomultithread

export MPICH_GPU_SUPPORT_ENABLED=1

export GMX_GPU_DD_COMMS=true
export GMX_GPU_PME_PP_COMMS=true
export GMX_FORCE_UPDATE_DEFAULT_GPU=true
export GMX_ENABLE_DIRECT_GPU_COMM=1
export GMX_FORCE_GPU_AWARE_MPI=1

srun ./mps-wrapper.sh -- gmx_mpi mdrun -s input.tpr -ntomp 32 -bonded gpu -nb gpu -pme gpu -pin on -v -noconfout -dlb yes -nstlist 300 -gpu_id 0123 -npme 1 -nsteps 10000 -update gpu
```

This can be run using `sbatch launch.sbatch` on the login node with the user environment loaded.
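
For example, assuming the wrapper and the template above are saved as `mps-wrapper.sh` and `launch.sbatch` in the working directory, a typical submission sequence might look like this (a sketch; the image name follows the setup section above):

```bash
uenv start --view=gromacs gromacs/2024:v1   # load the user environment on the login node
chmod +x mps-wrapper.sh                     # make the MPS wrapper executable
sbatch launch.sbatch                        # submit the job
squeue -u $USER                             # check the job status
```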

This submission script is only representative. Users must run their input files with a range of parameters to find an optimal set for the production runs. Some hints for this exploration are given below:

!!! info "Configuration Hints"

    - Each Grace CPU has 72 cores, but a small number of them are used for underlying processes such as runtime daemons, so not all 72 cores are available for compute. To be safe, do not exceed 64 OpenMP threads on a single CPU, even if that leaves a handful of cores idle.
    - Each node has 4 Grace CPUs and 4 Hopper GPUs. When running 8 MPI ranks (i.e. two per CPU), do not ask for more than 32 OpenMP threads per rank, so that no more than 64 threads run on a single CPU.
    - Try both the 64 OMP threads x 1 MPI rank and the 32 OMP threads x 2 MPI ranks configurations for the test problems and pick the one giving better performance (see the sketch after this list). When using multiple GPUs, the latter can be faster by 5-10%.
    - `-update gpu` may not be possible for problems that require constraints on all atoms. In such cases, the update (integration) step is performed on the CPU. This can lead to a performance loss of at least 10% on a single GPU and, due to the overhead of additional data transfers on each step, to lower scaling performance on multiple GPUs.
    - When running on a single GPU, one can either configure the simulation with 1-2 MPI ranks and `-gpu_id 0`, or specify only a few parameters and let GROMACS run with defaults/inferred values, with a command like the following in the SLURM script:
      `srun ./mps-wrapper.sh -- gmx_mpi mdrun -s input.tpr -ntomp 64`
    - Given the compute throughput of each Grace-Hopper module (single CPU+GPU), **for smaller-sized problems, it is possible that a single-GPU run is the fastest**. This may happen when the overheads of communication and orchestration exceed the benefits of parallelism across multiple GPUs. In our test cases, a single Grace-Hopper module has consistently shown a 6-8x performance speedup over a single node on Piz Daint.
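
The thread/rank comparison mentioned above can be started from the submission template, for example as follows. This is only a sketch: the `srun` overrides assume the one-node allocation shown earlier, and all `mdrun` tuning flags should be re-checked against your own input.

```bash
# 8 MPI ranks x 32 OpenMP threads per node (two ranks per Grace CPU)
srun --ntasks-per-node=8 --cpus-per-task=32 ./mps-wrapper.sh -- \
    gmx_mpi mdrun -s input.tpr -ntomp 32 -bonded gpu -nb gpu -pme gpu -npme 1 -update gpu

# 4 MPI ranks x 64 OpenMP threads per node (one rank per Grace CPU)
srun --ntasks-per-node=4 --cpus-per-task=64 ./mps-wrapper.sh -- \
    gmx_mpi mdrun -s input.tpr -ntomp 64 -bonded gpu -nb gpu -pme gpu -npme 1 -update gpu
```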

!!! warning "Known Performance/Scaling Issues"

    - The current build of GROMACS on our system allows **only one MPI rank to be dedicated to PME** with `-npme 1`. This becomes a serious performance limitation for larger systems, where the non-PME ranks finish their work before the PME rank, leading to unwanted load imbalance across ranks. This limitation is targeted to be fixed in subsequent releases of our user environment builds.
    - The above problem is especially critical for large problem sizes (1+ million atom systems) but is far less apparent in small and medium-sized runs.
    - If the problem allows the integration step to take place on the GPU with `-update gpu`, that can lead to significant performance and scaling gains, as it allows an even greater part of the computation to take place on the GPU.
    - SLURM and CUDA MPS configurations are being explored to extend simulations beyond a single compute node (of 4 CPUs+GPUs). The documentation will be updated once scaling across nodes is reliably reproduced. For now, **simulations are recommended to be contained to a single node**.

[GROMACS Homepage]: https://www.gromacs.org
This page has moved to the [CSCS Documentation](https://eth-cscs.github.io/cscs-docs/software/sciapps/gromacs/).

docs/uenv-qe.md

Lines changed: 0 additions & 2 deletions
@@ -1,5 +1,3 @@
# Quantum ESPRESSO

This page has moved to the [CSCS Documentation](https://eth-cscs.github.io/cscs-docs/software/sciapps/quantumespresso/#building-qe-from-source).

docs/uenv-vasp.md

Lines changed: 1 addition & 32 deletions
@@ -1,34 +1,3 @@
# VASP

VASP (Vienna Ab initio Simulation Package) is a software package for performing ab initio quantum mechanical calculations.

> [!NOTE]
> VASP is only available to users with the appropriate license. Check the [VASP website](https://www.vasp.at/sign_in/registration_form/) for licensing.
> Contact the [CSCS service desk](https://support.cscs.ch/) for license verification. Once verified, users are added to the `vasp6` group, which allows access to prebuilt images and the source code.

## Accessing VASP images

!!! failure

    Describe access to images. Not yet finalized.

## Usage

The default build of VASP includes MPI, HDF5, Wannier90 and OpenACC (on GH200 and A100 architectures).
Start the uenv and load the `vasp` view:

```
uenv start --view=vasp ${path_to_vasp_image}
```

The `vasp_std`, `vasp_gam` and `vasp_ncl` executables are now available for use.

## Build from source

Start the uenv and load the `develop` view:

```
uenv start --view=develop ${path_to_vasp_image}
```

This will ensure that the compiler executables are in `PATH`.
The appropriate makefile from the `arch` directory in the VASP source tree should be selected, and its link / include paths changed to use the `/user-environment/env/develop` prefix, where all required dependencies can be found.
For example, on GH200 and A100 architectures, select `makefile.include.nvhpc_omp_acc` and change the GPU flags to match the architecture, in this case from `-gpu=cc60,cc70,cc80,cuda11.0` to `-gpu=cc80,cc90,cuda12.2` (depending on the included CUDA version). After changing all include / link paths, compile VASP using make (only a single-threaded build is supported).
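
A minimal sketch of those steps is shown below. The source directory name is a placeholder, the `sed` edit simply applies the flag change quoted above, and the include / link path adjustments still need to be made by hand in `makefile.include`:

```bash
# Inside the uenv with the develop view loaded
cd vasp.6.x.y                # your licensed VASP source tree (placeholder name)

# Start from the NVHPC + OpenMP + OpenACC template
cp arch/makefile.include.nvhpc_omp_acc makefile.include

# Target Hopper/Ampere and the CUDA version shipped with the uenv
# (illustrative edit; the exact string may differ between VASP releases)
sed -i 's/-gpu=cc60,cc70,cc80,cuda11.0/-gpu=cc80,cc90,cuda12.2/' makefile.include

# After pointing the include / link paths at /user-environment/env/develop,
# build the three executables with a single-threaded make
make std gam ncl
```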
This page has moved to the [CSCS Documentation](https://eth-cscs.github.io/cscs-docs/software/sciapps/vasp/#building-vasp-from-source).
