This repository provides benchmarks for GROMACS, a molecular dynamics simulation package. It is under development.
- CPU: x86, Arm64, PPC64
- GPU: NVIDIA, AMD, Intel*
- Programming languages: C, C++, Python*
- Parallel models: MPI, OpenMP*
- Accelerator offload models: CUDA, OpenCL, SYCL
- Dense linear algebra
- Sparse linear algebra
- Spectral methods
- N-body methods
- Structured grids
- Unstructured grids
- Monte Carlo
The benchmark can be built using Spack or manually. If you use the ReFrame method described below to run the benchmark, the build step is performed automatically.
Once it has been built, the benchmark executable is called `gmx` (or `gmx_mpi` for MPI-enabled builds).
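As an illustrative sketch only (the actual input file and flag values depend on the benchmark case; `benchmark.tpr` is a hypothetical input name, not one shipped with this repository), a GROMACS run typically looks like:

```shell
# Run the MD engine on a portable binary input file (.tpr).
# -ntomp sets OpenMP threads per rank; -nsteps caps the number of MD steps.
gmx mdrun -s benchmark.tpr -ntomp 8 -nsteps 10000

# MPI builds provide gmx_mpi and are started under the MPI launcher, e.g.:
# mpirun -np 4 gmx_mpi mdrun -s benchmark.tpr -ntomp 8
```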
GROMACS can be installed from a Spack package maintained by the core developers:
For GROMACS with MPI on CPU-based architectures:

```shell
spack install [email protected] +mpi
```

For an MPI + CUDA build for NVIDIA GPU architectures:

```shell
spack install [email protected] +mpi +cuda
```

For a multi-node, multi-GPU build targeting specific GPU architectures:

```shell
spack install [email protected] +mpi +cuda cuda_arch=70,80,90 +cufftmp
```

For single NVIDIA GPU tests it is often better to use thread-MPI and CUDA:

```shell
spack install [email protected] ~mpi +cuda
```

To enable SYCL for use with Intel and AMD GPUs (hardware-specific backends may need to be added to the command):

```shell
spack install [email protected] +sycl
```

Note: to use Spack, you must have Spack installed on the system you are using, together with a valid Spack system configuration. Example Spack configurations are available in a separate repository: https://github.com/ukri-bench/system-configs
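A typical end-to-end Spack workflow might look like the sketch below (the variant selection is an example; the right variants depend on your site configuration):

```shell
# Install a CUDA-enabled MPI build (add site-specific variants as needed).
spack install [email protected] +mpi +cuda

# Make the installed package available in the current shell environment.
spack load gromacs

# Verify the installation; MPI builds provide gmx_mpi instead of gmx.
gmx --version
```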
- ADD: Describe (or link to) the manual build process for systems where
  baseline performance has been measured.
- Links to multiple sub-pages of instructions can be added if they are too long to fit on this page
This section contains example performance data from selected HPC systems.
ADD: Example performance data
This benchmark description and associated files are released under the MIT license.