MPI is the de facto standard for inter-node communication on HPC systems, and has been for the past 25 years. While highly successful, MPI is a standard for source code (it defines an API), not a standard for binary compatibility (it does not define an ABI). This means that applications running on HPC systems need to be compiled anew on every system, which is tedious because the software available on each HPC system differs slightly.
This project attempts to remedy this. It defines an ABI for MPI, and provides an MPI implementation based on this ABI. That is, MPItrampoline does not implement any MPI functions itself; it only forwards them to a "real" implementation via this ABI. The advantage is that one can produce "portable" applications that can use any given MPI implementation. For example, this will make it possible to build external packages for Julia via Yggdrasil that run efficiently on almost any HPC system.
A small and simple MPIwrapper library is used to provide this ABI for any given MPI installation. MPIwrapper needs to be compiled for each MPI installation that is to be used with MPItrampoline, but this is quick and easy.
MPItrampoline has been successfully tested in the following environments:
- Debian 11.0 via Docker (MPICH; arm32v5, arm32v7, arm64v8, mips64le, ppc64le, riscv64; C/C++ only)
- Debian 11.0 via Docker (MPICH; i386, x86-64)
- macOS laptop (MPICH, OpenMPI; x86-64)
- macOS via Github Actions (OpenMPI; x86-64)
- Ubuntu 20.04 via Docker (MPICH; x86-64)
- Ubuntu 20.04 via Github Actions (MPICH, OpenMPI; x86-64)
- Blue Waters, HPC system at the NCSA (Cray MPICH; x86-64)
- Graham, HPC system at Compute Canada (Intel MPI; x86-64)
- Marconi A3, HPC system at Cineca (Intel MPI; x86-64)
- Niagara, HPC system at Compute Canada (OpenMPI; x86-64)
- Summit, HPC system at ORNL (Spectrum MPI; IBM POWER 9)
- Symmetry, in-house HPC system at the Perimeter Institute (MPICH, OpenMPI; x86-64)
Install MPIwrapper on the HPC system, wrapping the MPI installation you want to use there. You can install MPIwrapper multiple times if you want to wrap more than one MPI implementation.
This can be as simple as

```sh
cmake -S . -B build -DMPIEXEC_EXECUTABLE=mpiexec -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_INSTALL_PREFIX=$HOME/mpiwrapper
cmake --build build
cmake --install build
```
However, nothing is ever simple on an HPC system: it might be necessary to load certain modules first, or to pass additional CMake options that describe the MPI installation.
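For example, on a system where MPI is provided via environment modules, the configuration might look roughly like the sketch below. The module name is purely illustrative, and the `MPI_C_COMPILER`/`MPI_CXX_COMPILER` hints are standard CMake FindMPI variables that you may or may not need:

```sh
# Hypothetical example: adjust module names and paths to your system
module load mpich                 # site-specific; could be openmpi, intel-mpi, ...
cmake -S . -B build \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DCMAKE_INSTALL_PREFIX=$HOME/mpiwrapper \
    -DMPI_C_COMPILER=$(which mpicc) \
    -DMPI_CXX_COMPILER=$(which mpicxx) \
    -DMPIEXEC_EXECUTABLE=$(which mpiexec)
cmake --build build
cmake --install build
```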
The MPIwrapper libraries remain on the HPC system; they are installed independently of any application.
Build your application as usual, using MPItrampoline as the MPI library.
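As a sketch (the installation path and the mpicc wrapper name below are assumptions, not something this document prescribes), building against MPItrampoline can look just like building against any other MPI implementation:

```sh
# Hypothetical: MPItrampoline assumed to be installed under $HOME/mpitrampoline
$HOME/mpitrampoline/bin/mpicc -o your-application your-application.c
# Alternatively, point your build system at MPItrampoline's include/ and lib/
# directories, just as you would for any other MPI installation.
```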
At startup time, MPItrampoline needs to be told which MPIwrapper library to use. This is done via the environment variable `MPITRAMPOLINE_LIB`. You also need to point MPItrampoline's `mpiexec` to the respective wrapper created by MPIwrapper, using the environment variable `MPITRAMPOLINE_MPIEXEC`.
For example:

```sh
env MPITRAMPOLINE_MPIEXEC=$HOME/mpiwrapper/bin/mpiwrapper-mpiexec \
    MPITRAMPOLINE_LIB=$HOME/mpiwrapper/lib/libmpiwrapper.so \
    mpiexec -n 4 ./your-application
```
The `mpiexec` you run here needs to be the one provided by MPItrampoline.
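Because the MPIwrapper library is selected at run time, the same binary can be pointed at a different MPI installation simply by changing these environment variables. A hypothetical sketch, assuming two separate MPIwrapper installations (the paths are assumptions):

```sh
# Run the same, unmodified binary against two different MPI installations.
# Paths are hypothetical; use wherever you installed each MPIwrapper.
env MPITRAMPOLINE_MPIEXEC=$HOME/mpiwrapper-mpich/bin/mpiwrapper-mpiexec \
    MPITRAMPOLINE_LIB=$HOME/mpiwrapper-mpich/lib/libmpiwrapper.so \
    mpiexec -n 4 ./your-application

env MPITRAMPOLINE_MPIEXEC=$HOME/mpiwrapper-openmpi/bin/mpiwrapper-mpiexec \
    MPITRAMPOLINE_LIB=$HOME/mpiwrapper-openmpi/lib/libmpiwrapper.so \
    mpiexec -n 4 ./your-application
```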
MPItrampoline uses the C preprocessor to create wrapper functions for each MPI function. This is how `MPI_Send` is wrapped:

```c
FUNCTION(int, Send,
         (const void *buf, int count, MT(Datatype) datatype, int dest, int tag,
          MT(Comm) comm),
         (buf, count, (MP(Datatype))datatype, dest, tag, (MP(Comm))comm))
```
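Conceptually, such a macro expands to a thin forwarding function: it converts MPItrampoline's handle types to the wrapped implementation's types and calls through to MPIwrapper. The sketch below is illustrative only; the type names and the forwarding symbol `mpiwrapper_Send` are stand-ins, not the actual generated code:

```c
#include <stdint.h>

/* Stand-in declarations; the real type and symbol names in MPItrampoline and
   MPIwrapper differ. Handles are assumed to be integer-sized values here. */
typedef uintptr_t MPI_Datatype;  /* MPItrampoline-side handle (stand-in) */
typedef uintptr_t MPI_Comm;
typedef uintptr_t MP_Datatype;   /* MPIwrapper-side handle (stand-in) */
typedef uintptr_t MP_Comm;

/* Function pointer resolved at startup from the MPIwrapper shared library. */
extern int (*mpiwrapper_Send)(const void *buf, int count, MP_Datatype datatype,
                              int dest, int tag, MP_Comm comm);

/* Roughly what FUNCTION(int, Send, ...) produces: convert handle types and
   forward the call to the wrapped MPI implementation. */
int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest,
             int tag, MPI_Comm comm) {
  return mpiwrapper_Send(buf, count, (MP_Datatype)datatype, dest, tag,
                         (MP_Comm)comm);
}
```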
Unfortunately, MPItrampoline does not yet wrap the Fortran API. Your help is welcome.
Certain MPI types, constants, and functions are difficult to wrap. In theory, there could be MPI libraries for which the current MPI ABI cannot be implemented. If you encounter such a case, please let me know; maybe there is a work-around.