Installation on ECMWF Atos
The default GNU compiler version is 8.3; this recipe uses the newer 11.2.0.
Optionally, to use faster but potentially less accurate single-precision floating-point numbers, add
-DPOIS_PRECISION=32 -DFIELD_PRECISION=32
to the cmake command line below.
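Combined with the FFTW flags from the recipe below, the full invocation would look like the following sketch (FFTW_DIR is assumed to be set by the fftw module):

```shell
# Sketch: the cmake line from the recipe below with the optional
# single-precision flags appended.
cmake .. -DUSE_FFTW=True \
    -DFFTWF_LIB=$FFTW_DIR/lib/libfftw3f.a \
    -DFFTW_LIB=$FFTW_DIR/lib/libfftw3.a \
    -DFFTW_INCLUDE_DIR=$FFTW_DIR/include \
    -DPOIS_PRECISION=32 -DFIELD_PRECISION=32
```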
export SYST=gnu-fast
module load prgenv/gnu
module load gcc/11.2.0
module load openmpi
module load cmake/3.20.2
module load netcdf4/4.7.4
module load fftw/3.3.9
git clone https://github.com/dalesteam/dales.git
cd dales
mkdir build
cd build
cmake .. -DUSE_FFTW=True \
-DFFTWF_LIB=$FFTW_DIR/lib/libfftw3f.a \
-DFFTW_LIB=$FFTW_DIR/lib/libfftw3.a -DFFTW_INCLUDE_DIR=$FFTW_DIR/include
make -j 8
Using hyper-threading does not seem beneficial; I got the best results with 128 MPI tasks per node. When restarting a run, the init*latest symlinks are not reliably created, so specify the actual file name (including the time stamp) in namoptions.
Job script for the fractional (nf) queue (not fully tested):
#!/bin/bash
#SBATCH --qos=nf # nf: fractional queue; np: parallel queue (required for jobs > 1/2 node)
#SBATCH -t 24:00:00
#SBATCH -n 24 #total number of tasks, number of nodes calculated automatically
#SBATCH -A <your-project-name>
#SBATCH --mem-per-cpu=1G
module load prgenv/gnu
module load gcc/11.2.0
module load openmpi
module load cmake/3.20.2
module load netcdf4/4.7.4
module load fftw/3.3.9
DALES=<path-to-your-dales>
NAMOPTIONS=namoptions.001
srun $DALES $NAMOPTIONS | tee -a output.txt
Job script for the parallel (np) queue (not fully tested):
#!/bin/bash
#SBATCH --qos=np # nf: fractional queue; np: parallel queue (required for jobs > 1/2 node)
#SBATCH -t 24:00:00
#SBATCH -n 512 #total number of tasks, number of nodes calculated automatically
#SBATCH -A <your-project-name>
#SBATCH --ntasks-per-node=128
module load prgenv/gnu
module load gcc/11.2.0
module load openmpi
module load cmake/3.20.2
module load netcdf4/4.7.4
module load fftw/3.3.9
DALES=<path-to-your-dales>
NAMOPTIONS=namoptions.001
srun $DALES $NAMOPTIONS | tee -a output.txt
The following bash script looks for restart files in the current directory.
If it finds any, it picks the one with the highest time stamp
and edits the namoptions file to perform a restart with that file name.
If no restart files are found, it edits namoptions to set lwarmstart = .false.
# find restart file with latest time stamp, if it exists
STARTFILE=`ls initd*h*mx000y000.001 2>/dev/null | tail -n 1`
if [ ! -z "$STARTFILE" ]
then
# edit lwarmstart to true
sed -i -r "s/lwarmstart.*=.*/lwarmstart = .true./" $NAMOPTIONS
# edit startfile
sed -i -r "s/startfile.*=.*/startfile = \"$STARTFILE\"/" $NAMOPTIONS
else
# edit lwarmstart to false
sed -i -r "s/lwarmstart.*=.*/lwarmstart = .false./" $NAMOPTIONS
fi
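The sed edits above can be checked in isolation on a scratch namelist fragment (the restart file name below is hypothetical):

```shell
#!/bin/bash
# Scratch check of the sed substitutions on a throwaway namoptions file.
TMP=$(mktemp -d)
NAMOPTIONS="$TMP/namoptions.001"
printf 'lwarmstart = .false.\nstartfile = "initd000h00mx000y000.001"\n' > "$NAMOPTIONS"

# Pretend this restart file was found by the ls above:
STARTFILE="initd012h30mx000y000.001"
sed -i -r "s/lwarmstart.*=.*/lwarmstart = .true./" "$NAMOPTIONS"
sed -i -r "s/startfile.*=.*/startfile = \"$STARTFILE\"/" "$NAMOPTIONS"
cat "$NAMOPTIONS"
```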
CDO version 2.0.4 and newer handle merging of DALES netCDF tiles much better than older versions. This
CDO version is not yet installed but can be installed from source.
cd $PERM
mkdir src
cd src
wget https://code.mpimet.mpg.de/attachments/download/26823/cdo-2.0.5.tar.gz
module load prgenv/gnu
module load gcc/11.2.0
module load netcdf4/4.7.4
tar -xzf cdo-2.0.5.tar.gz
cd cdo-2.0.5/
./configure --with-netcdf=`nc-config --prefix` --prefix=$PERM/local
make -j 8
make install
Run as
$PERM/local/bin/cdo
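As a usage sketch, CDO's collgrid operator collects per-processor tiles into a single file; the file names here are hypothetical, so adjust them to your actual output:

```shell
# Merge DALES per-processor netCDF tiles into one file (hypothetical names).
$PERM/local/bin/cdo collgrid fielddump.*.001.nc fielddump.001.nc
```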
These steps were tested on the TEMS test system in May 2021, using the git branches v4.3 (the current default branch) and to4.4_Fredrik.
module load prgenv/gnu
module load openmpi
module load cmake/3.19.5
module load netcdf4/4.7.4
module load fftw/3.3.9
export SYST=gnu-fast
git clone https://github.com/dalesteam/dales.git
cd dales
mkdir build
cd build
cmake .. # -DUSE_FFTW=True
make -j 4
Note: with the optional -DUSE_FFTW=True, FFTW is not found automatically; edit the CMakeLists or set environment variables. The lib and include paths can be found with module show fftw.
Sample job script. Starts dales in the directory where the job was submitted. To run somewhere else, use the --chdir= option or add a cd command in the script.
#!/bin/bash
#SBATCH --job-name=dales
#SBATCH --qos=np
#SBATCH --nodes=1
#SBATCH --ntasks=128
#SBATCH --time=24:0:0
# other SBATCH options :
# --output=test-mpi.%j.out
# --error=test-mpi.%j.out
# --chdir=/scratch...
# --mem-per-cpu=100
# --account=<PROJECT-ID>
# modules here should match what was used during compile
module load prgenv/gnu
module load openmpi
module load cmake/3.19.5
module load netcdf4/4.7.4
module load fftw/3.3.9
NAMOPTIONS=namoptions.001
DALES=$HOME/dales/build/src/dales4
CASE=`pwd`
echo DALES $DALES
echo CASE $CASE
echo hostname `hostname`
# optionally edit nprocx, nprocy in namelist
#NX=8
#NY=16
#sed -i -r "s/nprocx.*=.*/nprocx = $NX/;s/nprocy.*=.*/nprocy = $NY/" $NAMOPTIONS
srun $DALES $NAMOPTIONS | tee output.txt
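If you edit nprocx and nprocy, their product must match the number of MPI tasks requested from SLURM. A small sanity check, using the values from the script above:

```shell
#!/bin/bash
# Sanity check: the namelist decomposition must match the SLURM task count.
NX=8
NY=16
NTASKS=128
if [ $((NX * NY)) -ne $NTASKS ]; then
    echo "decomposition mismatch: $NX x $NY != $NTASKS tasks" >&2
    exit 1
fi
echo "ok: $NX x $NY = $NTASKS"
```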
Alternatively, with the Intel compilers:
module load prgenv/intel
module load intel-mpi
module load cmake/3.19.5
module load netcdf4/4.7.4
module load fftw/3.3.9
export SYST=lisa-intel
Quick single-node benchmarking shows GNU Fortran being about 13% faster than Intel. GNU 8.3 (default) and 10.2 (newest) perform very similarly.