Ansible Role: marvel-nccr.aiida_cws

An Ansible role that installs and configures an environment for running the AiiDA common-workflows on Linux Ubuntu (with a bash shell).

The primary goal is to create an environment that a user can enter and, without any other steps, run commands like:

aiida-common-workflows launch relax quantum_espresso -S Si -X qe.pw -n 2

for all of the available (open-source) simulation codes: abinit, bigdft (to come), cp2k, fleur, nwchem, qe, siesta, wannier90, and yambo.

The key components are:

  • PostgreSQL is installed system-wide, with an auto-start service.
  • RabbitMQ is installed system-wide, with an auto-start service.
  • Conda is installed system-wide (as Miniforge), with activation on terminals.
  • The AiiDA Python environment is installed into the aiida Conda environment, including the aiida-common-workflows package, dependent plugins, and jupyterlab; an AiiDA profile is then created.
  • Each simulation code, and its dependencies, is installed into its own Conda environment.
    • An AiiDA Code is then created for each code executable.
  • The aiida-pseudo package is used to install the requisite pseudo-potentials.

The use of the Conda package and environment manager allows for fast installation of pre-compiled simulation codes, isolated from each other (ensuring the correct dependency versions, environment variables, etc. for each), while common dependencies are shared across environments via hard-links, keeping disk usage to a minimum.

Installation

ansible-galaxy install marvel-nccr.aiida_cws
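
Alternatively, you can list the role in a requirements file and install from that; a sketch, where requirements.yml is the conventional file name:

cat > requirements.yml <<'EOF'
- src: marvel-nccr.aiida_cws  # optionally pin a release with 'version: vX.Y.Z'
EOF
ansible-galaxy install -r requirements.yml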

Role Variables

See defaults/main.yml

Example Playbook

- hosts: servers
  roles:
  - role: marvel-nccr.aiida_cws
    vars:
      aiida_timezone_name: Europe/Zurich  # to set a certain timezone for AiiDA
      aiida_create_swapfile: true  # create a swapfile for RAM overflow, non-containers only
      aiida_allow_mpi_on_root: true  # containers only
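
To apply the playbook, run ansible-playbook as usual; a sketch, where inventory.yml and playbook.yml are placeholders for your own files:

ansible-playbook -i inventory.yml playbook.yml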

If you want to install SLURM and use it as the scheduler, you can use e.g.:

- hosts: servers
  roles:
  - role: marvel-nccr.slurm
  - role: marvel-nccr.aiida_cws
    vars:
      aiida_timezone_name: Europe/Zurich
      aiida_create_swapfile: true
      aiida_conda_code_computer: local_slurm_conda
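
After a build with SLURM, you can check from inside the machine that the scheduler and the corresponding AiiDA computer are available; a sketch, assuming the computer label matches aiida_conda_code_computer above:

(aiida) root@instance:/# verdi computer show local_slurm_conda
(aiida) root@instance:/# sinfo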

Disk optimisation

To optimise disk usage before packaging, there are a few extra steps you can take. These are not included in the role, since they are not idempotent, and should only be run after a full build:

- name: Run the equivalent of "apt-get clean"
  apt:
    clean: yes
- name: wipe apt lists
  become: true
  command: "rm -rf /var/lib/apt/lists/*"
- name: wipe user cache
  file:
    state: absent
    path: "~/.cache"
- name: wipe root cache
  become: true
  file:
    state: absent
    path: "~/.cache"
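
Conda's package caches can also be cleared at this point; this is not part of the role, but a typical command is:

conda clean --all --yes  # remove cached tarballs, index caches, and unlinked packages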

Usage

Environment management

Once logged in to a terminal, the base Conda environment is activated. To manage Conda environments, you can use the conda command, or mamba, a drop-in replacement that installs packages faster. For a brief introduction to Conda, see the getting started tutorial.

The alias listenvs (for conda env list) can be used to list the available environments:

(base) root@instance:/# listenvs
# conda environments:
#
base                  *  /root/.conda
abinit                   /root/.conda/envs/abinit
aiida                    /root/.conda/envs/aiida
cp2k                     /root/.conda/envs/cp2k
fleur                    /root/.conda/envs/fleur
nwchem                   /root/.conda/envs/nwchem
qe                       /root/.conda/envs/qe
siesta                   /root/.conda/envs/siesta
wannier90                /root/.conda/envs/wannier90
yambo                    /root/.conda/envs/yambo

To enter an environment, use the alias workon (for conda activate):

(base) root@instance:/# workon aiida
(aiida) root@instance:/#

You can see what is installed in that environment using conda list:

(aiida) root@instance:/# conda list
# packages in environment at /root/.conda/envs/aiida:
#
...
aiida-core                1.6.8              pyh6c4a22f_2    conda-forge
...
python                    3.8.13          h582c2e5_0_cpython    conda-forge
...

Running AiiDA

Activating the aiida environment exposes the installed executables, such as verdi (with tab-completion) and aiida-common-workflows:

(aiida) root@instance:/# verdi status
 ✔ config dir:  /root/.aiida
 ✔ profile:     On profile generic
 ✔ repository:  /root/.aiida/repository/generic
 ✔ postgres:    Connected as aiida@localhost:5432
 ✔ rabbitmq:    Connected to RabbitMQ v3.6.10 as amqp://guest:guest@127.0.0.1:5672?heartbeat=600
 ✔ daemon:      Daemon is running as PID 18513 since 2022-07-23 18:40:31

You'll note that the generic profile is already set up, with connections to the running PostgreSQL, RabbitMQ, and AiiDA daemon system services:

(aiida) root@instance:/# systemctl --type=service | grep -E '(rabbitmq|postgres|aiida)'
aiida-daemon@generic.service  loaded active running AiiDA daemon service for profile generic
postgresql@10-main.service    loaded active running PostgreSQL Cluster 10-main
rabbitmq-server.service       loaded active running RabbitMQ Messaging Server
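
Since these are regular systemd units, you can inspect or restart them with systemctl if something looks wrong; for example (using the unit names shown above):

(aiida) root@instance:/# systemctl status aiida-daemon@generic
(aiida) root@instance:/# systemctl restart rabbitmq-server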

AiiDA codes are set up to run simulation code executables:

(aiida) root@instance:/# verdi code list
# List of configured codes:
# (use 'verdi code show CODEID' to see the details)
* pk 1 - abinit.main@local_direct_conda
* pk 2 - cp2k.main@local_direct_conda
* pk 3 - fleur.main@local_direct_conda
* pk 4 - fleur.inpgen@local_direct_conda
* pk 5 - nwchem.main@local_direct_conda
* pk 6 - qe.cp@local_direct_conda
* pk 7 - qe.neb@local_direct_conda
* pk 8 - qe.ph@local_direct_conda
* pk 9 - qe.pp@local_direct_conda
* pk 10 - qe.pw@local_direct_conda
* pk 11 - siesta.main@local_direct_conda
* pk 12 - wannier90.main@local_direct_conda
* pk 13 - yambo.main@local_direct_conda

These are set up to use conda run -n env_name /path/to/executable, so that each executable runs within its own environment.
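
To see how an individual code is configured, or to check that the wrapped executable resolves inside its environment, you can use, for example (taking the qe.pw code from the list above):

(aiida) root@instance:/# verdi code show qe.pw@local_direct_conda
(aiida) root@instance:/# conda run -n qe which pw.x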

Launching Jupyter Lab

Inside the aiida environment, you can launch Jupyter Lab with the jupyter lab command:

(aiida) root@instance:/# jupyter lab

If using the Docker container, you should add the following options:

(aiida) root@instance:/# jupyter lab  --allow-root --ip=0.0.0.0

You can list the running Jupyter servers with:

(aiida) root@instance:/# jupyter server list
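
A running server can also be stopped by port; for example (8888 is Jupyter's default port and may differ on your machine):

(aiida) root@instance:/# jupyter server stop 8888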

Development and testing

This role uses Molecule and Docker for tests.

After installing Docker:

Clone the repository into a folder named marvel-nccr.aiida_cws (the folder must have the same name as the role on Ansible Galaxy):

git clone https://github.com/marvel-nccr/ansible-role-aiida-cws marvel-nccr.aiida_cws
cd marvel-nccr.aiida_cws

Then run:

pip install -r requirements.txt  # Installs molecule
molecule test  # runs tests
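
During development it is often faster to run individual Molecule stages rather than the full test sequence, for example:

molecule converge  # create the container and apply the role
molecule verify    # run the verification checks against the instance
molecule login     # open a shell inside the container
molecule destroy   # tear down the container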

or use tox (see tox.ini):

pip install tox
tox

Code style

The code is formatted and linted with pre-commit.

pip install pre-commit
pre-commit run --all-files
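
To run the hooks automatically on every commit, you can also install them into your clone:

pre-commit install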

Deployment

Deployment to Ansible Galaxy is automated via GitHub Actions. Simply tag a release vX.Y.Z to trigger the CI and release workflow. Note that the release will only complete if the CI tests pass.
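
For example, to tag and push a release from the command line (v1.2.3 is a placeholder version):

git tag v1.2.3
git push origin v1.2.3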

License

MIT

Contact

Please direct inquiries regarding Quantum Mobile and associated Ansible roles to the AiiDA mailing list.

TODO

  • Add "User Guide" inside the build (in desktop folder)

  • move to building docker with tini as PID 1

  • jupyter lab launcher

  • rest api service

  • check everything still works with non-root user install

  • migrate tasks from marvel-nccr.simulationbase (understand hostname.yml, which is non-container only, and clean.yml)

    • double-check when to use apt: clean and apt: upgrade, etc
  • Get https://github.com/quanshengwu/wannier_tools on Conda, to replace marvel-nccr.wannier_tools

  • allow for source install of aiida-core (as previous)

  • output "raw" pseudo-potential files to aiida_data_folder_user

  • running apt: upgrade: true does actually allow for timezone to be set

  • run code tests (how to check success aiidateam/aiida-common-workflows#289?):

    • aiida-common-workflows launch relax abinit -S Si -X abinit.main -n 2
      • although there are issues with abipy, and errors such as "The netcdf library does not support parallel IO" and "nkpt*nsppol (29) is not a multiple of nproc_spkpt (2)"
    • aiida-common-workflows launch relax cp2k -S Si -X cp2k.main -n 1
      • But -n 2 fails ❌
      • quite slow to run as well
    • aiida-common-workflows launch relax fleur -S Si -X fleur.main -n 2
    • aiida-common-workflows launch relax quantum_espresso -S Si -X qe.pw -n 2
    • aiida-common-workflows launch relax siesta -S Si -X siesta.main -n 2
    • aiida-common-workflows launch relax nwchem -S Si -X nwchem.main -n 2
      • error: "The stdout output file was incomplete."
    • awaiting conda packages:
      • aiida-common-workflows launch relax bigdft ... (bigdft conda-forge/staged-recipes#19683) ❓
      • yambo can also be installed, but it is not in the common workflows to test (aiida-yambo) ❓
  • aiida-gaussian and aiida-common-workflows need to update their pymatgen dependency pinning to allow 2022 versions, to be compatible with the latest abipy=0.9; also, abipy should be a direct dependency of aiida-abinit.

  • for common workflow group:

    • maintainership of feedstocks
    • confirm ranges of code versions compatibilities
    • check on use of OMP_NUM_THREADS=1 (and other env vars)
    • get all to ensure they are using the latest pymatgen (2022); see https://pymatgen.org/compatibility.html
    • update to aiida-core v2
