This repository contains the Ansible configuration for the Artemis Ansible Collection, which is responsible for setting up the TUM Artemis production, staging, and test environments.
If you need a configuration change for one of the environments, follow these steps:
- **Check for an existing configuration option**
  First, verify whether the required configuration option already exists in the Ansible collection.
- **Modify the Ansible collection if needed**
  If no existing configuration option is available, update the Ansible collection first and submit a pull request to that repository.
- **Apply the configuration change**
  Once the configuration option is available, update the relevant `host_vars` or `group_vars` files in this repository and open a pull request.
After review, we will apply the new configuration to the respective environment.
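As an illustration, such a configuration change is typically a small edit to a variables file. The file path below is hypothetical; `artemis_version` is one variable this repository actually passes to the playbooks:

```yaml
# group_vars/artemis_staging.yml (illustrative path — use the actual file for your environment)
artemis_version: "8.0.0"
```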
Note: This section is relevant only for members of the admin team who deploy changes to the Artemis environments.
It is recommended to install Ansible and ansible-lint inside a virtual environment:
```sh
python3 -m venv venv
source venv/bin/activate  # Adjust command if using fish (.fish) or csh (.csh)
pip3 install -r requirements.txt
ansible-galaxy collection install -r requirements.yml --force
ansible-galaxy install -r requirements.yml --force
ansible-galaxy install -r ~/.ansible/collections/ansible_collections/ls1intum/artemis/requirements.yml
```

To update the Ansible collection, run the last four commands again.
To ensure proper SSH access, append the following lines to the very bottom of your `~/.ssh/config` file:

```
Host *
    Hostname %h
    User <TUMID>
```
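To verify which settings OpenSSH will actually apply to a given host, you can use the standard `-G` option, which prints the effective configuration without connecting (the host name below is a placeholder):

```shell
# Dump the resolved SSH configuration for a host; no connection is made.
# "example-host" is a placeholder — substitute one of the Artemis servers.
ssh -G example-host | grep -E '^(user|hostname) '
```

This is a quick way to confirm that the `User <TUMID>` entry is picked up before attempting a real connection.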
Note: This section is relevant only for members of the admin team who deploy changes to the Artemis environments.
```sh
source venv/bin/activate  # Adjust command if using fish (.fish) or csh (.csh)
```

Once activated, you can run Ansible commands.
We manage secrets using Hashicorp Vault. The base configuration is already set up in this repository's Ansible configuration.
Hashicorp provides comprehensive documentation and tutorials on how to use Vault.
- The Vault CLI must be installed.
- You must be connected to the AET VPN and have admin permissions to access Vault.
```sh
source set_vault.sh  # Use set_vault.fish for fish shell
export VAULT_ADDR="https://vault.aet.cit.tum.de"
vault login --method=oidc role=itg-artemis-admin
```

After login, a token will be printed in the command output. Export it for Ansible:

```sh
export VAULT_TOKEN=hvs.<token>
```

To lint the configuration, run `ansible-lint`.

To update the Artemis version on the nodes, run:

```sh
ansible-playbook playbooks/<server>/nodes-version-update.yml -e artemis_version=<version>  # For test & staging servers
ansible-playbook playbooks/artemis-production/production-nodes-version-update.yml -e artemis_version=<version>  # For production
```

The `<version>` variable can be set to a specific GitHub release version (e.g., `8.0.0`) or to an absolute path to a local Artemis executable (e.g., `/home/user/Artemis.war`).
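For instance, deploying release `8.0.0` to a test server could look like this (the playbook directory name `artemis-test1` is illustrative; use the directory for your actual server):

```sh
ansible-playbook playbooks/artemis-test1/nodes-version-update.yml -e artemis_version=8.0.0
```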
Modify the necessary variables in `host_vars` or `group_vars`, then apply the changes.
The test server host configs are split up into different files (e.g., integrated code lifecycle, LocalVC/Jenkins, and common config). These configurations are managed using Ansible host groups. Each test server is assigned multiple host groups based on its setup. The mappings are defined in the hosts file.
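As an illustration of the host-group mapping (the group and host names below are invented for this sketch; the real ones are defined in the repository's `hosts` file), an inventory entry could look like:

```ini
; Hypothetical excerpt — see the repository's hosts file for the actual mapping
[artemis_test_icl]
artemis-test1.example.de

[artemis_test_jenkins]
artemis-test2.example.de

[artemis_test_common:children]
artemis_test_icl
artemis_test_jenkins
```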
The `artemis_prod_like_*.yml` files contain shared configurations for all production-like servers (production and staging servers).
The `artemis_production*.yml`, `artemis_staging1*.yml`, and `artemis_staging_localci*.yml` files contain configurations specific to the respective instance.
For native servers:

```sh
ansible-playbook playbooks/<server>/nodes-update-config.yml --diff --check  # For native staging servers
ansible-playbook playbooks/artemis-production/production-nodes-update-config.yml --diff --check  # For production servers
```

After review, run the same command without `--check` to apply the changes.
For Docker-based test servers:

```sh
ansible-playbook playbooks/artemis-tests/artemis-tests.yml --diff --check
```

After review, run the same command without `--check` to apply the changes.
Warning: This will restart and redeploy the Docker containers.
The registry is accessible within the AET VPN. Use the domain of the registry host to open it in a browser. Credentials are stored in Vault.
Databases are accessible only from the database hosts or the WireGuard network. To connect, set up an SSH tunnel.
The database passwords are stored in Vault; the username is configured in the `group_vars` files.
We recommend using a tool like DataGrip to connect to the database. Otherwise, you can manually create an SSH tunnel.
```sh
ssh -L 36306:127.0.0.1:3306 <database-host>
```

Once the tunnel is active, the database is accessible on port 36306 on your local machine.
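Assuming a MySQL-compatible database and an active tunnel as described above, you could then connect with the standard `mysql` client (the `<username>` placeholder comes from the `group_vars` files, and the password from Vault):

```sh
mysql -h 127.0.0.1 -P 36306 -u <username> -p
```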