Hyperion Service
----------------

Hyperion extends Artemis with AI-assisted authoring features for programming exercises. It offers
consistency checks for problem statements and exercise artefacts and can rewrite instructions with the help
of generative AI. The functionality is provided entirely by Artemis and Spring AI, so no
EduTelligence service needs to be deployed.

Prerequisites
^^^^^^^^^^^^^

- A running Artemis instance that loads the ``core`` profile.
- Network access to an LLM provider that is supported by Spring AI (for example OpenAI or Azure OpenAI).
- A valid API key for the chosen provider.

Enable the Hyperion module
^^^^^^^^^^^^^^^^^^^^^^^^^^

Hyperion is disabled by default. Activate it by overriding the ``artemis.hyperion.enabled`` property in the
configuration that the server reads on startup (for example ``application-prod.yml``).

.. code:: yaml

   artemis:
     hyperion:
       enabled: true

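In containerized deployments, the same property can also be supplied through an environment variable via
Spring Boot's relaxed binding. A minimal sketch; the variable name below follows the standard
property-to-environment mapping and is an assumption, not something this guide documents explicitly:

.. code:: shell

   # Relaxed binding maps artemis.hyperion.enabled to this variable name;
   # set it in the container environment instead of editing application-prod.yml.
   export ARTEMIS_HYPERION_ENABLED=true
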
Configure Spring AI
^^^^^^^^^^^^^^^^^^^

Hyperion delegates all model interactions to Spring AI. Configure exactly one provider; Artemis currently
ships the Azure OpenAI starter, but classic OpenAI endpoints work as well when configured through Spring AI.

OpenAI
""""""

.. code:: yaml

   spring:
     ai:
       azure:
         openai:
           open-ai-api-key: <openai-api-key> # automatically sets the Azure endpoint to https://api.openai.com/v1
           chat:
             options:
               deployment-name: gpt-5-mini # or another reasonably capable model
               temperature: 1.0 # required to be 1.0 for gpt-5

Azure OpenAI
""""""""""""

.. code:: yaml

   spring:
     ai:
       azure:
         openai:
           api-key: <azure-openai-api-key>
           endpoint: https://<your-resource-name>.openai.azure.com
           chat:
             options:
               deployment-name: <azure-deployment> # gpt-5-mini deployment recommended
               temperature: 1.0 # required to be 1.0 for gpt-5

Verifying the integration
^^^^^^^^^^^^^^^^^^^^^^^^^

1. Restart the Artemis server and confirm that ``hyperion`` appears in ``activeModuleFeatures`` on
   ``/management/info``.
2. Log in as an instructor and open the problem statement editor of a programming exercise. New Hyperion
   actions (rewrite and consistency check) appear in the markdown editor toolbar.
3. Run a consistency check to ensure the LLM call succeeds. If the request fails, inspect the server logs
   for ``Hyperion`` entries; misconfigured credentials and missing network egress are the most common causes.

Operational considerations
^^^^^^^^^^^^^^^^^^^^^^^^^^

- **Cost control:** Define usage policies and rate limits with your provider. Hyperion requests can process
  the full problem statement, so costs scale with exercise size.
- **Data protection:** Model providers receive exercise content. Obtain consent and align with institutional
  policies before enabling Hyperion in production.

.. note::
   This documentation has been migrated to the new documentation system.
   Please refer to the `Hyperion Setup Guide <https://ls1intum.github.io/Artemis/admin/hyperion>`_ in the new documentation.