
Conversation

@aleramos119 (Contributor) commented Oct 21, 2025

Description

Added characterizer module for noise parameter optimization in open quantum systems

Checklist:

  • The pull request only contains commits that are focused and relevant to this change.
  • I have added appropriate tests that cover the new/changed functionality.
  • I have updated the documentation to reflect these changes.
  • I have added entries to the changelog for any noteworthy additions, changes, fixes, or removals.
  • I have added migration instructions to the upgrade guide (if needed).
  • The changes follow the project's style guidelines and introduce no new warnings.
  • The changes are fully tested and pass the CI checks.
  • I have reviewed my own code changes.

Summary by CodeRabbit

  • New Features

    • CompactNoiseModel for concise multi-site noise representation
    • Zero gate and new gate aliases (raising, lowering, pauli_x, pauli_y, pauli_z)
    • Noise-characterization suite: propagator with gradients, loss/optimization utilities, trapezoidal integrator, Adam optimizer with checkpoint/restart, and Characterizer orchestration
  • Tests

    • Comprehensive unit tests covering compact noise models, zero gate, propagation, optimization, checkpointing, and restart behavior
  • Documentation

    • Example walkthrough demonstrating the characterization and optimization workflow
  • Chores

    • Removed pytest auto-detection parallelism flag from test configuration

MaxFroehlich1410 and others added 30 commits February 19, 2025 15:38
force the parameters to be positives
force the parameters to be positive
d_On_d_gk has shape (n_jump_site, n_obs_site, L, n_t)
loss_function changed accordingly
aleramos119 and others added 6 commits October 21, 2025 22:43
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Alejandro <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Alejandro <[email protected]>
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 7

♻️ Duplicate comments (13)
src/mqt/yaqs/core/libraries/gate_library.py (1)

1756-1761: Duplicate: Documentation still needed for aliases.

As noted in the previous review, these aliases should be documented in the GateLibrary docstring to help users understand the multiple names available for the same gates. The inline comment on line 1756 is helpful but insufficient for API documentation.

src/mqt/yaqs/noise_char/optimization.py (11)

14-14: Remove unused noqa on pickle import.

Ruff flags unused suppression; keep S301 only at load site.

-import pickle  # noqa: S403
+import pickle

Verification script to confirm no other S403 suppressions remain:

#!/bin/bash
rg -n "noqa:\s*S403" -S

78-79: Vectorize cumulative trapezoid for speed and clarity.

-    for i in range(1, len(y)):
-        integral[i] = integral[i - 1] + 0.5 * (x[i] - x[i - 1]) * (y[i] + y[i - 1])
+    dx = np.diff(x)
+    avg_y = 0.5 * (y[1:] + y[:-1])
+    integral[1:] = np.cumsum(dx * avg_y, dtype=np.float64)
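
For a quick sanity check of the vectorized form, here is a standalone sketch (plain NumPy/SciPy, not the PR's `trapezoidal` function; names are illustrative) comparing it against SciPy's reference implementation:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid


def cumtrapz_vectorized(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Cumulative trapezoidal integral of y over x, with integral[0] == 0."""
    integral = np.zeros_like(y, dtype=np.float64)
    dx = np.diff(x)
    avg_y = 0.5 * (y[1:] + y[:-1])
    integral[1:] = np.cumsum(dx * avg_y, dtype=np.float64)
    return integral


x = np.linspace(0.0, 3.0, 31)
y = np.sin(x)
# Matches SciPy's cumulative_trapezoid up to floating-point error.
assert np.allclose(cumtrapz_vectorized(x, y), cumulative_trapezoid(y, x, initial=0.0))
```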

204-217: reset() leaves stale grad_history. Clear it.

     def reset(self) -> None:
@@
         self.x_avg_history = []
         self.diff_avg_history = []
+        self.grad_history = []

303-321: Fix write_traj docstring to match signature and output.

-    """Saves the optimized trajectory of expectation values to a text file.
-
-    This method reshapes the `exp_vals_traj` array, concatenates the time array `self.t` as the first row,
-    and writes the resulting data to a file named `opt_traj_{self.n_eval}.txt` in the working directory.
-    The file includes a header with time and observable labels.
-    The output file format:
-        - Each column corresponds to a time point or an observable at a specific site.
-        - The first column is time (`t`).
-        - Subsequent columns are labeled as `x0`, `y0`, `z0`, ..., up to the number of observed
-        sites and system size.
-    Attributes used:
-        exp_vals_traj (np.ndarray): Array of expectation values with shape (n_obs_site, sites, n_t).
-        t (np.ndarray): Array of time points.
-        work_dir (str): Directory where the output file will be saved.
-        n_eval (int): Evaluation index used in the output filename.
-    File saved:
-        {work_dir}/opt_traj_{n_eval}.txt.
-    """
+    """Save trajectories to a text file.
+
+    Prepends the time array `self.t` as the first row to `obs_array` (shape: (n_obs, n_t))
+    and writes the transposed data to `output_file`. Header columns: "t", then "obs_0..obs_{n_obs-1}".
+    """

360-366: __call__ docstring lists 4 return values; code returns 3. Align docs.

-        Returns:
-            tuple:
-                - f (float): The value of the objective function (sum of squared differences).
-                - grad (np.ndarray): The gradient of the objective function with respect to gamma parameters.
-                - sim_time (float): The time taken to run the simulation (in seconds).
-                - avg_min_max_traj_time (Any): Average, minimum and maximum trajectory running times.
+        Returns:
+            tuple:
+                - f (float): Objective value (sum of squared differences).
+                - grad (np.ndarray): Gradient w.r.t. gamma parameters.
+                - sim_time (float): Simulation time in seconds.

406-407: Ensure grad length equals self.d.

-        grad = np.bincount(self.traj_gradients.compact_noise_model.index_list, weights=grad_vec)
+        grad = np.bincount(
+            self.traj_gradients.compact_noise_model.index_list,
+            weights=grad_vec,
+            minlength=self.d,
+        )
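
A small standalone illustration (hypothetical sizes) of why `minlength` matters: without it, `np.bincount` only returns entries up to the largest index that actually occurs, so trailing parameters that receive no contribution would be silently dropped from the gradient vector:

```python
import numpy as np

# Suppose d = 4 compact parameters, but the per-process contributions
# only reference parameter indices 0, 0, 1, and 2.
index_list = np.array([0, 0, 1, 2])
grad_vec = np.array([0.1, 0.2, 0.3, 0.4])

print(np.bincount(index_list, weights=grad_vec))                # [0.3 0.3 0.4]      -> length 3
print(np.bincount(index_list, weights=grad_vec, minlength=4))   # [0.3 0.3 0.4 0.0]  -> length d
```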

436-461: Docstring types/names drift in adam_optimizer.

-        f (loss_class): An instance of a loss function class.
+        f (LossClass): An instance of the loss function class.
@@
-        restart_file (str, optional): Path to a specific checkpoint file to restart from.
+        restart_file (Path | None, optional): Path to a specific checkpoint file to restart from.
@@
-            np.ndarray,  # f.exp_vals_traj: Optimized trajectory of expectation values.
+            np.ndarray,  # f.obs_array: Optimized trajectory of expectation values.

487-504: Unsafe unpickling from arbitrary paths. Constrain and validate restart files.

-    if restart and restart_file is not None:
-        if not restart_file.exists():
+    if restart and restart_file is not None:
+        # Constrain to the working directory and expected naming to reduce RCE risk.
+        base_dir = Path(restart_dir).resolve()
+        try:
+            restart_file = restart_file.resolve(strict=True)
+        except FileNotFoundError:
             msg = "Restart file not found."
             raise ValueError(msg)
+        if not restart_file.is_file():
+            raise ValueError("Restart file must be a regular file.")
+        if not restart_file.is_relative_to(base_dir):
+            raise ValueError("Restart file must reside inside the working directory.")
+        if not (restart_file.name.startswith("restart_step_") and restart_file.suffix == ".pkl"):
+            raise ValueError("Invalid restart checkpoint filename.")
         # We only load restart files we created ourselves; no untrusted input here.
         with restart_file.open("rb") as handle:
             saved = pickle.load(handle)  # noqa: S301

504-511: Overbroad deletion: removes every .pkl in work dir. Restrict to our checkpoints.

-        # Remove all .pkl files in the folder
+        # Remove only our checkpoint files (restart_step_*.pkl) in the working directory
         restart_path = Path(restart_dir)
         for file_path in restart_path.iterdir():
-            if file_path.suffix == ".pkl":
-                with contextlib.suppress(Exception):
-                    file_path.unlink()
+            name = file_path.name
+            if file_path.is_file() and name.startswith("restart_step_") and file_path.suffix == ".pkl":
+                with contextlib.suppress(OSError):
+                    file_path.unlink()

520-523: Perf log header/data mismatch and formatting.

-        with perf_path.open("w", encoding="utf-8") as pf:
-            pf.write("# iter    opt_step_time    simulation_time    avg_traj_time    min_traj_time    max_traj_time\n")
+        with perf_path.open("w", encoding="utf-8") as pf:
+            pf.write("# iter    opt_step_time    simulation_time\n")
@@
-        with perf_path.open("a", encoding="utf-8") as pf:
-            pf.write(f"  {i}    {iter_time}    {sim_time}  \n")
+        with perf_path.open("a", encoding="utf-8") as pf:
+            pf.write(f"{i:5d}    {iter_time:.6f}    {sim_time:.6f}\n")

Also applies to: 576-577


545-547: Use in-place np.clip instead of two passes.

-        # Ensure non-negativity for the parameters
-        x[x < 0] = 0
-        x[x > 1] = 1
+        # Clip parameters to [0, 1]
+        np.clip(x, 0.0, 1.0, out=x)
tests/noise_char/test_optimization.py (1)

14-14: Remove unused noqa on pickle import in tests.

-import pickle  # noqa: S403
+import pickle
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9a1d51f and c97c9e7.

📒 Files selected for processing (4)
  • src/mqt/yaqs/core/libraries/gate_library.py (3 hunks)
  • src/mqt/yaqs/noise_char/optimization.py (1 hunks)
  • tests/noise_char/test_optimization.py (1 hunks)
  • tests/noise_char/test_propagation.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
src/mqt/yaqs/noise_char/optimization.py (3)
src/mqt/yaqs/core/data_structures/noise_model.py (1)
  • CompactNoiseModel (170-254)
src/mqt/yaqs/core/data_structures/simulation_parameters.py (1)
  • Observable (41-129)
src/mqt/yaqs/noise_char/propagation.py (3)
  • PropagatorWithGradients (44-347)
  • set_observable_list (198-232)
  • run (234-347)
tests/noise_char/test_optimization.py (6)
src/mqt/yaqs/core/data_structures/networks.py (3)
  • MPO (958-1379)
  • MPS (36-955)
  • init_ising (991-1044)
src/mqt/yaqs/core/data_structures/noise_model.py (1)
  • CompactNoiseModel (170-254)
src/mqt/yaqs/core/data_structures/simulation_parameters.py (2)
  • AnalogSimParams (132-277)
  • Observable (41-129)
src/mqt/yaqs/core/libraries/gate_library.py (5)
  • X (609-628)
  • Y (631-650)
  • Z (653-672)
  • x (286-292)
  • y (295-301)
src/mqt/yaqs/noise_char/optimization.py (10)
  • LossClass (84-412)
  • trapezoidal (30-81)
  • reset (204-216)
  • set_history (218-246)
  • compute_avg (143-157)
  • compute_diff_avg (159-167)
  • write_traj (303-327)
  • set_work_dir (248-263)
  • write_to_file (265-301)
  • adam_optimizer (415-588)
src/mqt/yaqs/noise_char/propagation.py (3)
  • PropagatorWithGradients (44-347)
  • set_observable_list (198-232)
  • run (234-347)
🪛 Ruff (0.14.1)
src/mqt/yaqs/noise_char/optimization.py

14-14: Unused noqa directive (non-enabled: S403)

Remove unused noqa directive

(RUF100)

tests/noise_char/test_optimization.py

14-14: Unused noqa directive (non-enabled: S403)

Remove unused noqa directive

(RUF100)

⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: 🐍 Test (windows-2022) / 🐍 windows-2022
  • GitHub Check: 🐍 Test (macos-15-intel) / 🐍 macos-15-intel
🔇 Additional comments (6)
src/mqt/yaqs/core/libraries/gate_library.py (3)

352-359: LGTM!

The zero() classmethod follows the established pattern for gate factory methods and is properly documented.


773-793: LGTM! Zero gate added for noise modeling.

The Zero gate implementation is clean and follows the established pattern. Note that this is a non-unitary operator (all-zeros matrix) intended for noise characterization and Lindblad operator construction rather than as a physical quantum gate. This is appropriate for the PR's purpose of noise parameter optimization in open quantum systems.
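
As a standalone sanity check (plain NumPy, independent of the yaqs API): an all-zeros jump operator contributes nothing to a Lindblad dissipator, which is what makes it a safe placeholder during noise characterization:

```python
import numpy as np


def dissipator(jump_op: np.ndarray, rho: np.ndarray) -> np.ndarray:
    """Lindblad dissipator D[L](rho) = L rho L^dag - 1/2 {L^dag L, rho}."""
    ld = jump_op.conj().T
    return jump_op @ rho @ ld - 0.5 * (ld @ jump_op @ rho + rho @ ld @ jump_op)


rho = np.array([[0.6, 0.2j], [-0.2j, 0.4]], dtype=np.complex128)  # an example density matrix
zero_op = np.zeros((2, 2), dtype=np.complex128)                   # matrix of the Zero gate

assert np.allclose(dissipator(zero_op, rho), 0.0)                 # zero operator -> no dissipation
```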


1754-1754: LGTM!

The zero attribute properly exposes the Zero class in the GateLibrary, consistent with other gate registrations.

tests/noise_char/test_propagation.py (2)

159-182: LGTM! Well-structured propagation test.

The test correctly validates that PropagatorWithGradients executes successfully and returns arrays with expected shapes. The assertions comprehensively check both types and dimensions.


184-248: LGTM! Comprehensive error validation testing.

The test thoroughly validates error handling across multiple failure scenarios: invalid site indices for noise and observables, missing preconditions, and runtime model mismatches. Each assertion uses proper regex escaping and clear error messages.

src/mqt/yaqs/noise_char/optimization.py (1)

551-553: Name clarity: x_loss actually stores pre-update x.

-            "x_loss": x + update,
+            "x_before_update": x + update,

Optional follow-up: update any consumer accordingly (none in current code).

Likely an incorrect or invalid review comment.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

♻️ Duplicate comments (1)
tests/noise_char/test_optimization.py (1)

13-13: Remove unused noqa directive on pickle import.

Ruff flags this as an unused noqa; tests can import pickle without suppression.

-import pickle  # noqa: S403
+import pickle
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 770be2f and e063941.

📒 Files selected for processing (1)
  • tests/noise_char/test_optimization.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/noise_char/test_optimization.py (6)
src/mqt/yaqs/core/data_structures/networks.py (3)
  • MPO (958-1379)
  • MPS (36-955)
  • init_ising (991-1044)
src/mqt/yaqs/core/data_structures/noise_model.py (1)
  • CompactNoiseModel (170-253)
src/mqt/yaqs/core/data_structures/simulation_parameters.py (2)
  • AnalogSimParams (132-277)
  • Observable (41-129)
src/mqt/yaqs/core/libraries/gate_library.py (5)
  • X (609-628)
  • Y (631-650)
  • Z (653-672)
  • x (286-292)
  • y (295-301)
src/mqt/yaqs/noise_char/optimization.py (10)
  • LossClass (84-412)
  • trapezoidal (30-81)
  • reset (204-216)
  • set_history (218-246)
  • compute_avg (143-157)
  • compute_diff_avg (159-167)
  • write_traj (303-327)
  • set_work_dir (248-263)
  • write_to_file (265-301)
  • adam_optimizer (415-588)
src/mqt/yaqs/noise_char/propagation.py (3)
  • PropagatorWithGradients (44-347)
  • set_observable_list (198-232)
  • run (234-347)
🪛 Ruff (0.14.1)
tests/noise_char/test_optimization.py

13-13: Unused noqa directive (non-enabled: S403)

Remove unused noqa directive

(RUF100)

Moved PropagatorWithGradients and LossClass initialization outside Characterizer initialization.
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

♻️ Duplicate comments (1)
src/mqt/yaqs/noise_char/characterizer.py (1)

284-284: Guard against empty x_avg_history before indexing

Indexing self.x_avg_history[-1] can raise IndexError when optimization halts before any average is computed (e.g., max_iterations=0, early exit). Add a defensive check and surface a clear error.

-        self.optimal_model = self.loss.x_to_noise_model(self.x_avg_history[-1])
+        if not self.x_avg_history:
+            raise RuntimeError(
+                "Optimization produced no averaged parameters; cannot reconstruct optimal model."
+            )
+        self.optimal_model = self.loss.x_to_noise_model(self.x_avg_history[-1])
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e063941 and c9c8ce6.

📒 Files selected for processing (1)
  • src/mqt/yaqs/noise_char/characterizer.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/mqt/yaqs/noise_char/characterizer.py (3)
src/mqt/yaqs/noise_char/optimization.py (3)
  • adam_optimizer (415-588)
  • LossClass (84-412)
  • x_to_noise_model (329-343)
src/mqt/yaqs/core/data_structures/noise_model.py (1)
  • CompactNoiseModel (170-253)
src/mqt/yaqs/noise_char/propagation.py (1)
  • PropagatorWithGradients (44-347)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: 🐍 Test (windows-2022) / 🐍 windows-2022
  • GitHub Check: 🐍 Test (macos-15-intel) / 🐍 macos-15-intel
🔇 Additional comments (1)
src/mqt/yaqs/noise_char/characterizer.py (1)

160-188: Verify object consistency between loss and traj_gradients

To avoid subtle mismatches, ensure loss.traj_gradients refers to the same configured propagator (or assert compatibility).

Would you add a brief assertion here?

         self.init_guess = copy.deepcopy(init_guess)
         self.traj_gradients = traj_gradients
 
         self.loss = loss
+        # Optional safety: ensure both objects are wired to the same propagator.
+        try:
+            assert getattr(self.loss, "traj_gradients", None) is not None
+            assert self.loss.traj_gradients.compact_noise_model.compact_processes == \
+                   self.traj_gradients.compact_noise_model.compact_processes
+        except AssertionError:
+            raise ValueError("loss.traj_gradients is not compatible with traj_gradients passed to Characterizer.")

If you prefer not to assert, please confirm tests cover mismatched wiring scenarios.

Comment on lines 140 to 159
loss: LossClass

traj_gradients: PropagatorWithGradients

init_guess: CompactNoiseModel

init_x: np.ndarray

loss_history: list[float]

x_history: list[np.ndarray]

x_avg_history: list[np.ndarray]

times: np.ndarray

observable_traj: np.ndarray

optimal_model: CompactNoiseModel


🧹 Nitpick | 🔵 Trivial

Initialize optional fields or annotate as Optional to reflect runtime state

These attributes are only set after optimization; initialize them and widen types to Optional for correctness and IDEs.

-    init_x: np.ndarray
-    loss_history: list[float]
-    x_history: list[np.ndarray]
-    x_avg_history: list[np.ndarray]
-    times: np.ndarray
-    observable_traj: np.ndarray
-    optimal_model: CompactNoiseModel
+    init_x: np.ndarray
+    loss_history: list[float]
+    x_history: list[np.ndarray]
+    x_avg_history: list[np.ndarray]
+    times: np.ndarray | None
+    observable_traj: np.ndarray | None
+    optimal_model: CompactNoiseModel | None

And in __init__ (after setting self.init_x):

+        self.loss_history = []
+        self.x_history = []
+        self.x_avg_history = []
+        self.times = None
+        self.observable_traj = None
+        self.optimal_model = None

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/mqt/yaqs/noise_char/characterizer.py around lines 140 to 159, the
attributes declared for post-optimization state (loss_history, x_history,
x_avg_history, times, observable_traj, optimal_model, traj_gradients, loss,
init_guess) are only set after running optimization — change their type
annotations to Optional[...] (e.g. Optional[list[float]], Optional[np.ndarray],
Optional[CompactNoiseModel], etc.) and initialize them to None in __init__
immediately after setting self.init_x; also add/import typing.Optional at the
top. This ensures the fields reflect runtime state and prevents IDE/type-checker
errors.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/noise_char/test_characterizer.py (1)

104-158: Fix multiple docstring inaccuracies.

The docstring contains several errors:

  1. tmp_path is used: Lines 129-130 and 150-151 claim tmp_path is "not used", but line 194 passes it to LossClass as working_dir.
  2. Wrong parameter order in example: Line 156 shows create_instances(tmp_path, test) but the signature is create_instances(test: Parameters, tmp_path: Path).
  3. Return type mismatch: Lines 134-144 document a 6-tuple return, but line 196 returns 7 items (including loss), and line 106's type annotation correctly shows 7 items.
  4. Example unpacks wrong number of items: Line 156 unpacks 6 items but should unpack 7.

Apply this diff to correct the docstring:

     tmp_path : Path
-        Temporary directory path for any required output (not used in current implementation).
+        Temporary directory path for LossClass working directory.
 
     Returns:
     -------
-    tuple[MPO, MPS, list[Observable], AnalogSimParams, CompactNoiseModel, PropagatorWithGradients]
-        A 6-tuple containing:
+    tuple[MPO, MPS, list[Observable], AnalogSimParams, CompactNoiseModel, PropagatorWithGradients, LossClass]
+        A 7-tuple containing:
         - h_0 (MPO): Ising Hamiltonian MPO initialized with (sites, j, g).
         - init_state (MPS): initial MPS in the "zeros" product state.
         - obs_list (list[Observable]): list of single-site X, Y and Z observables for all sites.
         - sim_params (AnalogSimParams): simulation parameters assembled from `test` and `obs_list`.
         - ref_noise_model (CompactNoiseModel): compact noise model with "lowering" and "pauli_z"
           channels applied to all sites, using strengths `gamma_rel` and `gamma_deph`.
         - propagator (PropagatorWithGradients): propagator configured with the Hamiltonian,
           noise model and initial state; observables are set and the propagation has been run
           with `ref_noise_model` prior to return.
+        - loss (LossClass): loss function configured with the reference trajectory and propagator.
 
     Notes:
     -----
     - The function has the side effect of running the propagator (propagator.run(...)),
       so returned `propagator` already contains results of the propagation.
-    - `tmp_path` is present to match common pytest fixture signatures but is not used
-      in the current implementation; future versions may use it for temporary output.
 
     Examples:
     --------
     >>> # Typical usage in a test:
-    >>> h_0, init_state, obs_list, sim_params, ref_noise_model, propagator = create_instances(tmp_path, test)
+    >>> h_0, init_state, obs_list, sim_params, ref_noise_model, propagator, loss = create_instances(test, tmp_path)
     >>> # After this call, `propagator` has been run and the constructed objects are available for assertions.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c9c8ce6 and b66ad6f.

📒 Files selected for processing (2)
  • docs/examples/characterizer_example.md (1 hunks)
  • tests/noise_char/test_characterizer.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/noise_char/test_characterizer.py (6)
src/mqt/yaqs/core/data_structures/networks.py (3)
  • MPO (958-1379)
  • MPS (36-955)
  • init_ising (991-1044)
src/mqt/yaqs/core/data_structures/noise_model.py (1)
  • CompactNoiseModel (170-253)
src/mqt/yaqs/core/data_structures/simulation_parameters.py (2)
  • AnalogSimParams (132-277)
  • Observable (41-129)
src/mqt/yaqs/core/libraries/gate_library.py (3)
  • X (609-628)
  • Y (631-650)
  • Z (653-672)
src/mqt/yaqs/noise_char/optimization.py (1)
  • LossClass (84-412)
src/mqt/yaqs/noise_char/propagation.py (3)
  • PropagatorWithGradients (44-347)
  • set_observable_list (198-232)
  • run (234-347)
🪛 markdownlint-cli2 (0.18.1)
docs/examples/characterizer_example.md

117-117: Files should end with a single newline character

(MD047, single-trailing-newline)

🔇 Additional comments (5)
docs/examples/characterizer_example.md (1)

1-14: LGTM!

The documentation header clearly describes the example and provides helpful context about requirements and usage.

tests/noise_char/test_characterizer.py (4)

1-28: LGTM!

The copyright header and imports are well-organized and appropriate for the test module.


30-102: LGTM!

The Parameters class provides a clean container for test configuration with sensible defaults for lightweight testing.


159-196: LGTM!

The implementation correctly constructs all necessary test objects and properly initializes the propagator with a reference trajectory before returning.


199-218: LGTM!

The test properly verifies Characterizer initialization and smoke-tests the optimization workflow. Using max_iterations=1 is appropriate for a unit test focused on initialization and basic functionality.

Comment on lines 15 to 139
```python
import numpy as np
from pathlib import Path

from mqt.yaqs.noise_char.propagation import PropagatorWithGradients
from mqt.yaqs.noise_char.optimization import LossClass
from mqt.yaqs.noise_char.characterizer import Characterizer

from mqt.yaqs.core.data_structures.networks import MPO, MPS
from mqt.yaqs.core.data_structures.noise_model import CompactNoiseModel
from mqt.yaqs.core.data_structures.simulation_parameters import (
    AnalogSimParams,
    Observable,
)
from mqt.yaqs.core.libraries.gate_library import X, Y, Z, Create, Destroy


if __name__ == "__main__":
    work_dir = "test/scikit_characterizer/"
    work_dir_path = Path(work_dir)
    work_dir_path.mkdir(parents=True, exist_ok=True)

    # Defining Hamiltonian and observable list
    L = 2
    J = 1
    g = 0.5

    H_0 = MPO()
    H_0.init_ising(L, J, g)

    # Define the initial state
    init_state = MPS(L, state="zeros")

    obs_list = [Observable(X(), site) for site in range(L)]

    noise_operator = "pauli_y"

    # Defining simulation parameters
    T = 3
    dt = 0.1
    N = 1000
    max_bond_dim = 8
    threshold = 1e-6
    order = 1

    sim_params = AnalogSimParams(
        observables=obs_list,
        elapsed_time=T,
        dt=dt,
        num_traj=N,
        max_bond_dim=max_bond_dim,
        threshold=threshold,
        order=order,
        sample_timesteps=True,
    )

    # Defining reference noise model and reference trajectory
    gamma_reference = 0.1

    ref_noise_model = CompactNoiseModel(
        [
            {
                "name": noise_operator,
                "sites": [i for i in range(L)],
                "strength": gamma_reference,
            }
        ]
    )

    # Write reference gammas to file
    np.savetxt(
        work_dir + "gammas.txt", ref_noise_model.strength_list, header="##", fmt="%.6f"
    )

    propagator = PropagatorWithGradients(
        sim_params=sim_params,
        hamiltonian=H_0,
        compact_noise_model=ref_noise_model,
        init_state=init_state,
    )

    propagator.set_observable_list(obs_list)

    print("Computing reference trajectory ... ")
    propagator.run(ref_noise_model)
    ref_traj = propagator.obs_traj
    print("Reference trajectory computed.")

    # Optimizing the model
    gamma_guess = 0.4

    sim_params.num_traj = 300  # Reducing the number of trajectories for the optimization

    guess_noise_model = CompactNoiseModel(
        [
            {
                "name": noise_operator,
                "sites": [i for i in range(L)],
                "strength": gamma_guess,
            }
        ]
    )

    opt_propagator = PropagatorWithGradients(
        sim_params=sim_params,
        hamiltonian=H_0,
        compact_noise_model=guess_noise_model,
        init_state=init_state,
    )

    loss = LossClass(
        ref_traj=ref_traj,
        traj_gradients=opt_propagator,
        working_dir=work_dir,
        print_to_file=True,
    )

    characterizer = Characterizer(
        traj_gradients=opt_propagator, init_guess=guess_noise_model, loss=loss
    )

    print("Optimizing ... ")
    characterizer.adam_optimize(max_iterations=50)
    print("Optimization completed.")
```

⚠️ Potential issue | 🟡 Minor

Add trailing newline at end of file.

The example code is comprehensive and demonstrates the characterizer workflow correctly. However, the file should end with a newline character after the closing triple backticks for proper Markdown formatting.

🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

117-117: Files should end with a single newline character

(MD047, single-trailing-newline)

🤖 Prompt for AI Agents
In docs/examples/characterizer_example.md around lines 15 to 139, the file is
missing a trailing newline after the closing triple backticks; add a single
newline character at the end of the file (i.e., ensure the file ends with "\n"
immediately after the final ``` line) so the markdown ends with a proper
newline.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

♻️ Duplicate comments (4)
src/mqt/yaqs/noise_char/characterizer.py (3)

17-19: Import Path at runtime (not only under TYPE_CHECKING).

You use Path in the public signature and add runtime checks below; import it unconditionally. Also remove the Path import under the TYPE_CHECKING block to avoid duplication.

 from __future__ import annotations
 
 import copy
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING
+from pathlib import Path
@@
-if TYPE_CHECKING:
-    from pathlib import Path
+if TYPE_CHECKING:
     import numpy as np

166-179: Add hyperparameter validation and constrain restart_file to a trusted .pkl under work_dir.

Preflight checks prevent silent misconfigurations and avoid unsafe pickle loads from arbitrary paths. Also raise FileNotFoundError early to match the docstring.

     ) -> None:
         """Run an ADAM-based optimization to fit the noise model parameters.
@@
-        """
-        self.loss_history, self.x_history, self.x_avg_history, self.times, self.observable_traj = adam_optimizer(
+        """
+        # Basic validation
+        if alpha <= 0:
+            raise ValueError("alpha must be > 0.")
+        if max_iterations <= 0:
+            raise ValueError("max_iterations must be > 0.")
+        if threshold < 0:
+            raise ValueError("threshold must be >= 0.")
+        if max_n_convergence <= 0:
+            raise ValueError("max_n_convergence must be > 0.")
+        if tolerance < 0:
+            raise ValueError("tolerance must be >= 0.")
+        if not (0 < beta1 < 1):
+            raise ValueError("beta1 must be in (0, 1).")
+        if not (0 < beta2 < 1):
+            raise ValueError("beta2 must be in (0, 1).")
+        if epsilon <= 0:
+            raise ValueError("epsilon must be > 0.")
+
+        # Constrain restart file to the optimizer's working directory and .pkl suffix
+        if restart and restart_file is not None:
+            rf = Path(restart_file).expanduser().resolve()
+            wd = Path(self.loss.work_dir).expanduser().resolve()
+            try:
+                rf.relative_to(wd)
+            except ValueError as exc:
+                raise ValueError(
+                    f"restart_file must reside under the working directory {wd}."
+                ) from exc
+            if rf.suffix != ".pkl":
+                raise ValueError("restart_file must have .pkl suffix.")
+            if not rf.exists():
+                raise FileNotFoundError(f"restart_file not found: {rf}")
+            restart_file = rf
+
+        self.loss_history, self.x_history, self.x_avg_history, self.times, self.observable_traj = adam_optimizer(
             self.loss,
             self.init_x,
             alpha=alpha,
             max_iterations=max_iterations,
             threshold=threshold,
             max_n_convergence=max_n_convergence,
             tolerance=tolerance,
             beta1=beta1,
             beta2=beta2,
             epsilon=epsilon,
             restart=restart,
             restart_file=restart_file,
         )

181-181: Defensive guard for empty x_avg_history when reconstructing optimal_model.

Avoid IndexError when optimization exits before averaging; raise a clear error.

-        self.optimal_model = self.loss.x_to_noise_model(self.x_avg_history[-1])
+        if not self.x_avg_history:
+            raise RuntimeError("Optimization produced no averaged parameters; cannot reconstruct optimal model.")
+        self.optimal_model = self.loss.x_to_noise_model(self.x_avg_history[-1])
docs/examples/characterizer_example.md (1)

141-141: Add trailing newline at end of file.

Ensure the markdown ends with a single newline after the closing backticks.

-```
+```
+
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b66ad6f and e58c188.

📒 Files selected for processing (2)
  • docs/examples/characterizer_example.md (1 hunks)
  • src/mqt/yaqs/noise_char/characterizer.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/mqt/yaqs/noise_char/characterizer.py (3)
src/mqt/yaqs/noise_char/optimization.py (3)
  • adam_optimizer (415-588)
  • LossClass (84-412)
  • x_to_noise_model (329-343)
src/mqt/yaqs/core/data_structures/noise_model.py (1)
  • CompactNoiseModel (170-253)
src/mqt/yaqs/noise_char/propagation.py (1)
  • PropagatorWithGradients (44-347)

Comment on lines +62 to +72
loss_history: list[float] | None = None

x_history: list[np.ndarray] | None = None

x_avg_history: list[np.ndarray] | None = None

times: np.ndarray | None = None

observable_traj: np.ndarray | None = None

optimal_model: CompactNoiseModel | None = None

🧹 Nitpick | 🔵 Trivial

Initialize optional fields in __init__ for predictable state.

Set histories/outputs to known defaults in __init__ to simplify consumers and type-checkers.

         self.loss = loss
 
         self.init_x = self.init_guess.strength_list.copy()
+
+        # Initialize optimization outputs
+        self.loss_history = []
+        self.x_history = []
+        self.x_avg_history = []
+        self.times = None
+        self.observable_traj = None
+        self.optimal_model = None

Also applies to: 96-105

🤖 Prompt for AI Agents
In src/mqt/yaqs/noise_char/characterizer.py around lines 62-72 (and similarly
lines 96-105), the class-level optional fields are declared but not consistently
initialized; move their initialization into __init__ and assign predictable
defaults: set loss_history = [] (use list() to avoid shared mutable defaults),
x_history = [], x_avg_history = [], times = None, observable_traj = None, and
optimal_model = None. Ensure these attributes are created on self in __init__
with the same types as the annotations so consumers and type-checkers see
concrete, instance-specific defaults.

Comment on lines +120 to +165
"""Run an ADAM-based optimization to fit the noise model parameters.
This method runs the external `adam_optimizer` routine to minimize the
callable `self.loss` starting from `self.init_x`. On completion (or
early termination), several instance attributes are updated with the
optimizer history and the recovered optimal noise model.
Parameters.
----------
alpha (float, optional): Learning rate for Adam optimizer. Default is 0.05.
max_iterations (int, optional): Maximum number of optimization iterations. Default is 1000.
threshold (float, optional): Threshold for parameter convergence check. Default is 5e-4.
max_n_convergence (int, optional): Number of consecutive iterations to check for convergence. Default is 50.
tolerance (float, optional): Absolute loss tolerance for early stopping. Default is 1e-8.
beta1 (float, optional): Exponential decay rate for the first moment estimates. Default is 0.5.
beta2 (float, optional): Exponential decay rate for the second moment estimates. Default is 0.999.
epsilon (float, optional): Small constant for numerical stability. Default is 1e-8.
restart (bool, optional): Whether to restart optimization from a checkpoint. Default is False.
restart_file (str, optional): Path to a specific checkpoint file to restart from.
If None, the latest checkpoint in the working directory is used.
Notes:
-----
- The method delegates the numerical optimization to the external
`adam_optimizer` function and forwards the provided hyperparameters to it.
- Convergence and restart semantics depend on the `adam_optimizer` implementation.
- After optimization, the final noise model is constructed by calling
`self.loss.x_to_noise_model` with the final averaged parameters.
Raises:
------
ValueError
If one or more numeric hyperparameters are out of valid ranges (e.g.,
non-positive learning rate or non-positive max_iterations).
FileNotFoundError
If `restart` is True and `restart_file` is provided but does not exist.
RuntimeError
If the underlying optimizer reports a failure to converge when such
failures are surfaced as exceptions.
Examples:
--------
>>> # run with default hyperparameters
>>> obj.adam_optimize()
>>> # run with a smaller learning rate and more iterations
>>> obj.adam_optimize(alpha=0.01, max_iterations=1000)
"""

⚠️ Potential issue | 🟡 Minor

Docstring accuracy: defaults and types.

  • max_iterations default is 100 (not 1000).
  • restart_file type should be Path | None (not str).
  • Keep “FileNotFoundError” since we’ll raise it preflight below.
-        max_iterations (int, optional): Maximum number of optimization iterations. Default is 1000.
+        max_iterations (int, optional): Maximum number of optimization iterations. Default is 100.
@@
-        restart_file (str, optional): Path to a specific checkpoint file to restart from.
+        restart_file (Path | None, optional): Path to a specific checkpoint file to restart from.
🤖 Prompt for AI Agents
In src/mqt/yaqs/noise_char/characterizer.py around lines 120 to 165, the
docstring's default/type info is incorrect: change the documented default for
max_iterations from 1000 to 100, and change the restart_file parameter type from
"str or None" to "Path | None" (or pathlib.Path | None depending on project
style); keep the FileNotFoundError entry as-is. Update the parameter list and
the Raises section accordingly so the docstring and signatures/types match the
actual code.
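
For context, a minimal standalone Adam update (plain NumPy; this is a sketch, not the PR's `adam_optimizer`) showing where the documented hyperparameters enter:

```python
import numpy as np


def adam_step(x, grad, m, v, t, alpha=0.05, beta1=0.5, beta2=0.999, epsilon=1e-8):
    """One bias-corrected Adam update; returns new parameters and moment estimates."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad**2        # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1**t)                   # bias corrections
    v_hat = v / (1 - beta2**t)
    return x - alpha * m_hat / (np.sqrt(v_hat) + epsilon), m, v


x = np.array([0.4, 0.4])                         # e.g. initial noise-strength guesses
m, v = np.zeros_like(x), np.zeros_like(x)
for t in range(1, 6):
    grad = 2.0 * (x - np.array([0.1, 0.1]))      # toy quadratic loss gradient
    x, m, v = adam_step(x, grad, m, v, t)
    np.clip(x, 0.0, 1.0, out=x)                  # mirrors the [0, 1] clipping discussed above
print(x)
```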

@aaronleesander self-requested a review on October 29, 2025 10:42
@aaronleesander added the feature (New feature or request) and major (Major version update) labels on Oct 29, 2025
@aaronleesander changed the title from "Characterizer" to "added noise characterization module" on Oct 29, 2025
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
src/mqt/yaqs/core/libraries/gate_library.py (1)

1758-1763: Add a short note explaining these aliases.

Matches physics conventions (“raising/creation”, “lowering/annihilation”, Pauli synonyms). A one‑liner in the GateLibrary docstring prevents confusion and helps discovery.

Example insertion under the GateLibrary docstring:

+    Notes:
+        Aliases are provided for naming convenience and literature compatibility:
+        raising/create → Create, lowering/destroy → Destroy, pauli_x/x → X,
+        pauli_y/y → Y, pauli_z/z → Z. Aliases and canonical names are equivalent.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e58c188 and 3de2c7d.

📒 Files selected for processing (1)
  • src/mqt/yaqs/core/libraries/gate_library.py (3 hunks)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: 🐍 Test (macos-14) / 🐍 macos-14
  • GitHub Check: 🐍 Test (ubuntu-24.04-arm) / 🐍 ubuntu-24.04-arm
  • GitHub Check: 🐍 Test (ubuntu-24.04) / 🐍 ubuntu-24.04
  • GitHub Check: 🐍 Test (macos-15-intel) / 🐍 macos-15-intel
  • GitHub Check: 🐍 Test (windows-2022) / 🐍 windows-2022
🔇 Additional comments (1)
src/mqt/yaqs/core/libraries/gate_library.py (1)

354-362: Factory method addition looks good.

Return typing resolves via postponed annotations; consistent with other factories.

Comment on lines +775 to +795
class Zero(BaseGate):
    """Class representing the Zero gate.

    Attributes:
        name: The name of the gate ("zero").
        matrix: The 2x2 zero matrix.
        interaction: The interaction level (1 for single-qubit gates).
        tensor: The tensor representation of the gate (same as the matrix).

    Methods:
        set_sites(*sites: int) -> None:
            Sets the site(s) where the gate is applied.
    """

    name = "zero"

    def __init__(self) -> None:
        """Initializes the Zero gate."""
        mat = np.array([[0, 0], [0, 0]])
        super().__init__(mat)


⚠️ Potential issue | 🟡 Minor

Use complex dtype for Zero to avoid implicit casts.

Zero currently builds an int array; downstream ops expect complex. Make it explicit.

Apply this diff:

 class Zero(BaseGate):
@@
     def __init__(self) -> None:
         """Initializes the Zero gate."""
-        mat = np.array([[0, 0], [0, 0]])
+        mat = np.zeros((2, 2), dtype=np.complex128)
         super().__init__(mat)
🤖 Prompt for AI Agents
In src/mqt/yaqs/core/libraries/gate_library.py around lines 775 to 795, the Zero
gate builds an integer numpy array which causes implicit casts later; change the
matrix creation to explicitly use a complex dtype (e.g. dtype=np.complex128)
when constructing the 2x2 zero array so all downstream operations see a
complex-valued matrix.
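
A small standalone demonstration (plain NumPy) of the implicit-cast concern:

```python
import numpy as np

int_zero = np.array([[0, 0], [0, 0]])                 # integer dtype (int64 on most platforms)
complex_zero = np.zeros((2, 2), dtype=np.complex128)  # explicit complex dtype

print(int_zero.dtype, complex_zero.dtype)

# Mixing the int-typed matrix into complex arithmetic forces an upcast on every
# operation, and an in-place complex update is rejected outright:
try:
    int_zero += 1j * np.eye(2)
except TypeError:
    print("in-place complex update rejected for the int-typed matrix")
```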

schmidt_spectrum = SchmidtSpectrum

custom = BaseGate
zero = Zero

🧹 Nitpick | 🔵 Trivial

Document zero in GateLibrary attributes.

Expose it in the class docstring to complete the public API surface.

Suggested docstring tweak (apply near the GateLibrary “Attributes” block):

         id: Class for the identity gate.
+        zero: Class for the zero operator (2×2 all‑zero matrix).

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/mqt/yaqs/core/libraries/gate_library.py around line 1756, the class
GateLibrary exposes an attribute named `zero = Zero` but it isn't documented;
update the GateLibrary class docstring (within the "Attributes" section) to add
a short entry for `zero` that states its name, type (Zero), and a concise
description of its purpose/semantics in the API (e.g., represents the zero
qubit/gate placeholder or default zero-state/gate depending on library
semantics), so the attribute is part of the public API documentation.

@denialhaag (Member)

FYI: The "major" label would lead to the next release being v1.0.0. Likely, this should at most be part of a minor release. Please correct me here, @aaronleesander.

@aaronleesander (Member)

> FYI: The "major" label would lead to the next release being v1.0.0. Likely, this should at most be part of a minor release. Please correct me here, @aaronleesander.

Yo @denialhaag, so I was using it to signify the smaller "major" releases like 0.3 --> 0.4 right now. Each of those aligns more or less with a paper-level feature (so each 0.1 to 0.2 corresponds to new research), and this one adds a new module which helps with new projects.

Do you have any opinions on what we should signify as major or minor then? This would be good to know for the future so we can stay consistent especially as more people contribute.

@burgholzer (Member)

> FYI: The "major" label would lead to the next release being v1.0.0. Likely, this should at most be part of a minor release. Please correct me here, @aaronleesander.
>
> Yo @denialhaag, so I was using it to signify the smaller "major" releases like 0.3 --> 0.4 right now. Each of those aligns more or less with a paper-level feature (so each 0.1 to 0.2 corresponds to new research), and this one adds a new module which helps with new projects.
>
> Do you have any opinions on what we should signify as major or minor then? This would be good to know for the future so we can stay consistent especially as more people contribute.

0.3 -> 0.4 is a "minor" version bump according to semantic versioning, so the PR should be tagged as such.
Generally, we hardly ever use "major", except for really big (and breaking) releases/changes.
Anything that does not deserve a minor bump should simply not have a tag at all.
Multiple PRs of a release can have a minor label, and this will still only bump the version by 1.

Labels

feature (New feature or request), major (Major version update)
