Gradient-Free Noise Characterization #288
Conversation
force the parameters to be positives
force the parameters to be positive
…se_Characterization
…qt-yaqs into Noise_Characterization
d_On_d_gk has shape (n_jump_site, n_obs_site, L, n_t) loss_function changed accordingly
…ent_free_optimization
…ent_free_optimization
📝 Walkthrough
A new noise-characterization subsystem has been introduced alongside supporting core updates.
Sequence Diagram(s)
sequenceDiagram
participant User
participant Characterizer
participant Propagator
participant LossClass
participant Optimizer
participant GateLibrary
User->>Characterizer: initialize(propagator, init_guess, loss)
activate Characterizer
User->>Characterizer: adam_optimize(max_iter=N)
activate Characterizer
Characterizer->>LossClass: init_x (flat parameter vector)
LossClass->>LossClass: store reference trajectory
loop Iteration 1 to N
Characterizer->>Optimizer: call adam_opt(loss, x, ...)
activate Optimizer
Optimizer->>LossClass: __call__(x_candidate)
activate LossClass
LossClass->>LossClass: x_to_noise_model(x)
LossClass->>Propagator: run(noise_model)
activate Propagator
Propagator->>GateLibrary: lookup gates for noise processes
Propagator->>Propagator: execute MPS Lindblad propagation
Propagator-->>LossClass: obs_traj, times, sim_time
deactivate Propagator
LossClass->>LossClass: compute loss vs reference
LossClass->>LossClass: optional numeric gradient
LossClass-->>Optimizer: (loss, grad, sim_time)
deactivate LossClass
Optimizer->>Optimizer: ADAM update (m, v, momentum)
Optimizer->>Optimizer: optional bounds clipping
Optimizer->>LossClass: track history
deactivate Optimizer
end
Characterizer->>LossClass: final x
LossClass->>LossClass: x_to_noise_model(x_final)
LossClass-->>Characterizer: optimal_model
Characterizer-->>User: optimization complete
deactivate Characterizer
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 42
🤖 Fix all issues with AI agents
In @docs/examples/characterizer_example.md:
- Around line 138-140: Add a short clarifying note before the example's
optimization block stating that adam_optimize is a gradient-based method and
provide the gradient-free alternatives to use (cma_optimize, mcmc_optimize,
bayesian_optimize) with the same max_iterations argument; reference the existing
call to characterizer.adam_optimize(max_iterations=50) and suggest replacing or
showing alternatives like characterizer.cma_optimize(max_iterations=50),
characterizer.mcmc_optimize(max_iterations=50), or
characterizer.bayesian_optimize(max_iterations=50) so readers understand the
distinction.
- Around line 33-36: The code mixes a string path variable work_dir with a Path
object work_dir_path; update all subsequent uses of the string (notably the
usage around the original Line 86) to use work_dir_path for consistency and
cross-platform correctness, and where an API requires a string pass
work_dir_path.as_posix() or str(work_dir_path); update any file/directory
operations, joins, or reads/writes to reference work_dir_path (or its string
form when necessary).
In @src/mqt/yaqs/core/data_structures/noise_model.py:
- Around line 236-249: The loop over self.compact_processes should validate
required keys before touching GateLibrary or instantiating gates: first assert
"name", "sites", and "strength" are present on proc, then check GateLibrary
membership for proc["name"], then retrieve the gate class once (e.g., cls =
getattr(GateLibrary, name)) and assert cls().interaction == 1 before expanding
sites into self.expanded_processes and appending to self.index_list; reorder the
checks in the loop to perform key presence checks first to fail fast and avoid
creating gate instances when keys are missing.
In @src/mqt/yaqs/noise_char/__init__.py:
- Line 8: The module docstring is too narrow—update the top-level docstring in
src/mqt/yaqs/noise_char/__init__.py to describe the full noise characterization
subsystem (not just "optimization routines"): mention that the module exports
propagators, loss evaluators, the high-level Characterizer orchestrator, and
multiple optimization strategies (gradient-based and gradient-free), and briefly
state their roles so readers know the available APIs and responsibilities.
In @src/mqt/yaqs/noise_char/characterizer.py:
- Around line 77-108: The constructor docstring refers to
"PropagatorWithGradients" but the parameter type hint is Propagator; update the
docstring for __init__ to describe the propagator parameter as Propagator (or
match whatever concrete type you intend) so the text and the type hint for the
propagator parameter are consistent; modify the docstring line that starts
"propagator : PropagatorWithGradients" to read "propagator : Propagator" and
keep the rest of the description about producing trajectories and gradients
intact.
- Around line 189-210: The code assumes adam_opt returned a non-empty
x_avg_history and directly does self.optimal_model =
self.loss.x_to_noise_model(self.x_avg_history[-1]); add a defensive check after
the adam_opt call to handle an empty self.x_avg_history: if it's empty, either
set self.optimal_model to a safe default (e.g. transform self.init_x via
self.loss.x_to_noise_model or set None) or raise a clear RuntimeError with
context; ensure this uses the same symbols (self.x_avg_history, adam_opt,
self.optimal_model, self.loss.x_to_noise_model, self.init_x) so callers get a
deterministic outcome instead of an IndexError.
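A minimal sketch of such a guard, assuming the attribute names used in the prompt above; the fallback choice is illustrative:

    # Hypothetical guard after the adam_opt call: fall back to the initial
    # guess rather than raising IndexError on an empty history.
    if self.x_avg_history:
        self.optimal_model = self.loss.x_to_noise_model(self.x_avg_history[-1])
    else:
        self.optimal_model = self.loss.x_to_noise_model(self.init_x)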
In @src/mqt/yaqs/noise_char/loss.py:
- Line 46: The class-level variable n_avg = 20 is shadowed by self.n_avg = 100
in __init__, causing confusion; either remove the class attribute or explain the
rationale. Fix by deleting the class attribute n_avg = 20 from the class
definition if the instance default is always 100, or add a clear comment above
the class-level n_avg explaining its purpose and why __init__ overrides it
(referencing the class variable n_avg and the constructor assignment self.n_avg
in __init__).
- Line 104: The call to set_observable_list currently passes the original
parameter ref_traj instead of the deep-copied attribute, causing potential
external mutation inconsistencies; update the call to use the deep-copied
trajectory stored on the instance (self.ref_traj) so that
self.propagator.set_observable_list(self.ref_traj) uses the internal copy
consistently (look for the set_observable_list invocation in the same scope as
where self.ref_traj is assigned).
- Around line 222-250: The set_history method can leave the object in an
inconsistent state because it does not validate that x_history, f_history,
x_avg_history, and diff_avg_history have the same length; add a length-check at
the start of set_history (in the set_history method) that computes expected_len
= len(x_history) and asserts or raises a ValueError if any of len(f_history),
len(x_avg_history), or len(diff_avg_history) differ, and only then set
self.n_eval and populate self.x_history, self.f_history, self.x_avg_history, and
self.diff_avg_history (keep existing copies/conversions). Ensure the error
message names the mismatched input(s) for easier debugging.
- Around line 130-144: compute_avg currently slices x_history as
self.x_history[self.n_avg :] when len(self.x_history) > self.n_avg, which takes
items from index n_avg to end instead of the last n_avg items; change the slice
to use the tail of the list (e.g., self.x_history[-self.n_avg:]) so that when
len(self.x_history) > self.n_avg you compute the mean over the final n_avg
entries, keeping the existing behavior of using the entire history when len <=
n_avg and appending the resulting x_avg.copy() to x_avg_history (see the slicing sketch below, after this file's prompts).
- Around line 349-370: The __call__ method's docstring lists a fourth return
value (avg_min_max_traj_time) that doesn't exist in the declared return type
tuple[float, np.ndarray, float]; update the Returns section of the docstring to
describe only the three actual returned values (f: float, grad: np.ndarray,
sim_time: float) and remove the incorrect fourth item so the docstring matches
the method signature and actual behavior.
- Around line 208-220: The reset method currently clears most optimizer history
but omits resetting self.grad_history and self.converged; update the reset
method (def reset) to also set self.grad_history = [] and restore self.converged
to its initial state (False) so no stale gradient history or convergence flag
remains when the optimizer is reused.
- Line 383: Replace the inefficient deep copy of a NumPy array by using the
array's native copy: change the assignment that uses
copy.deepcopy(self.propagator.obs_array) to use self.propagator.obs_array.copy()
so self.obs_array = self.propagator.obs_array.copy(); also remove the unused
import of copy if nothing else in the module uses it.
- Line 414: Replace the inefficient zero-array construction "grad = np.array([0]
* self.d)" with a direct zeros call: use np.zeros(self.d) (optionally specifying
dtype if other code expects a specific numeric type) to create the gradient
buffer; update the line that assigns to grad in the surrounding function/method
where self.d is defined so it uses np.zeros for performance and clarity.
- Around line 393-408: The numeric gradient currently uses forward differences
giving O(epsilon) error; change the loop in the return_numeric_gradients branch
to compute central differences by evaluating loss at x_plus and x_minus for each
index: create x_minus via x.copy() subtracting self.epsilon, call
self.x_to_noise_model(x_minus) and self.propagator.run(...) to get loss_minus
from obs_array (like you do for loss_plus), then set grad[i] = (loss_plus -
loss_minus) / (2 * self.epsilon); keep existing use of self.x_to_noise_model,
self.propagator.run, and self.propagator.obs_array, and ensure you deep-copy obs
arrays as done now (a central-difference sketch follows below, after this file's prompts).
- Around line 307-331: The docstring for write_traj is misleading: replace
references to exp_vals_traj, Propagator-specific filename patterns, work_dir,
n_eval, and x0/y0/z labels with a concise description that matches the
implementation: write_traj(self, obs_array: np.ndarray, output_file: Path)
expects obs_array shaped (n_obs, n_t), prepends self.t as the first row, builds
a header "t obs_0 obs_1 ..." from n_obs, and writes the transposed array to
the provided output_file using np.savetxt with fmt="%.6f"; mention the
parameters obs_array and output_file and the returned None behavior.
- Around line 28-40: The function name lineal_function_1000 is misspelled and
misleading because it returns a constant; rename it to constant_function_1000
(or linear_function_1000 if you change implementation to be linear) and update
all references, including the default parameter in the class __init__ that
currently points to lineal_function_1000; ensure the function signature remains
(i: int) -> int and behavior unchanged if choosing constant_function_1000, and
run tests to catch any missed references.
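For the compute_avg prompt above, a standalone illustration of the two slices on a plain Python list:

    history = [1, 2, 3, 4, 5]
    n_avg = 2
    print(history[n_avg:])   # [3, 4, 5] -- drops the first n_avg items (current behavior)
    print(history[-n_avg:])  # [4, 5]    -- keeps the last n_avg items (intended behavior)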
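For the numeric-gradient prompt above, a sketch of the central-difference loop as a method body; _loss_at is a hypothetical shorthand for the existing pattern of calling self.x_to_noise_model, running self.propagator, and comparing self.propagator.obs_array to the reference (numpy assumed imported as np):

    grad = np.zeros(self.d)
    for i in range(self.d):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += self.epsilon
        x_minus[i] -= self.epsilon
        loss_plus = self._loss_at(x_plus)    # hypothetical helper, see above
        loss_minus = self._loss_at(x_minus)
        # Central differences carry O(epsilon**2) truncation error
        # instead of the O(epsilon) error of forward differences.
        grad[i] = (loss_plus - loss_minus) / (2 * self.epsilon)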
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py:
- Line 18: The trailing "# noqa: S403" on the import statement is unnecessary;
edit the line that imports pickle (the "import pickle" statement) to remove the
" # noqa: S403" comment so it reads simply "import pickle". Ensure no other uses
of that specific noqa token exist in the same module.
- Around line 163-165: Current clipping only runs when both x_low and x_up are
set, preventing one-sided bounds; update the clipping in the Adam update (the
block using variables x, x_low, x_up in
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py) to apply
each bound independently: if x_low is not None enforce the lower bound on x, and
if x_up is not None enforce the upper bound on x (use np.maximum for lower,
np.minimum for upper or equivalent) so partial bounds are respected (see the sketch below, after this file's prompts).
- Around line 84-86: The docstring wrongly claims "All parameter values are
clipped to the [0, 1] range"; update the docstring for the Adam optimizer
(symbol "Adam" in gradient_based/adam.py) to state that parameter clipping uses
the provided bounds x_low and x_up (i.e., parameters are clipped to [x_low,
x_up] when those bounds are supplied; if no bounds are provided it defaults to
the original behavior), and keep the rest of the docstring details
(checkpointing to f.work_dir and logging to 'performance_metric_sec.txt')
intact.
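For the one-sided bounds prompt above, a minimal sketch of the independent clipping (numpy assumed imported as np):

    # Apply each bound separately so partial (one-sided) bounds are honored.
    if x_low is not None:
        x = np.maximum(x, x_low)
    if x_up is not None:
        x = np.minimum(x, x_up)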
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/__init__.py:
- Around line 1-8: The package __init__.py lacks an explicit public API and
doesn't re-export the optimization submodules; update the module docstring to
mention available algorithms and add an __all__ list (e.g., "__all__ =
['bayesian', 'cma', 'mcmc']") and, if desired for convenience, add explicit
relative imports (from . import bayesian, cma, mcmc) so users can access those
modules via the package namespace; ensure names in __all__ match the imported
symbols.
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py:
- Around line 86-122: The function bayesian_opt uses x_low and x_up without
validating they are provided: add explicit input validation at the start of
bayesian_opt to raise a clear error if either x_low or x_up is None (or
coerce/derive bounds if intended), and ensure bounds is constructed only after
validation; also update the docstring to match the actual default acq_name="UCB"
(or change the function default to "EI" if that is the intended behavior) so the
docstring and function signature are consistent (refer to symbols bayesian_opt,
x_low, x_up, acq_name, and the bounds construction); a guard sketch follows below, after this file's prompts.
- Around line 39-81: The docstring for get_acquisition_function incorrectly
lists "EI" as supported while the function only handles "LEI", "PI", and "UCB";
fix by either removing "EI"/Expected Improvement from the docstring or add an
"EI" branch that returns the ExpectedImprovement acquisition (e.g., return
ExpectedImprovement(model=model, best_f=best_f, maximize=True)) alongside the
existing LogExpectedImprovement, ProbabilityOfImprovement, and
UpperConfidenceBound handlers so the docs and implementation stay consistent.
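For the bounds-validation prompt above, a sketch of the guard at the top of bayesian_opt; the error message is illustrative:

    if x_low is None or x_up is None:
        msg = "bayesian_opt requires both x_low and x_up bounds"
        raise ValueError(msg)
    # Only construct `bounds` from x_low and x_up after this check.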
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/cma.py:
- Line 21: Remove the leftover interactive cell marker by deleting the
standalone line containing "# %%" from the top of the cma.py module so the file
contains only production code and no notebook markers.
- Around line 95-103: The loop contains duplicated and unsafe convergence
checks: remove the redundant direct access to f.converged and keep a single
protected check; specifically, delete the final unconditional "if f.converged:
break" and rely on the earlier guarded check "if hasattr(f, 'converged') and
getattr(f, 'converged', False): break" alongside the existing es.stop() check so
the code no longer risks AttributeError when f lacks a converged attribute.
In @src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py:
- Around line 103-108: The acceptance probability computation can overflow when
temperature becomes extremely small; in the MCMC loop compute a floored
temperature (e.g., temperature = max(temperature, min_temperature)) before
computing acceptance_prob = np.exp(-delta / temperature) and/or clamp
acceptance_prob to [0,1]; add a new constant or parameter like min_temperature
(e.g., 1e-8 or 1e-6) and use it wherever temperature is used for the
Metropolis-Hastings acceptance in this file (variables/identifiers: temperature,
delta, acceptance_prob, rng.random(), x, fx, f_new) so annealing (via
anneal_rate) cannot drive temperature to zero and cause overflow.
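A sketch of the floored Metropolis step using the identifiers named above; MIN_TEMPERATURE and x_new are illustrative names, and numpy is assumed imported as np:

    MIN_TEMPERATURE = 1e-8  # floor so annealing cannot drive temperature to zero

    temperature = max(temperature, MIN_TEMPERATURE)
    # With delta > 0 the exponent is negative, so np.exp stays in (0, 1]
    # and cannot overflow; improvements (delta <= 0) are always accepted.
    if delta <= 0 or rng.random() < np.exp(-delta / temperature):
        x, fx = x_new, f_new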
In @src/mqt/yaqs/noise_char/propagation.py:
- Around line 308-315: The current validation loops over
noise_model.compact_processes and indexes into
self.compact_noise_model.compact_processes by i, which can raise IndexError or
fail to detect extra processes; first check that
len(noise_model.compact_processes) ==
len(self.compact_noise_model.compact_processes) and raise a clear ValueError if
they differ, then either iterate with zip(noise_model.compact_processes,
self.compact_noise_model.compact_processes) or keep the existing enumerate but
only index into the other sequence after confirming equal length; keep the
existing site/name comparison (proc["name"], proc["sites"]) and raise the same
ValueError message on mismatch (see the zip-based sketch below, after this file's prompts).
- Around line 221-233: The code uses max(all_obs_sites) which will raise if
obs_list is empty; update propagation.py around the obs_list handling (variables
obs_list, all_obs_sites, self.sites, self.n_obs, set_observables) to explicitly
handle the empty case: compute all_obs_sites, then if not all_obs_sites skip the
max check (or raise a clear ValueError with a descriptive message), otherwise
perform if max(all_obs_sites) >= self.sites and raise the existing ValueError;
ensure self.n_obs is set (0 for empty) and set_observables is still assigned
consistently.
- Around line 38-42: The loop over noise_model.processes uses
getattr(GateLibrary, proc["name"]) which will raise a raw AttributeError for
unknown names; wrap this in a try/except AttributeError around the getattr call
inside the loop and raise a clearer exception (e.g., ValueError) that includes
the bad proc["name"] (and optionally the process index or list of valid names
via dir(GateLibrary)) so callers see an actionable message; keep adding
Observable(gate(), site) for valid gates to noise_list as before.
- Around line 193-197: The max() call over the generator for
expanded_noise_model.processes can raise ValueError when the sequence is empty;
update the check around self.sites to handle an empty processes list by either
using max((proc["sites"][0] for proc in self.expanded_noise_model.processes),
default=-1), noting that the generator expression must be parenthesized when the
default keyword is passed, or first guarding with an if not
self.expanded_noise_model.processes: ...; then perform the comparison (using the
default -1 will skip the raise) and keep the existing ValueError with the same
message when an out-of-range site index is detected (see the sketch below, after
this file's prompts).
- Line 8: Fix the module docstring typo: change "trahectories" to "trajectories"
in the top-level docstring that currently reads 'Performs the simulation of the
Ising model and returns expectations values and A_kn trahectories.' so it
correctly reads '...A_kn trajectories.'.
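For the compact-process validation prompt above, a sketch of the length check followed by a zip-based comparison; the error messages are illustrative:

    expected = self.compact_noise_model.compact_processes
    if len(noise_model.compact_processes) != len(expected):
        msg = "noise model has a different number of compact processes than expected"
        raise ValueError(msg)
    for proc, ref in zip(noise_model.compact_processes, expected):
        if proc["name"] != ref["name"] or proc["sites"] != ref["sites"]:
            msg = "noise model processes do not match the propagator's compact noise model"
            raise ValueError(msg)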
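For the empty-processes prompt above, a sketch of the guarded check; note the parentheses around the generator expression, which are required when the default keyword is passed:

    max_site = max((proc["sites"][0] for proc in self.expanded_noise_model.processes), default=-1)
    if max_site >= self.sites:
        msg = "noise process site index out of range"  # keep the existing message here
        raise ValueError(msg)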
In @tests/core/data_structures/test_noise_model.py:
- Around line 97-127: The test function test_compact_noise_model_assertion and
its docstring claim to validate CompactNoiseModel but the assertions instantiate
NoiseModel; update the test to be consistent by replacing NoiseModel with
CompactNoiseModel in the three pytest.raises blocks (or alternatively rename the
function/docstring to reference NoiseModel if that was intended) so the test
actually validates CompactNoiseModel's missing-field assertions; refer to the
function name test_compact_noise_model_assertion and the classes NoiseModel and
CompactNoiseModel when making the change.
In @tests/noise_char/test_bayesian.py:
- Around line 190-225: The test_bayesian_opt_convergence test currently only
checks result types and doesn't assert that early stopping occurred; update the
assertions after calling bayesian_opt to verify convergence/early termination by
asserting loss.n_eval < (n_init + max_iter) (or assert loss.n_eval == 3 given
the test sets converged after the 3rd eval), and optionally assert that the
returned best_x is one of the evaluated points in _x_train and best_y matches a
value in _y_train to strengthen correctness checks.
In @tests/noise_char/test_characterizer.py:
- Around line 153-157: The docstring example calls create_instances(tmp_path,
test) but the actual function signature is create_instances(test, tmp_path);
update the example invocation to create_instances(test, tmp_path) (and any
accompanying comment text) so the example argument order matches the
create_instances function.
- Around line 132-135: The Returns docstring is missing the seventh element
`loss` that the function actually returns (the type annotation indicates a tuple
including LossClass); update the Returns section to describe the full 7-tuple by
adding an entry for `loss` (its type, e.g., LossClass or Loss) and a short
description of what it represents alongside the existing entries for MPO, MPS,
Observable list, AnalogSimParams, CompactNoiseModel, and Propagator so the
docstring matches the annotated return type.
In @tests/noise_char/test_mcmc.py:
- Around line 82-91: The helper function make_loss is duplicated across tests;
extract it into a shared test utility (e.g., tests/conftest.py or
tests/utils.py) and replace local copies with an import or fixture. Move the def
make_loss(obj: object) -> LossClass implementation into the shared module,
export it, then update the tests (both test_mcmc.py and test_cma.py) to import
make_loss from that module (or register it as a pytest fixture) and remove the
duplicated definitions in each test file.
In @tests/noise_char/test_optimization.py:
- Around line 445-455: The manual cleanup loop using
loss_function.work_dir.glob("opt_traj*.txt"), ("restart*.pkl"), and
("performance*.txt") is redundant with pytest's tmp_path fixture; remove these
three cleanup blocks and ensure the test uses a tmp_path (or tmp_path_factory)
for loss_function.work_dir so pytest auto-cleans between tests, or if mid-test
removal is actually required keep only the specific unlink calls and add a short
comment explaining why manual cleanup is necessary.
- Line 13: Remove the unused inline noqa by editing the import of pickle in
tests/noise_char/test_optimization.py: change the line importing pickle (the
symbol "pickle") to not include the trailing "# noqa: S403" since the S403 rule
isn't enabled and the directive is unnecessary; simply keep the import statement
without the noqa comment.
In @tests/noise_char/test_propagation.py:
- Around line 24-74: The Parameters class is duplicated across tests; move it to
a shared test fixture by extracting the class (Parameters) into
tests/conftest.py (or a test utilities module) and import or provide it as a
fixture in both test_propagation.py and test_optimization.py; update the tests
to either import Parameters from the shared module or declare a fixture named
parameters that returns a Parameters() instance so both test files use the
single shared definition and eliminate duplication.
- Around line 159-179: Rename the test function and update its docstring to
reflect the current API: change the test function name
test_propagatorwithgradients_runs to something like test_propagator_runs and
update the docstring references from propagation.tjm_traj to
propagation.Propagator (or the Propagator.run behavior) so the name and
description match the code that uses create_propagator_instance and
propagator.run; ensure any mentions of gradients or tjm_traj are removed or
revised to describe propagator.times and propagator.obs_array assertions.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (24)
- docs/examples/characterizer_example.md
- pyproject.toml
- src/mqt/yaqs/core/data_structures/noise_model.py
- src/mqt/yaqs/core/libraries/gate_library.py
- src/mqt/yaqs/noise_char/__init__.py
- src/mqt/yaqs/noise_char/characterizer.py
- src/mqt/yaqs/noise_char/loss.py
- src/mqt/yaqs/noise_char/optimization_algorithms/__init__.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/__init__.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/__init__.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/cma.py
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py
- src/mqt/yaqs/noise_char/propagation.py
- tests/core/data_structures/test_noise_model.py
- tests/core/libraries/test_gate_library.py
- tests/noise_char/__init__.py
- tests/noise_char/test_bayesian.py
- tests/noise_char/test_characterizer.py
- tests/noise_char/test_cma.py
- tests/noise_char/test_mcmc.py
- tests/noise_char/test_optimization.py
- tests/noise_char/test_propagation.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-04T14:26:25.420Z
Learnt from: marcelwa
Repo: munich-quantum-toolkit/core PR: 1243
File: test/python/qdmi/qiskit/conftest.py:11-19
Timestamp: 2025-11-04T14:26:25.420Z
Learning: In the munich-quantum-toolkit/core repository, Qiskit is always available as a dependency during testing, so import guards for qiskit-dependent imports in test files (e.g., test/python/qdmi/qiskit/*.py) are not necessary.
Applied to files:
pyproject.toml
🧬 Code graph analysis (7)
src/mqt/yaqs/noise_char/characterizer.py (3)
  src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py (1)
    mcmc_opt (25-128)
  src/mqt/yaqs/noise_char/loss.py (2)
    LossClass (43-418)
    x_to_noise_model (333-347)
  src/mqt/yaqs/noise_char/propagation.py (1)
    Propagator (45-335)
tests/noise_char/test_cma.py (2)
  src/mqt/yaqs/noise_char/loss.py (1)
    LossClass (43-418)
  tests/noise_char/test_mcmc.py (1)
    make_loss (82-91)
tests/noise_char/test_bayesian.py (1)
  src/mqt/yaqs/noise_char/loss.py (1)
    LossClass (43-418)
tests/noise_char/test_propagation.py (3)
  src/mqt/yaqs/core/data_structures/networks.py (3)
    MPO (956-1377)
    MPS (36-953)
    init_ising (989-1042)
  src/mqt/yaqs/core/data_structures/simulation_parameters.py (2)
    AnalogSimParams (132-277)
    Observable (41-129)
  src/mqt/yaqs/noise_char/propagation.py (3)
    Propagator (45-335)
    set_observable_list (199-233)
    run (261-335)
src/mqt/yaqs/noise_char/loss.py (2)
  src/mqt/yaqs/core/data_structures/simulation_parameters.py (1)
    Observable (41-129)
  src/mqt/yaqs/noise_char/propagation.py (4)
    Propagator (45-335)
    set_observable_list (199-233)
    write_traj (235-259)
    run (261-335)
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py (1)
  src/mqt/yaqs/noise_char/loss.py (1)
    LossClass (43-418)
tests/noise_char/test_mcmc.py (3)
  src/mqt/yaqs/noise_char/loss.py (1)
    LossClass (43-418)
  tests/noise_char/test_cma.py (1)
    make_loss (106-115)
  src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py (1)
    mcmc_opt (25-128)
🪛 Ruff (0.14.10)
tests/noise_char/test_optimization.py
13-13: Unused noqa directive (non-enabled: S403)
Remove unused noqa directive
(RUF100)
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py
18-18: Unused noqa directive (non-enabled: S403)
Remove unused noqa directive
(RUF100)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: 🐍 Test (windows-2022) / 🐍 windows-2022
- GitHub Check: 🐍 Test (ubuntu-24.04) / 🐍 ubuntu-24.04
- GitHub Check: 🐍 Test (ubuntu-24.04-arm) / 🐍 ubuntu-24.04-arm
🔇 Additional comments (39)
tests/noise_char/__init__.py (1)
1-8: Looks good. Standard test package initializer with proper licensing and a clear docstring.
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/__init__.py (1)
1-8: Looks good. Standard package initializer with proper licensing and an accurate docstring describing the gradient-based optimization routines.
src/mqt/yaqs/noise_char/optimization_algorithms/__init__.py (1)
1-8: LGTM! The package initializer follows project conventions with proper license headers and a clear module docstring.
tests/core/libraries/test_gate_library.py (1)
188-200: LGTM! The test follows the same pattern as other gate tests in this file and provides adequate coverage for the Zero gate, including factory method consistency between GateLibrary and BaseGate.
src/mqt/yaqs/core/libraries/gate_library.py (3)
354-361: LGTM! The zero() classmethod follows the established pattern for other gate factory methods in BaseGate.
775-795: LGTM! The Zero gate class is well-implemented with proper documentation and follows the same structure as other gate classes (X, Y, Z, Id, etc.). The 2×2 zero matrix [[0,0],[0,0]] is correct.
1756-1763: LGTM! Clear aliases for noise model compatibility. The added aliases provide convenient alternative names that align with common quantum mechanics and noise modeling conventions:
- raising/lowering are standard terms for creation/annihilation operators
- pauli_x/pauli_y/pauli_z make the Pauli nature explicit
The comment clearly explains the rationale for these aliases.
tests/core/data_structures/test_noise_model.py (1)
59-95: LGTM! Thorough test coverage for CompactNoiseModel creation and expansion. The test validates compact process storage, expansion to per-site entries, strength propagation, and index mapping. The assertions comprehensively cover the expected behavior.
tests/noise_char/test_mcmc.py (4)
27-70: LGTM! Well-designed deterministic RNG for reproducible MCMC tests. The DummyRng class provides controlled randomness with clear semantics for both normal distribution samples and acceptance decisions. The index clamping ensures stability when sequences are exhausted.
94-120: LGTM! Test correctly validates best-state tracking. The test verifies that the MCMC optimizer accepts an improved proposal (moving from [1,1] to [0,0]) and properly tracks the global best with expected evaluation count.
123-151: LGTM! Early stopping behavior correctly validated. The test confirms that the optimizer stops after patience iterations without improvement. The expected call count of patience + 1 (initial + iterations) is correct.
154-180: LGTM! Bounds clipping correctly tested. The test verifies that out-of-bounds proposals [5.0, -5.0] are clipped to bounds [-1, 1] before evaluation, resulting in the expected [1.0, -1.0].
src/mqt/yaqs/core/data_structures/noise_model.py (2)
18-24: LGTM! Appropriate imports for the new functionality. The copy module enables deep copying to prevent external mutation, and the GateLibrary import is necessary for gate validation in CompactNoiseModel.
170-230: Well-documented class with clear behavior specification. The docstring thoroughly documents parameters, behavior, attributes, and error conditions. The validation order and expansion logic are clearly explained.
tests/noise_char/test_cma.py (4)
28-82: LGTM! Well-designed mock strategy for isolated CMA testing. The DummyStrategy class accurately mimics the CMAEvolutionStrategy interface, tracks calls appropriately, and provides controllable behavior via stop_after_first. The result tracking in tell() correctly identifies the best solution.
118-132: LGTM! Best solution selection correctly validated. The test verifies that cma_opt returns the solution with the lowest objective value from the candidates. With candidates [1.0, 2.0] (sum of squares = 5) and [-1.0, 0.5] (sum of squares = 1.25), the expected best is [-1.0, 0.5] with value 1.25.
135-156: LGTM! Convergence detection test validates early stopping. The test confirms that setting objective.converged = True after 2 calls triggers early termination, with only 1 tell() call made.
159-174: LGTM! Bounds propagation correctly tested. The test verifies that bounds are correctly converted to list form and passed to the CMA strategy options.
tests/noise_char/test_optimization.py (5)
29-101: LGTM! Comprehensive parameter container for test setup. The Parameters class provides a well-documented, centralized configuration for test scenarios with sensible defaults for lightweight testing.
103-162: LGTM! Well-structured test helper for LossClass instantiation. The helper correctly constructs all necessary components (Hamiltonian, initial state, observables, simulation parameters, noise model, propagator) and returns a configured LossClass instance ready for testing.
182-209: LGTM! History management and reset functionality properly tested. The test validates that reset() clears state and set_history() correctly populates history attributes.
458-526: LGTM! Restart file loading correctly tested. The test validates that adam_opt correctly loads saved state (x, m, v, histories) from a restart file and raises ValueError for missing restart files.
529-618: LGTM! Latest restart file selection and cleanup behavior tested. The test verifies that the optimizer selects the restart file with the highest iteration number and properly cleans up old restart files when restart is disabled.
tests/noise_char/test_propagation.py (2)
77-156: LGTM! Comprehensive propagator setup helper. The helper correctly initializes all components needed for propagation testing and returns them in a well-documented tuple for flexible test usage.
182-246: LGTM! Comprehensive error handling validation. The test thoroughly validates all expected ValueError conditions:
- Noise model exceeding Hamiltonian sites
- Observable list exceeding Hamiltonian sites
- Running without setting observables
- Mismatched noise model at runtime
Each error message is verified using re.escape for safe regex matching.
tests/noise_char/test_bayesian.py (4)
1-95: LGTM! Well-structured test infrastructure. The MockLossClass correctly implements the LossClass interface (a callable returning a (loss, grad, sim_time) tuple), and the make_loss helper provides clean type-checker compatibility. Warning suppression is appropriately documented as external library warnings.
97-135: LGTM! Comprehensive acquisition function tests. Tests cover all supported acquisition functions (LEI, PI, UCB) and properly verify the invalid name error case with pytest.raises.
137-169: LGTM! Thorough basic functionality test. The test validates return types, shapes, bounds constraints, and evaluation count, all essential aspects of the Bayesian optimization interface.
227-353: LGTM! Good coverage of edge cases and parameter variations. Tests appropriately cover 1D/2D/3D inputs, custom parameters (dtype, device), negative bounds, and objective improvement behavior. The dtype assertion on line 242 correctly allows both float32 and float64.
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/mcmc.py (2)
1-43: LGTM! Clear module documentation and function signature. The docstring comprehensively documents all parameters with appropriate defaults.
76-128: LGTM! Solid MCMC implementation. The algorithm correctly:
- Initializes and tracks global best separately from chain state
- Applies bounds clipping after proposals
- Implements standard Metropolis-Hastings acceptance
- Supports early stopping via patience counter
- Anneals both temperature and step size
tests/noise_char/test_characterizer.py (2)
30-102: LGTM! Clear parameter container for test configuration. The Parameters class provides sensible defaults for a lightweight test scenario (1 site, short simulation time). The docstring thoroughly documents all attributes.
199-218: LGTM! Test exercises the key Characterizer workflow. The test validates initialization, attribute types, and a single optimization iteration. The assertions confirm optimal_model is properly constructed after adam_optimize.
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py (1)
159-214: LGTM! Core Bayesian optimization loop is well-structured. The implementation correctly:
- Scales inputs from unit cube to bounds
- Negates objectives for maximization-based GP
- Fits GP with proper transforms (Normalize, Standardize)
- Checks convergence flag for early stopping
- Returns values in original (minimization) scale
src/mqt/yaqs/noise_char/characterizer.py (2)
57-76: Class-level mutable default values can cause shared state issues. Declaring mutable defaults (list[float] | None = None) at class level is fine when they default to None, but for documentation clarity and to prevent accidental shared state if ever changed to [], consider initializing these in __init__ instead. The current pattern is acceptable since the defaults are None, but this is worth noting for future maintenance.
212-357: LGTM! Clean delegation to optimization routines. The cma_optimize, bayesian_optimize, and mcmc_optimize methods properly delegate to their respective optimizers and consistently construct optimal_model from the best parameters.
src/mqt/yaqs/noise_char/propagation.py (1)
317-335: LGTM! Simulator invocation and result extraction. The run method correctly constructs a new AnalogSimParams, invokes the simulator, and extracts observable trajectories.
src/mqt/yaqs/noise_char/optimization_algorithms/gradient_based/adam.py (2)
141-206: LGTM! Well-implemented Adam optimizer with checkpointing. The optimization loop correctly implements:
- Standard Adam moment updates with bias correction
- Checkpointing at every iteration for restart capability
- Performance logging
- Multiple convergence criteria (tolerance and diff_avg_history)
117-121: Document that restart assumes prior evaluation history exists. Restoring obs_array from the pickle assumes at least one evaluation has already occurred in the previous run (since self.obs_array is only created during __call__, not in __init__). While self.t is safely initialized in __init__, self.obs_array is not documented as a standard attribute. Add a comment clarifying that restart files are only valid when the previous run completed at least one evaluation, or validate that saved state includes the required attributes.
work_dir = "test/scikit_characterizer/"
work_dir_path = Path(work_dir)
work_dir_path.mkdir(parents=True, exist_ok=True)
🧹 Nitpick | 🔵 Trivial
Prefer consistent use of Path objects.
Line 33 creates a string path, Line 34 converts it to a Path object, but Line 86 continues to use the string version. For consistency and cross-platform compatibility, prefer using work_dir_path throughout.
♻️ Suggested refactor for consistent Path usage
- work_dir = "test/scikit_characterizer/"
- work_dir_path = Path(work_dir)
+ work_dir_path = Path("test/scikit_characterizer/")
  work_dir_path.mkdir(parents=True, exist_ok=True)
And update Line 86:
- work_dir + "gammas.txt", ref_noise_model.strength_list, header="##", fmt="%.6f"
+ work_dir_path / "gammas.txt", ref_noise_model.strength_list, header="##", fmt="%.6f"Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @docs/examples/characterizer_example.md around lines 33 - 36, The code mixes
a string path variable work_dir with a Path object work_dir_path; update all
subsequent uses of the string (notably the usage around the original Line 86) to
use work_dir_path for consistency and cross-platform correctness, and where an
API requires a string pass work_dir_path.as_posix() or str(work_dir_path);
update any file/directory operations, joins, or reads/writes to reference
work_dir_path (or its string form when necessary).
| print("Optimizing ... ") | ||
| characterizer.adam_optimize(max_iterations=50) | ||
| print("Optimization completed.") |
🧹 Nitpick | 🔵 Trivial
Clarify that this example uses gradient-based optimization.
The PR title and objectives emphasize "Gradient-Free Noise Characterization," but this example demonstrates adam_optimize, which is a gradient-based method. While it's fine to include gradient-based examples, the documentation should clarify this distinction or show gradient-free alternatives (CMA, MCMC, Bayesian) as well.
Consider adding a note before Line 138:
Note: This example uses the Adam optimizer, which is a gradient-based method. For gradient-free optimization, you can use:
- `characterizer.cma_optimize(max_iterations=50)`
- `characterizer.mcmc_optimize(max_iterations=50)`
- `characterizer.bayesian_optimize(max_iterations=50)`
🤖 Prompt for AI Agents
In @docs/examples/characterizer_example.md around lines 138 - 140, Add a short
clarifying note before the example's optimization block stating that
adam_optimize is a gradient-based method and provide the gradient-free
alternatives to use (cma_optimize, mcmc_optimize, bayesian_optimize) with the
same max_iterations argument; reference the existing call to
characterizer.adam_optimize(max_iterations=50) and suggest replacing or showing
alternatives like characterizer.cma_optimize(max_iterations=50),
characterizer.mcmc_optimize(max_iterations=50), or
characterizer.bayesian_optimize(max_iterations=50) so readers understand the
distinction.
[[tool.mypy.overrides]]
- module = ["qiskit.*", "qiskit_aer.*", "opt_einsum.*"]
+ module = ["qiskit.*", "qiskit_aer.*", "opt_einsum.*", "botorch.*", "gpytorch.*", "torch", "cma"]
🧩 Analysis chain
🏁 Script executed:
# First, find and examine the pyproject.toml file
find . -name "pyproject.toml" -type f | head -5
Repository: munich-quantum-toolkit/yaqs
Length of output: 87
🏁 Script executed:
# Check the structure around line 113 and locate dependencies sections
grep -n "dependencies\|dependency-groups\|module.*botorch\|module.*gpytorch\|module.*torch\|module.*cma" pyproject.toml | head -30
Repository: munich-quantum-toolkit/yaqs
Length of output: 245
🏁 Script executed:
# Get the full context of lines around 113
sed -n '100,130p' pyproject.toml
Repository: munich-quantum-toolkit/yaqs
Length of output: 863
🏁 Script executed:
# Search for imports of the optimization libraries with context
echo "=== Searching for botorch imports ==="
rg -n --type=py 'import botorch|from botorch' -C3
echo "=== Searching for gpytorch imports ==="
rg -n --type=py 'import gpytorch|from gpytorch' -C3
echo "=== Searching for torch imports ==="
rg -n --type=py 'import torch|from torch' -C3
echo "=== Searching for cma imports ==="
rg -n --type=py '\bimport cma\b|from cma' -C3
Repository: munich-quantum-toolkit/yaqs
Length of output: 5416
🏁 Script executed:
# View the dependencies section
sed -n '38,80p' pyproject.toml
Repository: munich-quantum-toolkit/yaqs
Length of output: 1121
🏁 Script executed:
# View the dependency-groups section
sed -n '182,220p' pyproject.toml
Repository: munich-quantum-toolkit/yaqs
Length of output: 629
🏁 Script executed:
# Check if imports in bayesian.py are actually guarded
sed -n '1,40p' src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py
Repository: munich-quantum-toolkit/yaqs
Length of output: 1346
🏁 Script executed:
# Check if imports in cma.py are actually guarded
sed -n '1,40p' src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/cma.py
Repository: munich-quantum-toolkit/yaqs
Length of output: 1318
Add botorch, gpytorch, torch, and cma as optional dependencies.
The mypy overrides ignore missing imports for botorch, gpytorch, torch, and cma, but these libraries are not declared in the project's dependencies or dependency-groups sections. These libraries are imported at module level without try/except guards in:
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/bayesian.py (lines 20-30)
- src/mqt/yaqs/noise_char/optimization_algorithms/gradient_free/cma.py (line 26)
Users will encounter runtime ImportErrors when attempting to use the Bayesian and CMA-ES optimization algorithms without these libraries installed. Add them as optional dependencies:
Suggested addition to pyproject.toml
[dependency-groups]
optimization = [
"botorch>=0.9.0",
"gpytorch>=1.11",
"torch>=2.0.0",
"cma>=3.2.0",
]

for i, proc in enumerate(self.compact_processes):
    assert "name" in proc, "Each process must have a 'name' key"
    name = proc["name"]
    if not hasattr(GateLibrary, name):
        msg = f"Gate '{name}' not found in GateLibrary"
        raise ValueError(msg)
    msg = "Only 1-site noise processes are supported in CompactNoiseModel"
    assert getattr(GateLibrary, name)().interaction == 1, msg
    assert "sites" in proc, "Each process must have a 'sites' key"
    assert "strength" in proc, "Each process must have a 'strength' key"

    for site in proc["sites"]:
        self.expanded_processes.append({"name": proc["name"], "sites": [site], "strength": proc["strength"]})
        self.index_list.append(i)
🧹 Nitpick | 🔵 Trivial
Validation order could be optimized to fail fast.
Currently, the code validates name key presence (line 237), then checks GateLibrary (line 239), then validates interaction (line 243), and finally checks sites and strength (lines 244-245). If sites or strength are missing, the code will still instantiate the gate before failing. Consider checking all required keys first.
Proposed reordering for fail-fast validation
  for i, proc in enumerate(self.compact_processes):
      assert "name" in proc, "Each process must have a 'name' key"
+     assert "sites" in proc, "Each process must have a 'sites' key"
+     assert "strength" in proc, "Each process must have a 'strength' key"
      name = proc["name"]
      if not hasattr(GateLibrary, name):
          msg = f"Gate '{name}' not found in GateLibrary"
          raise ValueError(msg)
      msg = "Only 1-site noise processes are supported in CompactNoiseModel"
      assert getattr(GateLibrary, name)().interaction == 1, msg
-     assert "sites" in proc, "Each process must have a 'sites' key"
-     assert "strength" in proc, "Each process must have a 'strength' key"
🤖 Prompt for AI Agents
In @src/mqt/yaqs/core/data_structures/noise_model.py around lines 236 - 249, The
loop over self.compact_processes should validate required keys before touching
GateLibrary or instantiating gates: first assert "name", "sites", and "strength"
are present on proc, then check GateLibrary membership for proc["name"], then
retrieve the gate class once (e.g., cls = getattr(GateLibrary, name)) and assert
cls().interaction == 1 before expanding sites into self.expanded_processes and
appending to self.index_list; reorder the checks in the loop to perform key
presence checks first to fail fast and avoid creating gate instances when keys
are missing.
#
# Licensed under the MIT License

"""This module contains the optimization routines for noise characterization."""
🛠️ Refactor suggestion | 🟠 Major
Docstring scope is incomplete.
The module docstring describes only "optimization routines," but per the PR context, this module provides a comprehensive noise characterization subsystem including propagators, loss evaluators, a high-level Characterizer orchestrator, and multiple optimization strategies (both gradient-based and gradient-free). Consider revising the docstring to reflect the broader scope.
🤖 Prompt for AI Agents
In @src/mqt/yaqs/noise_char/__init__.py at line 8, The module docstring is too
narrow—update the top-level docstring in src/mqt/yaqs/noise_char/__init__.py to
describe the full noise characterization subsystem (not just "optimization
routines"): mention that the module exports propagators, loss evaluators, the
high-level Characterizer orchestrator, and multiple optimization strategies
(gradient-based and gradient-free), and briefly state their roles so readers
know the available APIs and responsibilities.
def make_loss(obj: object) -> LossClass:
    """Treat a simple callable/object as a LossClass for static type checking.

    Args:
        obj: The object to cast to LossClass.

    Returns:
        The object cast to LossClass type.
    """
    return cast("LossClass", obj)
🧹 Nitpick | 🔵 Trivial
Consider extracting make_loss to a shared test utility.
This helper is duplicated in test_cma.py (lines 106-115) and here. Extracting it to a shared conftest.py or test utilities module would reduce duplication and centralize maintenance.
🤖 Prompt for AI Agents
In @tests/noise_char/test_mcmc.py around lines 82 - 91, The helper function
make_loss is duplicated across tests; extract it into a shared test utility
(e.g., tests/conftest.py or tests/utils.py) and replace local copies with an
import or fixture. Move the def make_loss(obj: object) -> LossClass
implementation into the shared module, export it, then update the tests (both
test_mcmc.py and test_cma.py) to import make_loss from that module (or register
it as a pytest fixture) and remove the duplicated definitions in each test file.
from __future__ import annotations

import contextlib
import pickle  # noqa: S403
🧹 Nitpick | 🔵 Trivial
Remove unused noqa directive.
The static analysis tool flagged this as an unused directive (RUF100). The S403 rule (pickle security) is not enabled, so the directive has no effect.
Proposed fix
-import pickle # noqa: S403
+import pickle
🧰 Tools
🪛 Ruff (0.14.10)
13-13: Unused noqa directive (non-enabled: S403)
Remove unused noqa directive
(RUF100)
🤖 Prompt for AI Agents
In @tests/noise_char/test_optimization.py at line 13, Remove the unused inline
noqa by editing the import of pickle in tests/noise_char/test_optimization.py:
change the line importing pickle (the symbol "pickle") to not include the
trailing "# noqa: S403" since the S403 rule isn't enabled and the directive is
unnecessary; simply keep the import statement without the noqa comment.
for file_path in loss_function.work_dir.glob("opt_traj*.txt"):
    with contextlib.suppress(Exception):
        file_path.unlink()

for file_path in loss_function.work_dir.glob("restart*.pkl"):
    with contextlib.suppress(Exception):
        file_path.unlink()

for file_path in loss_function.work_dir.glob("performance*.txt"):
    with contextlib.suppress(Exception):
        file_path.unlink()
🧹 Nitpick | 🔵 Trivial
Test cleanup could use pytest's tmp_path fixture semantics.
The manual file cleanup using contextlib.suppress is functional, but pytest's tmp_path fixture automatically cleans up after each test. This cleanup code may be unnecessary unless files need to be removed mid-test for specific assertions.
🤖 Prompt for AI Agents
In @tests/noise_char/test_optimization.py around lines 445 - 455, The manual
cleanup loop using loss_function.work_dir.glob("opt_traj*.txt"),
("restart*.pkl"), and ("performance*.txt") is redundant with pytest's tmp_path
fixture; remove these three cleanup blocks and ensure the test uses a tmp_path
(or tmp_path_factory) for loss_function.work_dir so pytest auto-cleans between
tests, or if mid-test removal is actually required keep only the specific unlink
calls and add a short comment explaining why manual cleanup is necessary.
class Parameters:
    """Container for default test parameters used in a lightweight open-quantum-system propagation test."""

    def __init__(self) -> None:
        """Initialize default test simulation parameters.

        This constructor sets up a collection of attributes used for running a simple
        open-quantum-system propagation test. Attributes and their meanings:
        - sites (int): Number of sites/spins. Default: 1.
        - sim_time (float): Total simulation time. Default: 0.6.
        - dt (float): Time step for propagation. Default: 0.2.
        - order (int): Integration/order parameter for the propagator. Default: 1.
        - threshold (float): Numerical/truncation tolerance used in algorithms. Default: 1e-4.
        - ntraj (int): Number of trajectories to average over (stochastic methods). Default: 1.
        - max_bond_dim (int): Maximum bond dimension for tensor-network representations. Default: 4.
        - j (float): Coupling constant used in the model Hamiltonian. Default: 1.
        - g (float): Local field (e.g., transverse field) parameter. Default: 0.5.
        - times (np.ndarray): 1-D array of time points computed as np.arange(0, sim_time + dt, dt).
        - n_obs (int): Number of observables (3 per site for Pauli x, y, z). Computed as sites * 3.
        - n_jump (int): Number of jump operators (2 per site, e.g., lowering and Pauli-Z). Computed as sites * 2.
        - n_t (int): Number of time points (len(times)).
        - gamma_rel (float): Relaxation (dissipative) rate. Default: 0.1.
        - gamma_deph (float): Dephasing rate. Default: 0.15.
        - d (int): Local Hilbert-space dimension (e.g., spin-1/2 -> 2). Default: 2.

        Notes:
            - The provided defaults are chosen for lightweight tests and can be modified
              on the instance after construction if different test scenarios are required.
            - The 'times' array explicitly includes the final time by using sim_time + dt
              as the stop value in np.arange.
        """
        self.sites = 1
        self.sim_time = 0.6
        self.dt = 0.2
        self.order = 1
        self.threshold = 1e-4
        self.ntraj = 1
        self.max_bond_dim = 4
        self.j = 1
        self.g = 0.5

        self.times = np.arange(0, self.sim_time + self.dt, self.dt)

        self.n_obs = self.sites * 3  # x, y, z for each site
        self.n_jump = self.sites * 2  # lowering and pauli_z for each site
        self.n_t = len(self.times)

        self.gamma_rel = 0.1
        self.gamma_deph = 0.15

        self.d = 2
🧹 Nitpick | 🔵 Trivial
Consider extracting shared Parameters class to a conftest.py.
This class is nearly identical to the one in test_optimization.py. Extracting it to a shared conftest.py or test utilities module would reduce duplication and ensure consistency across test files.
🤖 Prompt for AI Agents
In @tests/noise_char/test_propagation.py around lines 24 - 74, The Parameters
class is duplicated across tests; move it to a shared test fixture by extracting
the class (Parameters) into tests/conftest.py (or a test utilities module) and
import or provide it as a fixture in both test_propagation.py and
test_optimization.py; update the tests to either import Parameters from the
shared module or declare a fixture named parameters that returns a Parameters()
instance so both test files use the single shared definition and eliminate
duplication.
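A hedged sketch of what the extraction could look like; the fixture name, file location, and the trimmed attribute set are assumptions, not the repository's actual layout:

```python
# tests/noise_char/conftest.py (hypothetical location)
import numpy as np
import pytest


class Parameters:
    """Shared lightweight defaults for the noise-characterization tests."""

    def __init__(self) -> None:
        self.sites = 1
        self.sim_time = 0.6
        self.dt = 0.2
        self.times = np.arange(0, self.sim_time + self.dt, self.dt)
        self.n_obs = self.sites * 3   # x, y, z per site
        self.n_jump = self.sites * 2  # lowering and pauli_z per site
        self.n_t = len(self.times)
        self.gamma_rel = 0.1
        self.gamma_deph = 0.15


@pytest.fixture
def parameters() -> Parameters:
    """Provide a fresh Parameters instance per test."""
    return Parameters()
```

Both test modules could then accept `parameters` as a fixture argument instead of constructing their own copies.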
```python
def test_propagatorwithgradients_runs() -> None:
    """Test that `propagation.tjm_traj` executes correctly and returns expected output shapes.

    This test verifies that:
    - The function can be called with a valid `SimulationParameters` instance.
    - The returned values `t`, `original_exp_vals`, and `d_on_d_gk` are NumPy arrays.
    - The shapes of the outputs match the expected dimensions based on simulation parameters.
    - The average minimum and maximum trajectory time is returned as a list of None values.
    """
    # Prepare SimulationParameters
    test = Parameters()

    _, _, _obs_list, _, ref_noise_model, propagator = create_propagator_instance(test)

    propagator.run(ref_noise_model)

    assert isinstance(propagator.times, np.ndarray)
    assert isinstance(propagator.obs_array, np.ndarray)

    assert propagator.times.shape == (test.n_t,)
    assert propagator.obs_array.shape == (test.n_obs, test.n_t)
```
🧹 Nitpick | 🔵 Trivial
Test function name references outdated class name.
The function test_propagatorwithgradients_runs and its docstring reference propagation.tjm_traj, but the code tests propagation.Propagator. Consider renaming for clarity.
Proposed rename

```diff
-def test_propagatorwithgradients_runs() -> None:
-    """Test that `propagation.tjm_traj` executes correctly and returns expected output shapes.
+def test_propagator_runs() -> None:
+    """Test that `propagation.Propagator.run` executes correctly and returns expected output shapes.
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def test_propagator_runs() -> None:
    """Test that `propagation.Propagator.run` executes correctly and returns expected output shapes.

    This test verifies that:
    - The function can be called with a valid `SimulationParameters` instance.
    - The returned values `t`, `original_exp_vals`, and `d_on_d_gk` are NumPy arrays.
    - The shapes of the outputs match the expected dimensions based on simulation parameters.
    - The average minimum and maximum trajectory time is returned as a list of None values.
    """
    # Prepare SimulationParameters
    test = Parameters()

    _, _, _obs_list, _, ref_noise_model, propagator = create_propagator_instance(test)

    propagator.run(ref_noise_model)

    assert isinstance(propagator.times, np.ndarray)
    assert isinstance(propagator.obs_array, np.ndarray)

    assert propagator.times.shape == (test.n_t,)
    assert propagator.obs_array.shape == (test.n_obs, test.n_t)
```
🤖 Prompt for AI Agents
In @tests/noise_char/test_propagation.py around lines 159 - 179, Rename the test
function and update its docstring to reflect the current API: change the test
function name test_propagatorwithgradients_runs to something like
test_propagator_runs and update the docstring references from
propagation.tjm_traj to propagation.Propagator (or the Propagator.run behavior)
so the name and description match the code that uses create_propagator_instance
and propagator.run; ensure any mentions of gradients or tjm_traj are removed or
revised to describe propagator.times and propagator.obs_array assertions.
```python
def lineal_function_1000(_i: int) -> int:
    """Return a constant value of 1000.

    This function takes an input parameter and returns a fixed value of 1000,
    regardless of the input value.

    Args:
        _i (int): An integer parameter (unused in the calculation).

    Returns:
        int: The constant value 1000.
    """
    return 1000
```
Function name typo: "lineal" should be "linear" or "constant".
The function name `lineal_function_1000` contains a typo. Since the function returns a constant value regardless of input, consider either:

- Renaming to `linear_function_1000` if you plan to make it actually linear (e.g., `return _i * 1000`)
- Renaming to `constant_function_1000` to accurately reflect its current behavior
📝 Suggested fix

```diff
-def lineal_function_1000(_i: int) -> int:
-    """Return a constant value of 1000.
+def constant_function_1000(_i: int) -> int:
+    """Return a constant value of 1000.
```

Then update the default parameter in `__init__`:

```diff
-    num_traj: Callable[[int], int] = lineal_function_1000,
+    num_traj: Callable[[int], int] = constant_function_1000,
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @src/mqt/yaqs/noise_char/loss.py around lines 28 - 40, The function name
lineal_function_1000 is misspelled and misleading because it returns a constant;
rename it to constant_function_1000 (or linear_function_1000 if you change
implementation to be linear) and update all references, including the default
parameter in the class __init__ that currently points to lineal_function_1000;
ensure the function signature remains (i: int) -> int and behavior unchanged if
choosing constant_function_1000, and run tests to catch any missed references.
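To make the distinction between the two candidate names concrete, a small runnable sketch; the `schedule_trajectories` consumer is hypothetical, and only the `num_traj` default mirrors the diff above:

```python
from collections.abc import Callable


def constant_function_1000(_i: int) -> int:
    """Return 1000 regardless of the iteration index (current behavior)."""
    return 1000


def linear_function_1000(i: int) -> int:
    """A genuinely linear alternative that scales with the iteration index."""
    return i * 1000


def schedule_trajectories(num_traj: Callable[[int], int] = constant_function_1000) -> list[int]:
    """Hypothetical consumer mirroring the num_traj default parameter."""
    return [num_traj(i) for i in range(3)]


print(schedule_trajectories())                      # [1000, 1000, 1000]
print(schedule_trajectories(linear_function_1000))  # [0, 1000, 2000]
```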
```python
        self.propagator.set_observable_list(ref_traj)

        self.ref_traj_array = copy.deepcopy(np.array([obs.results for obs in self.ref_traj]))
```
🧩 Analysis chain
🏁 Scripts executed against the munich-quantum-toolkit/yaqs repository:

```sh
find . -name "loss.py" -type f | head -5
cat -n ./src/mqt/yaqs/noise_char/loss.py | head -150
rg -n "class Observable" --type=py -A 10
sed -n '41,150p' src/mqt/yaqs/core/data_structures/simulation_parameters.py
find . -name "propagation.py" -o -name "*propagat*.py" | grep -v __pycache__ | head -10
rg -n "class Propagator" ./src/mqt/yaqs/noise_char/propagation.py -A 5
rg -n "def set_observable_list" ./src/mqt/yaqs/noise_char/propagation.py -A 15
sed -n '199,250p' ./src/mqt/yaqs/noise_char/propagation.py
rg -n "\.initialize\(" ./src/mqt/yaqs/noise_char/propagation.py -B 3 -A 3
rg -n "\.initialize\(" ./src/mqt/yaqs/noise_char/ -B 3 -A 3
rg -n "LossClass\(" ./tests/ -B 5 -A 5
rg -n "def run\(" ./src/mqt/yaqs/noise_char/propagation.py -A 30
rg -n "self.obs_traj =" ./src/mqt/yaqs/noise_char/propagation.py -B 5 -A 5
rg -n "obs.initialize\|observable.initialize" ./src/mqt/yaqs/ -B 3 -A 3
rg -n "initialize\(" ./src/mqt/yaqs/ -B 2 -A 2 | grep -A 5 -B 5 "Observable\|observable"
rg -n "propagator.run\|ref_traj=" ./tests/noise_char/ -B 2 -A 2 | head -100
```
Add validation to ensure reference trajectory observables have populated results before creating array.
Line 106 creates ref_traj_array by accessing obs.results for each observable in self.ref_traj. Although the Observable class initializes results to None and populates them during simulator.run(), this constructor does not validate that the observables have been initialized. If observables are passed without first running propagator.run(), the array will contain None values, causing silent data corruption or downstream errors in loss computation.
Add a validation check such as:
if any(obs.results is None for obs in self.ref_traj):
raise ValueError("Reference trajectory observables must have results populated. "
"Ensure propagator.run() has been called before LossClass instantiation.")Alternatively, document this precondition clearly in the docstring (lines 62-69).
Description
Noise characterization modules for optimizing the decay rates in the Lindblad equation. The following gradient-free optimization methods are available: MCMC, CMA-ES, and Bayesian optimization.
Checklist: