Abandoned; moved to another branch (improvements to TorchForwardSimulator) #479
The creation of COPA layouts relies on a number of specialized circuit structures which require non-trivial time to construct. In the context of iterative GST estimation with nested circuit lists (i.e. the default) this results in unnecessary repeated construction of these objects. This is an initial implementation of a caching scheme allowing for more efficient re-use of these circuit structures across iterations.
Cache the expanded SPAM-free circuits to reduce recomputing things unnecessarily.
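A minimal sketch of the caching idea, with hypothetical names (`circuit_cache`, `expand_fn`); the actual cache in this PR lives alongside the layout creation code:

```python
# Hypothetical sketch, not pyGSTi's actual API: memoize the expensive expansion step.
circuit_cache = {}

def expanded(circuit, expand_fn):
    """Return the expanded (SPAM-free) form of `circuit`, computing it at most once."""
    if circuit not in circuit_cache:                 # circuits are hashable, so they work as dict keys
        circuit_cache[circuit] = expand_fn(circuit)  # the expensive construction step
    return circuit_cache[circuit]
```

With nested circuit lists, each iteration's list is a superset of the previous one, so everything expanded at iteration k is a cache hit at iteration k+1.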
This updates the implementation of the SeparatePOVMCircuit container class. The most important change is adding an attribute for the full_effect_labels that avoids unneeded reconstruction. To ensure this is kept in sync with everything else, the povm_label and effect_labels attributes (which feed into full_effect_labels) have been promoted to properties with setters that keep full_effect_labels synced.
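A minimal sketch of the sync-on-set pattern described above; the class body and the way labels are combined are illustrative, not the exact pyGSTi implementation:

```python
class SeparatePOVMCircuitSketch:
    """Illustrative stand-in for SeparatePOVMCircuit showing the property/setter pattern."""

    def __init__(self, povm_label, effect_labels):
        self._povm_label = povm_label
        self._effect_labels = effect_labels
        self._rebuild_full_effect_labels()

    def _rebuild_full_effect_labels(self):
        # cache the combined labels so they are not reconstructed on every access
        self.full_effect_labels = tuple(f"{self._povm_label}_{e}" for e in self._effect_labels)

    @property
    def povm_label(self):
        return self._povm_label

    @povm_label.setter
    def povm_label(self, value):
        self._povm_label = value
        self._rebuild_full_effect_labels()  # keep the cached labels in sync

    @property
    def effect_labels(self):
        return self._effect_labels

    @effect_labels.setter
    def effect_labels(self, value):
        self._effect_labels = value
        self._rebuild_full_effect_labels()  # keep the cached labels in sync
```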
Adds a new method to OpModel that allows for doing instrument expansion and POVM expansion in bulk, speeding things up by avoiding recomputation of shared quantities. Also adds a pipeline for re-using completed or split circuits (as produced by the related OpModel methods), so that already completed work is re-used more efficiently.
Some minor performance oriented tweaks to the init for COPA layouts.
Refactor some of the ordered dictionaries in matrix layout creation into regular ones.
Start adding infrastructure for caching things used in MDC store creation and for plumbing in stuff from layout creation.
Performance optimization for the method for adding omitted frequencies, incorporating caching of the number of outcomes per circuit (which is somewhat expensive since it goes through the instrument/POVM expansion code). Additionally refactors some other parts of this code for improved efficiency, and makes a few minor tweaks to the method for adding counts to speed that up as well. This could probably be made a bit faster still by merging the two calls to reduce redundancy, but that is a future-us problem. Also makes a few micro-optimizations to the dataset code for grabbing counts, and adds a slicetools function for directly producing a numpy array from a slice (instead of needing to cast from a list). Miscellaneous cleanup of old commented-out code that doesn't appear to be needed any longer.
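A minimal sketch of the kind of slice-to-array helper described above; the actual slicetools function name and signature may differ:

```python
import numpy as np

def slice_to_array(s):
    """Build the index array for a slice directly, instead of casting from a Python list."""
    start = 0 if s.start is None else s.start
    step = 1 if s.step is None else s.step
    return np.arange(start, s.stop, step, dtype=np.int64)

# slice_to_array(slice(2, 10, 2)) -> array([2, 4, 6, 8])
```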
Fix a bug I introduced in the dataset code: indexing into something that could be None.
Another minor bug caught by testing.
Not sure why this didn't get caught on the circuit update branch, but oh well...
Fixes a minor error in split_circuits.
Improve the performance of __getitem__ when indexing into static circuits by making use of the _copy_init code path.
Implement caching of circuit structures tailored to the map forward simulator's requirements.
This finishes the process of refactoring expand_instruments_and_separate_povm from a circuit method to a method of OpModel.
Refactor expand_instruments_and_separate_povm to use the multi-circuit version under the hood to reduce code duplication.
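The delegation pattern, roughly; the bulk method name and return convention here are assumptions, not the exact pyGSTi signature:

```python
def expand_instruments_and_separate_povm(self, circuit, observed_outcomes=None):
    """Single-circuit wrapper that defers to the (hypothetically named) bulk method."""
    outcomes = None if observed_outcomes is None else [observed_outcomes]
    return self.bulk_expand_instruments_and_separate_povm([circuit], outcomes)[0]
```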
Refactor cache creation functions into static methods of the corresponding forward simulator class. Also add an empty base version of this method, and clean up a few miscellaneous things caught by review.
Includes a number of performance improvements and refinements to the implementation of the KM tree partitioning algorithm. Changes include:
- More efficient re-use of computed subtree weights and level partitions.
- A custom copying function that avoids the use of the incredibly slow deepcopy function.
- Less copying in general by changing when graph modifications are applied.
- Bisection instead of linear search for getting the initial KM partition (see the sketch below).
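For the last bullet, a generic illustration (not the actual partitioning code) of replacing a linear scan with bisection over prefix sums of the subtree weights:

```python
from bisect import bisect_left
from itertools import accumulate

def initial_cut(weights, target):
    """First index where the cumulative weight reaches `target`, found by bisection."""
    prefix = list(accumulate(weights))   # nondecreasing for nonnegative weights
    return bisect_left(prefix, target)

# initial_cut([3, 1, 4, 1, 5], 5) -> 2, since the prefix sums are [3, 4, 8, 9, 14]
```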
Change the default atom count heuristic so that only a single atom is created when there is a single processor and no memory limit.
Cleans up the lindbladerrorgen.py module:
- Removes large blocks of old commented-out code and debug statements.
- Adds new docstrings for methods that were previously missing them.
- First pass at bringing the existing docstrings up to date with the current implementation.
Update the implementation of the error generator representation update code for the dense rep case. The results are functionally identical, but measurably faster (e.g., einsum is ~2-3X faster than tensordot for this particular case). We also now do the entire error generator construction in a single shot instead of block-by-block to get additional benefits from vectorization.
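An illustrative comparison (not the actual pyGSTi code) of the two contraction styles for a summed dense construction, where a single einsum over all blocks replaces per-block tensordot work:

```python
import numpy as np

coeffs = np.random.rand(16)        # stand-in coefficients, one per error generator block
blocks = np.random.rand(16, 4, 4)  # stand-in dense basis matrices

errgen_tensordot = np.tensordot(coeffs, blocks, axes=(0, 0))  # contract over the block index
errgen_einsum = np.einsum('n,nij->ij', coeffs, blocks)        # same contraction, single shot

assert np.allclose(errgen_tensordot, errgen_einsum)
```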
The start of the composed effect module was 300 lines of old, commented-out implementation. This commit simply removes that fluff.
Refactor the single-parameter `set_parameter_value` method to call the multi-parameter implementation under the hood. Add additional performance tweaks to the logic for determining when to update an element of the cache: when the layer rules are ExplicitLayerRules and we are updating an effect known to belong to a POVM which has already been updated, we can skip the cache update for those effects.
Adds a few tweaks to the PrefixTable splitting algorithm to support multiple native state preps. Also fixes a bug in __contains__ comparisons between LabelTupTup and LabelStr, and adds support for setting model parameter values by their parameter labels.
Can now specify a parameter label in addition to an integer index for model parameter updates.
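A minimal sketch of label-or-index resolution, assuming the model exposes a `parameter_labels` sequence; the real lookup may differ:

```python
import numpy as np

def resolve_parameter_index(model, index_or_label):
    """Accept either an integer index or a parameter label and return the index."""
    if isinstance(index_or_label, (int, np.integer)):
        return int(index_or_label)
    return list(model.parameter_labels).index(index_or_label)
```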
Fixes an inefficiency in dm_mapfill_probs which was causing effect reps to be recalculated unnecessarily, a big performance penalty, especially for composed error generator type reps.
Slightly more efficient parity implementation (fewer operations), and add the compiler hint to inline the parity function. In profiling this makes a surprisingly big difference.
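The actual parity function lives in the Cython extensions; the snippet below is only a plain-Python illustration of one common fewer-operation approach (XOR-folding a 64-bit word):

```python
def parity64(x):
    """Fold the word onto itself with XOR until a single parity bit remains."""
    x ^= x >> 32
    x ^= x >> 16
    x ^= x >> 8
    x ^= x >> 4
    x ^= x >> 2
    x ^= x >> 1
    return x & 1

assert parity64(0b1011) == 1  # three set bits -> odd parity
```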
Adds in pickle handling for circuits to address an issue with hash randomization.
This commit makes changes to the implementation of the new prefix table splitting algorithm in an attempt to make it deterministic. Previously we made direct use of a number of networkx iterators which turned out to be set-like, and as such had nondeterministic order for their returned values. This is fine in the single-threaded setting, but with MPI it meant we would end up with different splittings on different ranks (I expect there to be many degenerate solutions to the splitting problem, so this isn't a crazy thing to see). This resulted in bugs when using MPI. Hopefully this should fix things...
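A generic example of the pattern applied here (illustrative, not the splitting code itself): impose an explicit sort on set-like networkx results so every MPI rank traverses them in the same order:

```python
import networkx as nx

G = nx.Graph([(0, 1), (2, 3), (4, 5)])

# connected_components yields sets, whose iteration order is not guaranteed to agree
# across processes; sorting the components and their members makes the order deterministic.
components = sorted((sorted(c) for c in nx.connected_components(G)), key=lambda c: c[0])
```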
This reverts commit 5f34a9a.
Add checks for the existence of a model parameter interposer when using the circuit parameter dependence code. Currently that option is not supported.
Attempt at updating the default atom count heuristic to favor having the same number of atoms as processors. Should be revisited at some point to confirm it performs as anticipated.
Remove some profiling and debug bindings for cython extensions.
This is my first pass at updating the default evotype behavior for casting so that we prefer dense representations when using a small number of qubits. Right now the threshold is arbitrarily set to 3 qubits, but this should be reevaluated as needed.
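A rough sketch of the preference being described; the 3-qubit threshold comes from the commit message, but the function name and structure below are placeholders rather than pyGSTi's actual casting logic:

```python
DENSE_QUBIT_THRESHOLD = 3  # arbitrary for now, per the commit message

def prefer_dense_reps(num_qubits):
    """Default to dense representations for small systems, where they tend to be faster."""
    return num_qubits <= DENSE_QUBIT_THRESHOLD
```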
Fix the evotype casting I accidentally broke with a typo.
Change the default maximum cache size from 0 to None for the map forward simulator.
…uit outcome probabilities in a slightly more vectorized way.
Moved to PR #613