Approximate then project hyper-reduction API #135
@eparish1 I think this looks good, thanks for taking a first stab at this!
I like this implementation. It's straightforward and looks like it'll be easy to use.
I think in general it looks good. I'm curious whether we want to have the sample_indices stuff explicit vs. making get_sample_indices() abstract; for DEIM, that would look like:
@jtencer Yeah, I agree. There isn't really much reason for having it explicit; it probably makes more sense to keep it generic, which also avoids the need for class inheritance.
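To make the "keep it generic, no inheritance" option concrete, a standalone index-selection function could look like the sketch below. This is not code from the thread: it's the standard greedy DEIM selection (first index at the largest-magnitude entry of mode 0, subsequent indices at the largest interpolation residual of each following mode), and the function name is a placeholder.

```python
import numpy as np

def deim_sample_indices(basis: np.ndarray) -> np.ndarray:
    """Greedy DEIM selection of interpolation rows from an (n, m) basis.

    Returns m row indices, one per basis column.
    """
    n_rows, n_modes = basis.shape
    indices = np.empty(n_modes, dtype=int)
    indices[0] = np.argmax(np.abs(basis[:, 0]))
    for j in range(1, n_modes):
        # Coefficients that interpolate mode j at the rows picked so far
        coeffs = np.linalg.solve(basis[indices[:j], :j], basis[indices[:j], j])
        # Pick the row where the interpolation error of mode j is largest
        residual = basis[:, j] - basis[:, :j] @ coeffs
        indices[j] = np.argmax(np.abs(residual))
    return indices
```

A free function like this can be passed into any hyper-reducer as a strategy, which is what avoids the class inheritance mentioned above.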
If ALL or most approx-then-project methods are based on the same steps, something like this could work:

```python
class ApproximateThenProjectHyperReducer(abc.ABC):
    @abc.abstractmethod
    def __init__(self):
        self.sample_indices_ = None
        self.test_basis_ = None

    def get_sample_indices(self):
        return self.sample_indices_

    def get_hyper_reduced_test_basis(self):
        return self.test_basis_

    def __call__(self, test_basis, function_snapshots, truncater) -> np.ndarray:
        self.function_vector_space_ = romtools.VectorSpaceFromPOD(function_snapshots, truncater=truncater)
        self.function_basis_ = self.function_vector_space_.get_basis()
        self.sample_indices_ = self._get_indices(self.function_basis_[0])
        # do the other steps


class ScalarDEIM(ApproximateThenProjectHyperReducer):
    def __init__(self, ...):
        # ....

    def _get_indices(self, a):
        # ...
```

so that we only specialize the various operations rather than the whole thing.
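For illustration, here is a self-contained, runnable version of that template-method idea. This is a sketch, not project code: `romtools.VectorSpaceFromPOD` is replaced by a plain thin SVD, and `MaxAbsSampler` is a made-up toy subclass that only overrides `_get_indices`.

```python
import abc
import numpy as np

class ApproximateThenProjectHyperReducer(abc.ABC):
    """Template: the pipeline is shared; subclasses only pick sample indices."""

    def __init__(self):
        self.sample_indices_ = None

    @abc.abstractmethod
    def _get_indices(self, basis_column: np.ndarray) -> np.ndarray:
        ...

    def get_sample_indices(self):
        return self.sample_indices_

    def __call__(self, function_snapshots: np.ndarray) -> np.ndarray:
        # Stand-in for romtools.VectorSpaceFromPOD: thin SVD of the snapshots
        basis, _, _ = np.linalg.svd(function_snapshots, full_matrices=False)
        self.sample_indices_ = self._get_indices(basis[:, 0])
        return self.sample_indices_

class MaxAbsSampler(ApproximateThenProjectHyperReducer):
    # Toy specialization: sample only the largest-magnitude entry of mode 0
    def _get_indices(self, basis_column):
        return np.array([np.argmax(np.abs(basis_column))])
```

The point of the pattern is that `__call__` never changes across methods; each algorithm contributes only its `_get_indices` logic.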
@fnrizzi For our "base" API, I want to have more flexibility than what you posted. Things that will change from method to method:
I'm very good with having what you posted be an intermediate class, and then have things derived from it, but I want a more generic base.
ok point taken, but why then use a class rather than a function?
My main thought is for this function:
In practice, this will involve accessing the function space and the sample indices, and the logic may change for different algorithms. I think it feels cleaner to keep this together. What do you think?
```python
from typing import Tuple


def GalerkinApproximateThenProjectHyperReducer(residual_snapshots: np.ndarray,
                                               test_basis: np.ndarray,
                                               target_sample_mesh_fraction: float,
                                               method='DEIM') -> Tuple[np.ndarray, np.ndarray]:
    '''
    For a function f approximated as Phi_f pinv(P Phi_f) P f, the purpose is to return
    Psi^T Phi_f pinv(P Phi_f), where Psi^T is the "test" basis,
    along with the sample mesh indices.
    Inputs:
        residual_snapshots: (n_vars, nx, n_snaps) np.ndarray, residual snapshots
        test_basis: np.ndarray, the test basis Psi
        target_sample_mesh_fraction: float, sample mesh fraction
    Outputs:
        hyper_reduced_test_basis: (n_vars, ns, K) np.ndarray, where ns is the number of sample mesh points
        sample_mesh_indices: (ns,) np.ndarray, sample mesh points
    '''
    pass


def LspgApproximateThenProjectHyperReducer(residual_snapshots: np.ndarray,
                                           target_sample_mesh_fraction: float,
                                           method='DEIM') -> Tuple[np.ndarray, np.ndarray]:
    '''
    For a function f approximated as Phi_f pinv(P Phi_f) P f, the purpose is to return
    Psi^T Phi_f pinv(P Phi_f), where Psi^T is the "test" basis,
    along with the sample mesh indices.
    Inputs:
        residual_snapshots: (n_vars, nx, n_snaps) np.ndarray, residual snapshots
        target_sample_mesh_fraction: float, sample mesh fraction
    Outputs:
        hyper_reduced_test_basis: (n_vars, ns, K) np.ndarray, where ns is the number of sample mesh points
        sample_mesh_indices: (ns,) np.ndarray, sample mesh points
    '''
    pass
```
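For reference, the quantity these stubs promise, Psi^T Phi_f pinv(P Phi_f), reduces to a few matrix products once the sample indices are known, because applying P to Phi_f is just row selection. The sketch below is illustrative only (the function name is made up, and it uses flattened 2D bases rather than the (n_vars, nx, ...) layout in the docstrings):

```python
import numpy as np

def hyper_reduced_test_basis(test_basis: np.ndarray,
                             function_basis: np.ndarray,
                             sample_indices: np.ndarray) -> np.ndarray:
    """Return Psi^T Phi_f pinv(P Phi_f) for flattened 2D bases.

    test_basis (Psi): (nx, K), function_basis (Phi_f): (nx, m),
    sample_indices: (ns,). Result has shape (K, ns).
    """
    # P Phi_f: row selection, no explicit selection matrix is ever formed
    sampled = function_basis[sample_indices, :]
    return test_basis.T @ function_basis @ np.linalg.pinv(sampled)
```

Applied to the sampled function values P f, the (K, ns) result yields the K generalized forces, which matches the approximation f ≈ Phi_f pinv(P Phi_f) P f in the docstrings.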
We should decide (1) if we want a common API for these methods and, if we do, (2) what it looks like. The main approximate then project hyper-reduction algorithms are:
My first stab is this:
@jtencer @pjb236 @fnrizzi @ekrathSNL Thoughts?