Approximate then project hyper-reduction API #135

Open

eparish1 opened this issue Feb 14, 2024 · 9 comments

@eparish1
Contributor

eparish1 commented Feb 14, 2024

We should decide (1) if we want a common API for these methods and, if we do, (2) what it looks like. The main approximate then project hyper-reduction algorithms are:

  • DEIM
  • Gappy POD w/ differing sampling strategies (oversampled DEIM, random sampling, eigen-vector sampling, etc.)

My first stab is this:

import abc
import numpy as np
import romtools


class ApproximateThenProjectHyperReducer(abc.ABC):
    @abc.abstractmethod
    def __init__(self):
        '''
        Members:
          sample_indices_ : integer array of size ns with sample mesh indices, where ns is the number of sample mesh points
        '''

    def get_sample_indices(self):
        return self.sample_indices_

    @abc.abstractmethod
    def get_hyper_reduced_test_basis(self, test_basis: np.ndarray) -> np.ndarray:
        '''
        For a function f approximated as Phi_f pinv(P Phi_f) P f, the purpose is to return
        Psi^T Phi_f pinv(P Phi_f), where Psi is the "test" basis.
        Different methods may implement this product differently, so we keep this abstract.

        Inputs: (n_vars, nx, K) np.ndarray, test basis
        Outputs: (n_vars, ns, K) np.ndarray, hyper_reduced_test_basis, where ns is the number of sample mesh points
        '''
        pass


# A concrete implementation would look something like this:
class ScalarDEIM(ApproximateThenProjectHyperReducer):
    '''
    Scalar-valued DEIM hyper-reducer. We only support ScalarDEIM for scalar-valued systems.
    '''

    def __init__(self, function_snapshots, truncater):
        assert function_snapshots.shape[0] == 1, "ScalarDEIM can only be used for single-state systems, your data tensor has " + str(function_snapshots.shape[0]) + " variables. Try VectorDEIM"
        self.function_vector_space_ = romtools.VectorSpaceFromPOD(function_snapshots, truncater=truncater)
        self.function_basis_ = self.function_vector_space_.get_basis()
        self.sample_indices_ = deim_get_indices(self.function_basis_[0])

    def get_hyper_reduced_test_basis(self, test_basis):
        deim_test_basis = deim_get_test_basis(test_basis[0], self.function_basis_[0], self.sample_indices_)
        return deim_test_basis

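For reference, here is a minimal sketch of what the deim_get_indices and deim_get_test_basis helpers referenced above could look like (the bodies below are illustrative assumptions, not an existing romtools API):

import numpy as np

def deim_get_indices(function_basis):
    # Hypothetical sketch of greedy DEIM point selection for a (nx, K) basis.
    nx, K = function_basis.shape
    indices = np.zeros(K, dtype=int)
    indices[0] = np.argmax(np.abs(function_basis[:, 0]))
    for k in range(1, K):
        # interpolate the k-th basis vector at the points selected so far
        coeffs = np.linalg.solve(function_basis[indices[:k], :k],
                                 function_basis[indices[:k], k])
        residual = function_basis[:, k] - function_basis[:, :k] @ coeffs
        indices[k] = np.argmax(np.abs(residual))
    return indices

def deim_get_test_basis(test_basis, function_basis, sample_indices):
    # Hypothetical sketch: assemble Psi^T Phi_f pinv(P Phi_f) for a single variable,
    # returned transposed so the shape is (ns, K) per the docstring convention above.
    sampled_function_basis = function_basis[sample_indices, :]   # P Phi_f
    projected = test_basis.T @ function_basis                    # Psi^T Phi_f
    return (projected @ np.linalg.pinv(sampled_function_basis)).T
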
@jtencer @pjb236 @fnrizzi @ekrathSNL Thoughts?

@pjb236
Contributor

pjb236 commented Feb 14, 2024

@eparish1 I think this looks good, thanks for taking a first stab at this!

@ekrathSNL
Collaborator

I like this implementation. It's straightforward and looks like it'll be easy to use.

@jtencer
Contributor

jtencer commented Feb 15, 2024

I think in general it looks good. I'm curious whether we want to have the sample_indices stuff explicit vs. making get_sample_indices() abstract; for DEIM, that would look like return deim_get_indices(self.function_basis_[0]) or some such.

@eparish1
Contributor Author

@jtencer Yeah, I agree. There isn't really much reason for having it explicit; it probably makes more sense to keep it generic, which also avoids the need for class inheritance.
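A quick sketch of that variant (get_sample_indices() abstract instead of a stored sample_indices_ member); illustrative only:

import abc
import numpy as np

class ApproximateThenProjectHyperReducer(abc.ABC):
    @abc.abstractmethod
    def get_sample_indices(self) -> np.ndarray:
        pass

    @abc.abstractmethod
    def get_hyper_reduced_test_basis(self, test_basis: np.ndarray) -> np.ndarray:
        pass

# and a DEIM implementation would just do, e.g.:
#     def get_sample_indices(self):
#         return deim_get_indices(self.function_basis_[0])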

@fnrizzi
Member

fnrizzi commented Feb 15, 2024

If ALL or most approx-then-project methods are based on Psi^T Phi_f pinv(P Phi_f), then rather than having the base class be just a pure interface, I would say this is one instance where we could put the "logic" of the operations inside the base class and have the concrete classes only specialize the various pieces.
Something like this:

class ApproximateThenProjectHyperReducer(abc.ABC):
    @abc.abstractmethod
    def __init__(self):
        self.sample_indices_ = None
        self.test_basis_ = None

    def get_sample_indices(self):
        return self.sample_indices_

    def get_hyper_reduced_test_basis(self):
        return self.test_basis_

    def __call__(self, test_basis, function_snapshots, truncater) -> np.ndarray:
        self.function_vector_space_ = romtools.VectorSpaceFromPOD(function_snapshots, truncater=truncater)
        self.function_basis_ = self.function_vector_space_.get_basis()
        self.sample_indices_ = self._get_indices(self.function_basis_[0])
        # do the other steps


class ScalarDEIM(ApproximateThenProjectHyperReducer):
    def __init__(self, ...): 
        # .... 
        
    def _get_indices(self, a):
        # ...                                   

so that we only specialize the various operations rather than the whole thing

@eparish1
Contributor Author

@fnrizzi For our "base" API, I want to have more flexibility than what you posted. Things that will change from method to method:

  1. How the vector space is constructed. (Also, I don't want a vector space to be passed to this class; I want to control its construction.)
  2. How Psi^T Phi_f pinv(P Phi_f) is computed. As a specific example, I am thinking of a split vector space where I don't want to waste a lot of time on zero ops; this also makes the code a lot easier (see the sketch below).

I'm good with having what you posted be an intermediate class, and then have things derived from it, but I want a more generic base.
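
For concreteness, a minimal sketch of one way such a variable-wise ("split") product could be formed, skipping the zero cross-variable blocks entirely (names and shapes below are illustrative assumptions, not an existing romtools API):

import numpy as np

def split_hyper_reduced_test_basis(test_basis, function_basis, sample_indices):
    # test_basis, function_basis: (n_vars, nx, K); sample_indices: (ns,)
    # Form Psi^T Phi_f pinv(P Phi_f) one variable at a time so the zero
    # off-variable blocks of a split vector space are never touched.
    blocks = []
    for v in range(test_basis.shape[0]):
        sampled = function_basis[v][sample_indices, :]           # P Phi_f for variable v
        projected = test_basis[v].T @ function_basis[v]          # Psi^T Phi_f for variable v
        blocks.append((projected @ np.linalg.pinv(sampled)).T)   # (ns, K)
    return np.stack(blocks)                                      # (n_vars, ns, K)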

@fnrizzi
Member

fnrizzi commented Feb 15, 2024

ok point taken, but why then use a class rather than a function?

@eparish1
Contributor Author

My main thought is for this function:

    @abc.abstractmethod
    def get_hyper_reduced_test_basis(self, test_basis: np.ndarray) -> np.ndarray:
        '''
        For a function f approximated as Phi_f pinv(P Phi_f) P f, the purpose is to return
        Psi^T Phi_f pinv(P Phi_f), where Psi is the "test" basis.
        Different methods may implement this product differently, so we keep this abstract.

        Inputs: (n_vars, nx, K) np.ndarray, test basis
        Outputs: (n_vars, ns, K) np.ndarray, hyper_reduced_test_basis, where ns is the number of sample mesh points
        '''
        pass

In practice, this will involve accessing the function space and sample indices, and the logic may change for different algorithms. I think it feels cleaner to keep this together. What do you think?

@eparish1
Contributor Author

eparish1 commented Apr 9, 2024

def GalerkinApproximateThenProjectHyperReducer(residual_snapshots: np.ndarray, test_basis: np.ndarray, target_sample_mesh_fraction: float, method='DEIM') -> tuple[np.ndarray, np.ndarray]:
    '''
    For a function f approximated as Phi_f pinv(P Phi_f) P f, the purpose is to return
    Psi^T Phi_f pinv(P Phi_f), where Psi is the "test" basis,
    along with the sample mesh indices.

    Inputs:
        (n_vars, nx, n_snaps) np.ndarray, residual snapshots
        (n_vars, nx, K) np.ndarray, test basis
        float, target sample mesh fraction
        str, hyper-reduction method
    Outputs:
        (n_vars, ns, K) np.ndarray, hyper_reduced_test_basis, where ns is the number of sample mesh points
        (ns,) np.ndarray, sample mesh indices
    '''
    pass


def LspgApproximateThenProjectHyperReducer(residual_snapshots: np.ndarray, target_sample_mesh_fraction: float, method='DEIM') -> tuple[np.ndarray, np.ndarray]:
    '''
    For a function f approximated as Phi_f pinv(P Phi_f) P f, the purpose is to return
    Psi^T Phi_f pinv(P Phi_f), where Psi is the "test" basis,
    along with the sample mesh indices.

    Inputs:
        (n_vars, nx, n_snaps) np.ndarray, residual snapshots
        float, target sample mesh fraction
        str, hyper-reduction method
    Outputs:
        (n_vars, ns, K) np.ndarray, hyper_reduced_test_basis, where ns is the number of sample mesh points
        (ns,) np.ndarray, sample mesh indices
    '''
    pass

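A hypothetical call site, just to illustrate the shapes going in and out of the proposed free functions (all values below are placeholders):

import numpy as np

# residual_snapshots: (n_vars, nx, n_snaps), test_basis: (n_vars, nx, K)
residual_snapshots = np.random.rand(1, 500, 40)
test_basis = np.random.rand(1, 500, 10)

hyper_reduced_test_basis, sample_indices = GalerkinApproximateThenProjectHyperReducer(
    residual_snapshots, test_basis, target_sample_mesh_fraction=0.1, method='DEIM')
# hyper_reduced_test_basis: (n_vars, ns, K); sample_indices: (ns,) sample mesh points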