Sparse Coding is a framework for finding basis vectors and weights that decompose an input vector into a linear combination of a subset of the basis vectors. The framework is very similar to Empirical Risk Minimization in that there's a loss and a penalty, except that the penalty is on the output, not the parameters. I believe sparse coding refers specifically to a linear model, but of course any combination of transformation/loss/penalty could be used... we just need to add a penalty on the output. Should we just go ahead and add this to `RegularizedObjective`? It can default to `NoPenalty`.

ref:
http://ufldl.stanford.edu/wiki/index.php/Sparse_Coding
http://gabgoh.github.io/SARG/

cc: @gabgoh
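To make the proposal concrete, here is a minimal sketch of what such an objective evaluates to: a squared reconstruction loss plus an L1 penalty on the output coefficients `a`, with the dictionary `W` (the thread's `w`) held fixed. The function name and signature are hypothetical, not the actual `RegularizedObjective` API:

```julia
using LinearAlgebra

# Objective value for one input x: squared reconstruction error plus an
# L1 sparsity penalty on the *output* coefficients a (not on W).
function sparse_coding_objective(W::AbstractMatrix, a::AbstractVector,
                                 x::AbstractVector; λ::Real = 0.1)
    loss    = 0.5 * norm(x - W * a)^2   # reconstruction loss
    penalty = λ * norm(a, 1)            # sparsity penalty on the output
    return loss + penalty
end
```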
Rethinking this a bit, I don't think this is correct:
> The framework is very similar to Empirical Risk Minimization in that there's a loss and a penalty, except that the penalty is on the output, not the parameters
If you consider the input x as being approximated by the transformation y = w*a, then we want to ensure sparsity on a, not on y. Couple this with the fact that both w and a are "parameters" to be learned, and you have something very different from ERM. I think it requires a separate type/implementation.
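For concreteness, the joint problem being described is (in my notation, not from the thread) something like:

```latex
\min_{w,\,\{a_i\}} \; \sum_i \tfrac{1}{2} \lVert x_i - w\, a_i \rVert_2^2 \;+\; \lambda \sum_i \lVert a_i \rVert_1
```

In ERM only the parameters of a fixed hypothesis are optimized; here both the dictionary w and the per-example codes a_i are free variables, which is why it doesn't fit the existing abstraction.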
I think there are two separate problems to consider: the Dictionary Learning problem (learning w and a) and Sparse Coding proper (learning a with w fixed). The latter is also known as sparse regression, which could be ERM? (I'm not familiar with ERM, but it sounds like generalized regression.)
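A rough sketch of that split, under the same assumptions as the objective above (all names here are hypothetical, not proposals for the package API): the sparse-regression step solves the lasso for `a` with `W` fixed (here via ISTA), and dictionary learning alternates that step with a least-squares update of `W`:

```julia
using LinearAlgebra

soft(z, t) = sign(z) * max(abs(z) - t, 0)     # soft-thresholding (prox of t·‖·‖₁)

# (1) Sparse coding / sparse regression: W fixed, solve the lasso for a via ISTA.
function sparse_code(W, x; λ = 0.1, iters = 100)
    a = zeros(size(W, 2))
    η = 1 / opnorm(W)^2                       # step size from the Lipschitz constant ‖W‖²
    for _ in 1:iters
        a = soft.(a .- η .* (W' * (W * a .- x)), η * λ)
    end
    return a
end

# (2) Dictionary learning: alternate the coding step with a least-squares W update.
function dict_learn(X; k = 10, λ = 0.1, outer = 20)
    W = randn(size(X, 1), k)
    A = zeros(k, size(X, 2))
    for _ in 1:outer
        for j in 1:size(X, 2)                 # code each column of X against current W
            A[:, j] = sparse_code(W, X[:, j]; λ = λ)
        end
        W = X * pinv(A)                       # minimize ‖X - W·A‖_F over W
        for c in 1:k                          # keep columns at unit norm (standard in dictionary learning)
            n = norm(W[:, c])
            n > 0 && (W[:, c] ./= n)
        end
    end
    return W, A
end
```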