This repository has been archived by the owner on May 21, 2022. It is now read-only.
An example for an unsupervised loss is the density level detection loss (DLD) `L(x, t) = 1_{(-inf, 0)}((g(x) - p) * sign(t))`, where `t = f(x)` and `g` is the unknown density. Obviously this loss cannot be computed because `g` is unknown, but that's what surrogate losses are for. Interestingly, a lot of surrogate losses for DLD are supervised losses, as you observed.
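To make the DLD loss concrete, here is a numerical sketch where we pretend the density `g` is known (a standard normal, purely for illustration — in practice `g` is unknown, which is the whole point). The loss fires exactly when `sign(g(x) - p)` disagrees with the sign of the prediction `t = f(x)`:

```python
import math

def dld_loss(x, t, g, p):
    """Density level detection loss: 1 if (g(x) - p) and t have
    opposite signs (prediction disagrees with the level set), else 0."""
    return 1.0 if (g(x) - p) * math.copysign(1.0, t) < 0 else 0.0

# Illustration only: here g is a known standard normal pdf.
g = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
p = 0.1  # density level of interest

# x = 0 lies inside the level set {g > p}; predicting t > 0 is correct.
print(dld_loss(0.0, +1.0, g, p))  # 0.0
print(dld_loss(0.0, -1.0, g, p))  # 1.0
# x = 3 lies outside the level set; predicting t < 0 is correct.
print(dld_loss(3.0, -1.0, g, p))  # 0.0
```

The `g` and `p` here are stand-ins; a surrogate loss replaces this indicator with something computable from data alone.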
However, I think even then, practically speaking, the types would be different to implement, no? `x` is in general a matrix here, while in the supervised case `y` wouldn't be; and in the cases where `y` would be a matrix (multivariate regression), it would be interpreted differently, I think.
All in all I am not sure about this, as I haven't spent a lot of time dwelling on unsupervised losses. I simply wanted to keep the option open.
ahwillia changed the title to "Laundry List of Losses and Penalties" on Jul 1, 2016.
In our other comically long conversation (JuliaML/META#8 (comment)), we came up with the following types:

- `Cost` is a function `f(X, y, c)` where `c` can be anything
- `Loss` is a function `f(X, y, g(X))`, so `c` is some function of the data
- `SupervisedLoss` doesn't need `X`, so it's a function `f(y, g(X))`
- `UnsupervisedLoss` doesn't need `y`, so it's a function `f(X, g(X))`
- `Penalty` doesn't need `X` or `y`, so it's a function `f(W)` where `W` are some model params/weights
- `DiscountedFutureRewards` ... @tbreloff can you edit this?

IMPORTANT: everybody should edit this comment to check off `Cost`s as they are done and also to add new `Cost`s.
This post is a work-in-progress...
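The proposed hierarchy could be sketched roughly as follows (a Python illustration of the idea only — the actual JuliaML design uses Julia abstract types, and the concrete classes here are just examples):

```python
from abc import ABC, abstractmethod

class Cost(ABC):
    """f(X, y, c) where c can be anything."""
    @abstractmethod
    def value(self, X, y, c): ...

class Loss(Cost):
    """f(X, y, g(X)): c is specialized to the model output g(X)."""

class SupervisedLoss(Loss):
    """Ignores X: effectively f(y, g(X))."""
    def value(self, X, y, gX):
        return self._value(y, gX)
    @abstractmethod
    def _value(self, y, gX): ...

class UnsupervisedLoss(Loss):
    """Ignores y: effectively f(X, g(X))."""

class Penalty(ABC):
    """f(W): depends only on model params/weights W."""
    @abstractmethod
    def value(self, W): ...

# Example concrete types:
class L2DistLoss(SupervisedLoss):
    def _value(self, y, gX):
        return (gX - y) ** 2

class L2Penalty(Penalty):
    def value(self, W):
        return sum(w * w for w in W)

print(L2DistLoss().value(None, 1.0, 3.0))  # 4.0
print(L2Penalty().value([1.0, 2.0]))       # 5.0
```

The point of the hierarchy is dispatch: a `SupervisedLoss` can drop the `X` argument entirely, while a `Penalty` never sees the data at all.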
Supervised Losses: (implemented in Losses.jl)

- `L1DistLoss`
- `L2DistLoss`
- `LPDistLoss`
- `LogitDistLoss`, `L(y, t) = -ln(4 * exp(y - t) / (1 + exp(y - t))²)`
- `LogitMarginLoss`, `L(y, t) = log(1 + exp(-t*y))`
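As a quick sanity check, both logistic losses above can be evaluated directly (a sketch, not the Losses.jl implementation; the second formula is the margin-based logistic loss, `LogitMarginLoss` in LossFunctions.jl):

```python
import math

def logit_dist_loss(y, t):
    """Distance-based logistic loss: L(y, t) = -ln(4*exp(y-t) / (1+exp(y-t))^2)."""
    e = math.exp(y - t)
    return -math.log(4 * e / (1 + e) ** 2)

def logit_margin_loss(y, t):
    """Margin-based logistic loss: L(y, t) = log(1 + exp(-t*y))."""
    return math.log(1 + math.exp(-t * y))

# The distance loss is minimized (and zero) at t == y, since 4*1/(1+1)^2 = 1:
print(logit_dist_loss(2.0, 2.0))  # 0.0
# The margin loss is small for a confidently correct prediction (y*t large):
print(round(logit_margin_loss(1.0, 3.0), 4))  # 0.0486
```

Note the different roles of the arguments: the first penalizes the *distance* `y - t` (regression-style), the second the *margin agreement* `y * t` (classification-style).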
Unsupervised Losses:
Penalties: (implemented in ObjectiveFunctions perhaps?)
Constraints: These are actually penalties. The penalty is "infinitely bad" when the constraint is violated. (e.g. box constraints: `lower[i] < x[i] < upper[i]` for all i)

Combinations of Penalties: I would like an easy/automatic way of combining penalties instead of defining each combination by hand, but this may not be feasible.
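The "constraints are penalties" view can be sketched directly: a box constraint becomes an indicator penalty that is 0 inside the box and infinite outside, and combining penalties can then just be summation (an illustrative sketch only, not the ObjectiveFunctions API):

```python
import math

def box_penalty(x, lower, upper):
    """Indicator penalty: 0 if lower[i] < x[i] < upper[i] for all i, else +inf."""
    inside = all(lo < xi < up for xi, lo, up in zip(x, lower, upper))
    return 0.0 if inside else math.inf

def combine(*penalties):
    """Combine penalties by summation: one infinite term makes the total infinite."""
    return lambda x: sum(p(x) for p in penalties)

total = combine(lambda x: sum(xi * xi for xi in x),          # L2 penalty
                lambda x: box_penalty(x, [0, 0], [1, 1]))    # box constraint

print(total([0.5, 0.5]))  # 0.5  (L2 part only; constraint satisfied)
print(total([1.5, 0.5]))  # inf  (constraint violated)
```

Summation is one easy automatic combination rule; whether it covers all the combinations people want is exactly the open question above.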
Discounted Future Rewards:
@tbreloff -- halp!
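As a starting point for this bucket, the standard definition is the discounted sum R = Σₜ γᵗ·rₜ over a reward sequence; a minimal sketch (assumed definition — @tbreloff may have something else in mind):

```python
def discounted_return(rewards, gamma):
    """Sum of discounted future rewards: R = sum(gamma**t * r_t)."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 1.0, 1.0], 0.5))  # 1.75 = 1 + 0.5 + 0.25
```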
Some sources: