fastai parity #259
Updated the list with pointers to existing implementations
I vote for reviving MLMetrics instead of rolling them into FluxTraining. I'm sure there would be many base Flux, Knet, and other library users who could make use of them. Currently every library has an incomplete subset of metrics with incompatible APIs (e.g. Avalon.jl). I would hope that we could get rid of those in favor of an "NNlib of metrics" instead.
Since most of these metrics are really distances in one form or another, wouldn't it be natural to use the existing implementations in Distances.jl? But maybe I'm missing something.
Yeah, I think implementing the distance part of it in Distances.jl makes a lot of sense.
I think there's still a need for a metrics package, because most of the classification metrics don't make sense in Distances.jl. There's also the case of domain-specific metrics like
Should there be some distinction between losses and metrics?
At the very least, the metrics package/namespace could reexport losses that are also metrics.
Yeah, I think the hierarchy would be Distances -> Metrics -> Losses. There will be losses that are not metrics (i.e. defined completely in a loss package), and losses that just reexport a metric. Similarly, there will be metrics that are completely defined in the metrics package, but many will reexport or build upon standard distances. To that end, I agree that we should make use of Distances.jl as much as possible. And if there is a metric that generalizes to a distance, then we can submit a PR to Distances.jl.
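A minimal sketch of that Distances -> Metrics -> Losses layering, for concreteness. All module and function names below are hypothetical (this is not an existing API); in the real layering, `mse` would presumably wrap Distances.jl's `msd` rather than being hand-rolled:

```julia
# Hypothetical sketch of the proposed Distances -> Metrics -> Losses layering.
# Module and function names are illustrative only.

module Metrics

using Statistics: mean

# A regression metric that is also a distance. In the proposed layering this
# would reexport or wrap a standard distance from Distances.jl.
mse(ŷ, y) = mean(abs2, ŷ .- y)

# A classification metric with no natural home in Distances.jl:
# it operates on discrete labels, not on points in a metric space.
accuracy(ŷ, y) = mean(ŷ .== y)

end # module Metrics

module Losses

# A loss that is just a reexport of a metric:
import ..Metrics: mse

# A loss that is not really a metric of model performance on its own
# (it needs predicted probabilities), so it lives only in the loss layer:
crossentropy(p̂, y; ϵ = eps(eltype(p̂))) = -sum(y .* log.(p̂ .+ ϵ)) / size(y, 2)

end # module Losses

# Usage:
Metrics.accuracy([1, 0, 1], [1, 1, 1])  # 2/3: two of three labels correct
Losses.mse([1.0, 2.0], [1.0, 4.0])      # 2.0
```

The key design point is that `Losses.mse` is the same function object as `Metrics.mse`, so downstream code (e.g. a training loop logging metrics) never sees two incompatible definitions.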
I agree with @darsnack! Every loss can be a metric (not in a strict mathematical sense, but as a measure of model performance), but not the reverse.
Also, if MLMetrics.jl is revived, FluxTraining.jl should depend on it. |
Updated the list |
@lorenzoh do you want to transfer stuff from here to FastAI.jl issues? |
I went ahead and transferred the issue to the FastAI.jl repo which should make it easier to track 👍 |
This issue tracks the progress on fastai parity.
Last updated 2021/08/11
Datasets
- `datasetpath`

Data pipelines
- `BlockMethod`

Models

Training
- training schedules
- mixed precision training
  - AFAIK this is currently in the works by the Flux.jl team
- distributed data parallel training
  - Lots of progress at DaggerFlux.jl. Needs to be integrated with FastAI.jl
- callbacks
  - early stopping (`EarlyStopping`)
  - checkpointing (`Checkpointer`): works but could use some improvements, such as conditional checkpointing
  - stopping on NaN loss (`StopOnNaNLoss`)
  - metrics and history tracking (`Metrics`, `Recorder`)
  - logging modalities
  - logging backends (`TensorBoardBackend`)
  - gradient accumulation
- metrics and loss functions
  - Many metrics and loss functions are still missing, see discussion below

Applications
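For the "conditional checkpointing" improvement mentioned under `Checkpointer`, the idea could be sketched as follows: only write a checkpoint when a user-supplied predicate on the monitored history holds (e.g. the validation loss reached a new minimum). All names here (`ConditionalCheckpointer`, `on_epoch_end!`) are hypothetical and do not reflect FluxTraining.jl's actual callback API:

```julia
# Hypothetical sketch of conditional checkpointing, assuming a callback that
# is handed a monitored value (e.g. validation loss) once per epoch.

mutable struct ConditionalCheckpointer
    dir::String
    condition::Function       # history::Vector{Float64} -> Bool
    history::Vector{Float64}  # monitored values seen so far
end

ConditionalCheckpointer(dir; condition) =
    ConditionalCheckpointer(dir, condition, Float64[])

# Record the new value; save a checkpoint only if the condition holds.
# `savemodel` is whatever serialization function the trainer provides.
function on_epoch_end!(cp::ConditionalCheckpointer, epoch, value, savemodel)
    push!(cp.history, value)
    if cp.condition(cp.history)
        savemodel(joinpath(cp.dir, "checkpoint_epoch_$(epoch).bson"))
        return true
    end
    return false
end

# Example condition: the latest value is a new minimum (loss improved).
improved(h) = length(h) == 1 || h[end] < minimum(h[1:end-1])

# Usage with a dummy "save" that just creates an empty file:
cp = ConditionalCheckpointer(mktempdir(); condition = improved)
saved = [on_epoch_end!(cp, e, loss, path -> touch(path))
         for (e, loss) in enumerate([1.0, 0.8, 0.9, 0.7])]
# saved == [true, true, false, true]: epoch 3 regressed, so no checkpoint
```

Since the condition is an arbitrary function of the history, the same hook covers "save every N epochs", "save on new best metric", and similar policies without new callback types.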