Base learners
- MLP 2-layer workflows are showing as untrained
- Add dropout (0.1 - 0.3), particularly as we increase the hidden unit size (see the sketch after this list)
- Fix penalty to 1e-6
- Warning in `finetune` says that racing is selecting with `rmse`, but we are using `mae` to select the best model. We should try to get racing to use `mae` (also sketched after this list)
- Are the epochs enough or too many? I can't figure out how to see the `mlp` training results by epoch. If we can see a few of them, we could get an idea of whether we are on point or need to increase/decrease
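A minimal sketch of how the dropout/penalty and racing-metric items above might look in tidymodels code. The engine (`brulee`), hidden-unit range, epoch count, and the `mlp_wflow`/`cv_folds` objects are placeholder assumptions, not the pipeline's actual settings; in `finetune`, racing filters on the first metric in the metric set, so listing `mae` first should make it race on `mae`.

```r
library(parsnip)
library(dials)
library(tune)
library(finetune)
library(yardstick)

# MLP spec: tune dropout over roughly 0.1-0.3, fix penalty at 1e-6
# (engine, hidden-unit range, and epochs are placeholder assumptions)
mlp_spec <- mlp(
  hidden_units = tune(),
  dropout      = tune(),
  penalty      = 1e-6,
  epochs       = 500
) |>
  set_engine("brulee") |>
  set_mode("regression")

mlp_grid <- grid_regular(
  hidden_units(range = c(16L, 64L)),
  dropout(range = c(0.1, 0.3)),
  levels = 3
)

# Racing filters candidates on the first metric in the metric set,
# so put mae first to have racing use mae
race_res <- tune_race_anova(
  mlp_wflow,              # hypothetical workflow wrapping mlp_spec + a recipe
  resamples = cv_folds,   # hypothetical resampling object
  grid      = mlp_grid,
  metrics   = metric_set(mae, rmse)
)
```

For the epochs question, the engine-level fit (e.g. via `extract_fit_engine()`) records loss by epoch for brulee models, which may be enough to judge whether the epoch count should go up or down; that is an assumption about the engine rather than something verified in this pipeline.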
Revisiting PCA
- We need to add the PCA recipe to the output as per PCA covariate reduction for prediction grid #420 at some point. As you said there, once we get the base and meta-learners to a good state.
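A minimal sketch of what carrying the PCA recipe along with the output might look like, assuming a `recipes`-based setup; the outcome name, selectors, threshold, and output structure are placeholders rather than the #420 implementation.

```r
library(recipes)

# PCA covariate-reduction recipe (placeholder outcome/predictors/threshold)
pca_rec <- recipe(pm25 ~ ., data = covariate_grid) |>  # covariate_grid is hypothetical
  step_normalize(all_numeric_predictors()) |>
  step_pca(all_numeric_predictors(), threshold = 0.9)

# Keep the prepped recipe alongside predictions so it can be re-applied
# (via bake()) to the prediction grid later
output <- list(
  recipe      = prep(pca_rec),
  predictions = preds  # hypothetical predictions object
)
```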
July 7, 2025
- `cancel` -> `scancel`
- Internal `parsnip::fit` in `fit_base_learner` (a finalize-then-fit sketch follows this list)
- Will need re-dispatch so "tuned" workflows can be applied to
- @kyle-messier Explore targets unit tests with `targets::tar_assert_*` (sketched after this list)
- Run through `fit_meta_learner` to see performance with better-performing base learners and State/Ecoregion dummy variables
- Calculate covariates at t-1 for covariate grid
- Calculate spatial neighbor values (TBD)
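A minimal sketch of the finalize-then-fit idea for "tuned" workflows, assuming tidymodels objects; the helper name, arguments, and the `metric = "mae"` choice are assumptions, not `fit_base_learner`'s actual internals.

```r
library(tune)
library(workflows)

# Hypothetical helper: when a workflow arrives with tuning results attached,
# finalize it with the best parameters (by mae) before the internal fit() call
fit_finalized <- function(wflow, tune_res, train_data) {
  best_params <- select_best(tune_res, metric = "mae")
  wflow |>
    finalize_workflow(best_params) |>
    fit(data = train_data)
}
```

And a sketch of `targets::tar_assert_*` helpers used as lightweight checks on pipeline outputs; the validator function and the specific checks are placeholders.

```r
library(targets)

# Hypothetical validator for a fitted base learner, callable inside a target
# command or a testthat test
check_base_learner <- function(fitted_wf) {
  tar_assert_inherits(fitted_wf, "workflow")
  tar_assert_true(workflows::is_trained_workflow(fitted_wf))
  fitted_wf
}
```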
Model updates to bring in line with literature
- Run base learners into `parsnip::gen_additive_mod` --> Get single output (i.e. no ensemble or probabilistic output); a sketch follows this list
- Calculate all of the geographic covariates on a regular spatial and temporal grid
- Calculate t-1 and nearest-neighbor "covariates" from the GAM prediction
- Run all of the base-learners with original + autocorrelated predictions
- Decide on whether we do a probabilistic meta-learner (brms) or a Monte Carlo ensemble
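A minimal sketch of the single-output GAM step, assuming the `mgcv` engine; the formula, smooth terms, and data object (`base_preds`) are placeholders.

```r
library(parsnip)

# GAM spec that collapses base-learner output into one deterministic prediction
gam_spec <- gen_additive_mod() |>
  set_engine("mgcv") |>
  set_mode("regression")

# With gen_additive_mod(), smooth terms go in the formula at fit time;
# pm25, lon, lat, time, and base_preds are hypothetical names
gam_fit <- gam_spec |>
  fit(pm25 ~ s(lon, lat, bs = "gp") + s(time, bs = "cr"), data = base_preds)
```

The t-1 and nearest-neighbor "covariates" could then be computed from `predict(gam_fit, ...)` on the regular spatial/temporal grid.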
CV sets
- Run a parallel (w.r.t. the pipeline, not CPUs) base-learner/meta-learner pipeline with leave-one-location-out CV
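A minimal sketch of building leave-one-location-out resamples with `rsample`; the data and grouping column (`site_id`) are placeholders.

```r
library(rsample)

# With the default v (one fold per group), group_vfold_cv gives
# leave-one-group-out CV, i.e. leave-one-location-out when grouping by site
lolo_cv <- group_vfold_cv(train_data, group = site_id)  # train_data, site_id hypothetical
```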