I am using AutoMLForecast to tune several models on several dataframes. I'm not sure exactly what the fit function does, or what the most computationally efficient way is to train the models and get proper accuracy metrics.

Judging by the arguments to the fit function, cross-validation is used to select the final models, but the dataframe returned by the forecast_fitted_values function has no cutoff column, and its description says the predictions are in-sample, as opposed to out-of-sample cross-validation predictions.

My solution for generating good metrics is to take each final model, run a full cross-validation on it, combine all of the outputs into one dataframe, and evaluate that dataframe (sketched below). However, this is very computationally expensive and takes a long time.

Is there a way to output the out-of-sample cross-validation results that the fit function already computes, or even a way to just get metrics from such a dataframe, without having to run a full additional cross-validation after training?
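For reference, here is roughly what that workaround looks like. This is only a sketch: the models_ attribute (assumed to hold one fitted MLForecast per tuned model), the n_windows/h values, and the rmse/smape metrics are stand-ins for my actual setup, and I merge the per-model outputs rather than naively concatenating them so each model keeps its own prediction column.

```python
from utilsforecast.evaluation import evaluate
from utilsforecast.losses import rmse, smape

# auto_mlf was fitted earlier, e.g.:
# auto_mlf = AutoMLForecast(...).fit(df, n_windows=3, h=14, num_samples=20)

cv_frames = []
for mlf in auto_mlf.models_.values():
    # cross_validation refits and forecasts over every window,
    # which is the expensive step, repeated once per tuned model
    cv_frames.append(mlf.cross_validation(df, n_windows=3, h=14))

# align the per-model predictions on the shared keys so each model
# contributes one column to the combined dataframe
cv_all = cv_frames[0]
for extra in cv_frames[1:]:
    cv_all = cv_all.merge(extra, on=['unique_id', 'ds', 'cutoff', 'y'])

# out-of-sample accuracy per series and model
metrics = evaluate(cv_all.drop(columns='cutoff'), metrics=[rmse, smape])
print(metrics.head())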
AutoMLForecast performs cross-validation for each trial and reports the average across folds as the trial score. These results are available in the results_ attribute of the fitted object.
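A minimal sketch of reading those scores back, assuming results_ maps each model's name to its optuna Study (as in the mlforecast auto module; worth verifying against your installed version):

```python
# auto_mlf is the fitted AutoMLForecast from above
for name, study in auto_mlf.results_.items():
    best = study.best_trial
    # each trial's value is already the average loss across the CV folds
    print(f'{name}: best average CV loss = {best.value:.4f}')

    # all trial scores, e.g. to see the spread across sampled configs
    scores = [t.value for t in study.trials if t.value is not None]
    print(f'{name}: {len(scores)} completed trials')

    # the winning configuration is typically attached as a user attribute;
    # inspect best.user_attrs to confirm the key in your version
    print(best.user_attrs.get('config'))
```

Since each trial's value is the averaged out-of-sample score the tuner optimized, comparing configurations this way avoids a second cross-validation pass; as far as I can tell, you would only need to rerun cross_validation if you want the per-fold prediction dataframe itself.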