
evaluation #53

Open
who-m4n opened this issue Feb 14, 2023 · 0 comments
who-m4n commented Feb 14, 2023

Hi,
I'm a little confused by the metrics and the `evaluate_iterative_forecast` function defined in score.py. Since the output of `mean(xr.ALL_DIMS)` is the average over all dimensions, the function first extracts the values for a given step along the `lead_time` dimension, then shifts the `time` dimension by one `lead_time` step (why?), and finally computes the metric over all dimensions, including longitude, latitude, and time. That means the mean is taken over the time dimension as well, so the error shown for each lead time in Figure 2 of the paper is accumulated over all time steps. Am I right?
Moreover, could you please explain why the RMSE for the climatology and weekly-climatology baselines in Figure 2 is constant over lead time?
Furthermore, could you please explain what N_forecast is in the RMSE formula given in the paper?
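For concreteness, here is how I understand the computation, as a plain-numpy sketch (the shapes, variable names, and the shift offset are my own illustration, not the actual score.py code):

```python
import numpy as np

# Hypothetical shapes: forecast is (init_time, lead_time, lat, lon),
# truth is (time, lat, lon) with enough extra steps to cover all leads.
rng = np.random.default_rng(0)
fc = rng.normal(size=(10, 3, 4, 8))       # iterative forecast
truth = rng.normal(size=(10 + 3, 4, 8))   # verifying "truth" field

rmse_per_lead = []
for step in range(fc.shape[1]):
    # Select the forecasts for this lead time and align them with the
    # truth valid at init_time + (step + 1). This is the "shift": a
    # forecast initialised at t verifies against the truth at t + lead.
    pred = fc[:, step]                                # (init_time, lat, lon)
    obs = truth[step + 1 : step + 1 + fc.shape[0]]    # shifted truth

    # Mean over ALL remaining dims (init_time, lat, lon), analogous to
    # mean(xr.ALL_DIMS): the time dimension is averaged away, leaving
    # one scalar per lead time.
    rmse_per_lead.append(np.sqrt(((pred - obs) ** 2).mean()))

rmse_per_lead = np.array(rmse_per_lead)   # shape (n_lead_times,)
```

If this matches what score.py does, then each point on a Figure 2 curve is already an average over every verification time, which is what my first question is about.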
