Apply model to new data? #605
Comments
I think you could do something like:
Essentially you forecast using the parameters from the bootstrap sample. You can use the same model specification applied to the original data. The only slight caveat has to do with the backcast value used to initialize the variance recursion.
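A minimal sketch of that approach (assuming a GARCH(1,1) specification and hypothetical series `boot_data` and `orig_data`; this is not the code from the thread):

```python
from arch import arch_model

# Fit on the bootstrap sample to obtain its parameters
boot_res = arch_model(boot_data, vol="GARCH", p=1, q=1).fit(disp="off")

# Fit the same specification on the original data; this result object is
# only needed so there is something to call forecast on
orig_res = arch_model(orig_data, vol="GARCH", p=1, q=1).fit(disp="off")

# Forecast from the original-data model using the bootstrap parameters
fcst = orig_res.forecast(params=boot_res.params, horizon=5)
print(fcst.variance.iloc[-1])
```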
Got it, I think that makes sense. I will play around with your setup with my data, appreciate it!
@bashtage I have one more question to make sure I'm understanding how this works, particularly how passing existing params works. I set up an example similar to my code above (see below for the new code). I would've thought that passing the original model's params to itself when creating a forecast would give the same results as not passing params, but I'm getting different forecasts for all periods > 1 step ahead. So I'm trying to figure out what I don't understand here.
What I get for h=1 is 0.880092, but for h=2, fcst1 is 1.163483 while fcst2 is 1.160607.
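Something like the following illustrates the comparison being described (a sketch with an assumed `data` series and GARCH(1,1) spec; the actual code was posted in the thread but is not reproduced here):

```python
import numpy as np
from arch import arch_model

res = arch_model(data, vol="GARCH", p=1, q=1).fit(disp="off")

# Forecast with the fitted parameters used implicitly...
fcst1 = res.forecast(horizon=2, method="bootstrap")

# ...and again, passing those same parameters explicitly
fcst2 = res.forecast(params=res.params, horizon=2, method="bootstrap")

# h=1 agrees, but h=2 can differ because each call draws fresh
# random residuals for the bootstrap paths
print(fcst1.variance.iloc[-1])
print(fcst2.variance.iloc[-1])
```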
The bootstrap method uses random numbers, so you need to make sure you reset the seed of the NumPy random singleton generator. The random draws only matter for horizon 2 or larger; for horizon 1 the forecasts are always the same.
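In code, that amounts to reseeding NumPy's global generator immediately before each forecast call (continuing the sketch above):

```python
np.random.seed(0)
fcst1 = res.forecast(horizon=2, method="bootstrap")

np.random.seed(0)
fcst2 = res.forecast(params=res.params, horizon=2, method="bootstrap")

# With identical seeds, the bootstrap draws match and the
# h=2 forecasts now agree as well
assert np.allclose(fcst1.variance.iloc[-1], fcst2.variance.iloc[-1])
```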
Ahh, that makes sense. Tried that out and it works as expected now. Thanks!
Hi @bashtage @msquaredds, I think I am running into a similar issue, but with two minor differences. Would you be able to give me some insight on how to handle it? I would like to estimate a GARCH(1,1) model on training data and generate one-step-ahead (out-of-sample) forecasts on a separate testing sample. One difference from the example above is that I am using an AR(1) for the mean model, and I think it is preferable to use the analytical forecasts. Also, on the testing sample I would like to apply a rolling scheme where I use Y(t-1) to predict the mean and variance of Y(t) over the whole sample. I came up with something along these lines, which should give me the first forecast in the test sample, using the last observation of the training sample.
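One possible sketch of this setup (not from the thread): fit the AR(1)-GARCH(1,1) on the training sample only, then build the same model on the full sample and request analytic one-step-ahead forecasts starting at the first test observation, passing the training parameters. `returns` and `split` are assumed names.

```python
from arch import arch_model

# returns: full sample; split: integer position of the first test observation
train = returns.iloc[:split]

# Estimate AR(1) mean with GARCH(1,1) variance on the training data only
train_res = arch_model(train, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")

# Rebuild the same model on the full sample so the test observations are
# available, then forecast with the training parameters held fixed
full_res = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
fcsts = full_res.forecast(params=train_res.params, horizon=1,
                          start=split, method="analytic")

print(fcsts.mean.dropna().head())      # one-step-ahead conditional means
print(fcsts.variance.dropna().head())  # one-step-ahead conditional variances
```

An alternative that avoids re-estimating on the full sample is the model's `fix()` method, e.g. `arch_model(returns, mean="AR", lags=1).fix(train_res.params)`, which returns a fixed-parameter result whose `forecast` can be used the same way.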
I was wondering if there's a way to create a model and then apply it to new data.
What I'm trying to do is: bootstrap the data, create/fit a model on a bootstrap sample, then apply that model to the original data. At first I thought the new data could be passed in to "forecast", as it is in some other Python packages, but I didn't see anything in the docs for that.
I did see that params can be passed in, so I'm wondering if that's the correct approach. How I was visualizing that is: bootstrap the data, create/fit a model, then create/fit a second model on the original data and forecast with it while passing the first model's params. This seems a little convoluted since, conceptually, I'm not sure why I'd need to fit the second model, but since .forecast is a method on the fit result, it seems necessary.
I tried to build up to seeing if the params approach would work, but I'm running into an issue. I'll share the general idea/code here to see if that sheds light on anything obvious, but if not I can create a full example too. Basically I was creating/fitting a model and then forecasting with it (not on other data, just on the same data), which gave me a set of parameters and forecasts. I then re-attempted the forecast and set the params to be the same as the original model, but was getting different forecast values (everything else was kept the same).
Below I'm getting different values for self.fcst and test_fcst:
So my question is whether either of these approaches (or any other pre-existing approach) is correct and, if the params approach is correct, whether someone has insight on what I'm doing wrong.