
\begin{document}

\title{Autoregressive Model Analysis and Multistep Forecast of Apple Stock Data}
\author{Lennart Lülsdorf}
\date{\today}

\maketitle
\section{Introduction}

In this project, Apple stock data is analyzed using time series econometrics methods. The
project is structured as follows: first, the three best-fitting autoregressive AR(p) models for
approximating, i.e.\ one-step forecasting, the Apple stock data are presented. For this
approximation, only values of the original data, namely the close price of the Apple stock, are
used as input for the forecast.\\
This is followed by an analysis of the extent to which an AR(p) process can be fitted to the
Apple data. To this end, the stationarity of the differenced Apple data is examined, as well as
whether the time series has long or short memory.\\
Finally, a multi-step forecast is presented, in which each forecasted value is used as input for
the next prediction. The failure of this approach to capture Apple stock dynamics beyond a
one-step forecast is analyzed, and a discussion of the overall feasibility of fitting an AR model
to Apple stock data completes the project.

\section{Top AR(p) Approximations}

In the first graph of Figure 1, the three best AR model fits in the sense of the Akaike
Information Criterion (AIC) are shown. The second graph visualizes the residuals of these AR
models. Overall, the figure shows that the one-step forecasts closely follow the original data.
However, the variance of the residuals increases over time, suggesting that the models struggle
to maintain forecast accuracy over longer periods.

\begin{figure}[H]
    \centering
    \includegraphics[scale=1.8, width=\textwidth, trim=10 10 10 10, clip]
    {../bld/plots/top_ar_models_plot.pdf}
    \caption{Comparison of the top-performing AR models}
    \label{fig:top_ar_models}
\end{figure}

\noindent Table 1 contains the metrics of the best AR(p) processes in terms of their AIC.
Notice that the AR(p) models were fitted on the differenced close price of the Apple stock
data, since the p-value of the Augmented Dickey-Fuller (ADF) test showed that the original
close price series is nonstationary.\\
Therefore, the AR coefficients had to be integrated to approximate the original time series,
which increases the probability of accumulated errors. Given the AIC, the AR(1) process fitted
the Apple data best (Table 1). In total, values of p from 1 to 12 were tested.
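The selection step can be sketched in a few lines. The following is a minimal, self-contained illustration (the project's actual pipeline is not shown here, so the function names and the simulated stand-in for the differenced close price are assumptions): each AR(p) model is fitted by conditional least squares and compared via a Gaussian AIC.

```python
import numpy as np

def fit_ar(x, p):
    """Conditional least-squares fit of an AR(p) model; returns (coeffs, AIC)."""
    y = x[p:]
    # Design matrix: intercept plus the p most recent lagged values.
    X = np.column_stack([np.ones_like(y)] + [x[p - k:len(x) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    aic = len(y) * np.log(sigma2) + 2 * (p + 1)  # Gaussian log-likelihood up to a constant
    return beta, aic

# Simulated stand-in for the differenced close price (the real data is not shipped here).
rng = np.random.default_rng(0)
diff = np.zeros(500)
for t in range(1, 500):
    diff[t] = 0.4 * diff[t - 1] + rng.standard_normal()

aics = {p: fit_ar(diff, p)[1] for p in range(1, 13)}  # p = 1, ..., 12 as in the text
best_p = min(aics, key=aics.get)
```

Note that the conditioning sample shrinks slightly with p; a production implementation would fit all candidates on a common sample before comparing AICs.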

\input{../bld/models/top_models.tex}

\section{Memory Analysis}

This section addresses the question of to what extent it is possible to fit an AR model to the
differenced data. Therefore, the ADF test was used to check whether the differenced close price
is stationary. The results, shown in Table 2, indicate that the differenced series is likely
stationary, which is a necessary prerequisite for fitting AR models.
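The core of the test is the Dickey-Fuller regression. As a rough sketch (the augmentation lags of the full ADF test are omitted, and a simulated random walk stands in for the real close price; in practice a library routine such as statsmodels' \texttt{adfuller} would be used), the statistic is the t-value of $\rho$ in $\Delta x_t = \alpha + \rho\, x_{t-1} + \varepsilon_t$:

```python
import numpy as np

def df_stat(x):
    """t-statistic of rho in the Dickey-Fuller regression
    diff(x)_t = alpha + rho * x_{t-1} + e_t (augmentation lags omitted)."""
    dx = np.diff(x)
    X = np.column_stack([np.ones(len(dx)), x[:-1]])
    beta, *_ = np.linalg.lstsq(X, dx, rcond=None)
    resid = dx - X @ beta
    sigma2 = resid @ resid / (len(dx) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(1)
close = np.cumsum(rng.standard_normal(1000))  # random-walk stand-in for the close price

# A unit-root series typically stays above the 5% critical value (about -2.86),
# while its first difference falls far below it.
stat_levels = df_stat(close)
stat_diff = df_stat(np.diff(close))
```

With real data both statistics would be compared against the Dickey-Fuller critical values, which is what the ADF results in Table 2 report.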

\input{../bld/memory/diff_close_stat_test.tex}


\noindent After confirming stationarity, the next critical question is whether the differenced
time series exhibits short or long memory. Therefore, the Autocorrelation Function (ACF) of the
time series was computed. As visualized in Figure 2, the ACF decreases over time, which
indicates the characteristics of a process with short memory. However, a few outliers hint at
potential long-memory components.

\begin{figure}[H]
    \centering
    \includegraphics[scale=1.2, width=\textwidth, trim=10 10 10 10, clip]
    {../bld/plots/acf_plot.pdf}
    \caption{Autocorrelation Function (ACF) of the differenced time series with 95\% confidence
    bands}
    \label{fig:acf_plot}
\end{figure}
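A sample ACF of the kind shown in Figure 2 reduces to a few lines. The sketch below uses white noise as a stand-in for the differenced close price and the usual $\pm 1.96/\sqrt{n}$ approximation for the 95\% confidence band (the project's actual plotting code is not shown here):

```python
import numpy as np

def sample_acf(x, nlags):
    """Sample autocorrelation function for lags 0..nlags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = x @ x
    return np.array([1.0] + [(x[:-k] @ x[k:]) / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(2)
series = rng.standard_normal(500)       # stand-in for the differenced close price
acf = sample_acf(series, 20)
band = 1.96 / np.sqrt(len(series))      # approximate 95% confidence band
outliers = np.sum(np.abs(acf[1:]) > band)  # lags escaping the band, as seen in Figure 2
```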


\noindent Since the differenced close price is likely stationary, but the ACF indicates some
long-run effects, the Hurst exponent was computed (Table 3). A value of this exponent close to
0.5 indicates the absence of strong long-term dependencies. For the Apple stock data, a value
of approximately 0.52 was obtained, which shows that the time series behaves almost randomly.
However, since the ACF indicates some long-term effects (Figure 2), this result might point to
a mixture of short-term autocorrelations with occasional persistence.
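There are several estimators for the Hurst exponent; one simple variant (a sketch only, not necessarily the method behind Table 3) reads $H$ off the scaling law $\operatorname{std}(\text{path}_{t+\tau} - \text{path}_t) \propto \tau^{H}$ of the re-integrated series:

```python
import numpy as np

def hurst(increments, max_lag=100):
    """Estimate the Hurst exponent from the dispersion of tau-step
    displacements of the integrated series: std ~ tau**H."""
    path = np.cumsum(increments)  # re-integrate the differenced series
    lags = np.arange(2, max_lag)
    disp = [np.std(path[lag:] - path[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(disp), 1)
    return slope

rng = np.random.default_rng(3)
iid = rng.standard_normal(5000)  # memoryless stand-in for the differenced close price
h = hurst(iid)                   # expected near 0.5 for a memoryless series
```

Values well above 0.5 would signal persistence and values well below 0.5 anti-persistence; the 0.52 found for the Apple data sits in the almost-random region.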


\section{Multistep Forecast}

Although the differenced close price was found to be stationary, the presence of a Hurst
exponent of 0.52 and some significant ACF values suggest that the series retains some degree of
long memory.

\noindent AR models are designed to capture short-term dependencies and assume that the impact
of past values decays rapidly. However, in a long-memory process, dependencies persist for a
longer time, meaning that an AR(p) model may fail to account for the full structure of the
series beyond a few steps ahead.

\noindent In a one-step-ahead forecast, the AR model predicts the next value based solely on
observed historical data. In a multi-step forecast, by contrast, each predicted value is used as
an input for the next prediction. This recursive approach leads to error accumulation.
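The recursion can be sketched for an AR(1); the function names and the simulated stand-in series below are illustrative, not the project's actual pipeline:

```python
import numpy as np

def ar1_fit(x):
    """Conditional least-squares fit of x_t = c + phi * x_{t-1} + e_t."""
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    (c, phi), *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return c, phi

def recursive_forecast(x, steps):
    """Multi-step forecast: each prediction is fed back as the next input."""
    c, phi = ar1_fit(x)
    preds, last = [], x[-1]
    for _ in range(steps):
        last = c + phi * last  # the forecast replaces the unavailable observation
        preds.append(last)
    return np.array(preds), c, phi

rng = np.random.default_rng(4)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()  # stand-in differenced series

preds, c, phi = recursive_forecast(x, 50)
mu = c / (1 - phi)  # unconditional mean of the fitted AR(1)
# With |phi| < 1 the recursion contracts geometrically toward mu: long-horizon
# forecasts carry no information about future shocks, which is one mechanism
# behind the error accumulation discussed above.
```

To forecast the price level itself, the predicted differences would additionally be cumulated onto the last observed close price, stacking the integration error on top of the recursion error.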

\noindent Figure 3 illustrates that the AR model fails to capture the long-term structure of
Apple stock price movements when applied to multi-step forecasting. This is due to error
accumulation and the model's inability to account for evolving market dynamics.


\begin{figure}[H]
    \centering
    \includegraphics[scale=1.8, width=\textwidth, trim=10 10 10 10, clip]
    {../bld/plots/multistep_forecast.pdf}
    \caption{Multi-step forecast for Apple stock price}
    \label{fig:apple_forecast}
\end{figure}

\section{Conclusion}

The autoregressive model analysis of Apple stock data can be summarized as follows:\\
First, the stationarity analysis, conducted using the Augmented Dickey-Fuller (ADF) test,
indicated that the original close price series was nonstationary, requiring differencing to
achieve stationarity. Second, the evaluation of different AR(p) models based on the
AIC revealed that an AR(1) model provided the best fit among the examined options.
Third, the study investigated the memory characteristics of the time series by computing the
Hurst exponent and analyzing the autocorrelation function (ACF). The results suggested that
the differenced time series exhibited a mixture of short-term autocorrelations with occasional
persistence.\\
Finally, the limitations of AR models for multi-step forecasting were assessed. While the AR(1)
model performs well for short-term forecasts, its accuracy deteriorates over multiple steps
due to error propagation and the inability to capture long-term dependencies.

\end{document}