\documentclass{article}

\usepackage{booktabs}
\usepackage{float}
\usepackage{graphicx}
\usepackage[utf8]{inputenc}
\usepackage{caption}
\usepackage{subcaption}
\usepackage[a4paper, margin=0.5in]{geometry}
\usepackage{titlesec}
\usepackage{setspace}
\usepackage{ragged2e}
\usepackage{fancyhdr}

\titleformat{\section}{\Large\bfseries}{\thesection}{1em}{}
\titleformat{\subsection}{\large\bfseries}{\thesubsection}{1em}{}

\captionsetup{font=small, labelfont=bf}

\justifying
\setlength{\parindent}{15pt}
\raggedbottom

\pagestyle{fancy}
\fancyhf{}
\fancyfoot[C]{\thepage}
\renewcommand{\headrulewidth}{0pt}

\begin{document}
\title{Apple Stock AR-Process Analysis and Multistep Forecasting}
\author{Lennart Epp}
\date{\today}

\maketitle

\pagenumbering{gobble}

\tableofcontents
\listoffigures
\listoftables

\newpage
\pagenumbering{arabic}

\section{Introduction}

In this project, Apple stock data is analyzed using time series econometric methods. The
paper is structured as follows: first, I present the top three best-fitting AR(p) models for
one-step forecasting of the Apple stock data. For this forecast, only values of the original
series, namely the close price of the Apple stock, are used as input.\\
Then, I analyze to what extent it is possible to fit an AR(p) process to the Apple data.
For this, I check whether the differenced series is stationary, and then whether the
time series has long or short memory.\\
Lastly, I present a multi-step forecast, where each forecasted value is used as input
for the next prediction. I analyze why this approach fails to capture Apple stock dynamics
beyond a one-step forecast and discuss the overall feasibility of fitting an AR model to Apple stock data.

\section{Top AR(p) Approximations}

The following figure shows the three best AR model fits in terms of the Akaike Information
Criterion, together with their residual plots. The one-step forecasts closely follow the
original data. However, the variance of the residuals increases over time, suggesting that
the model struggles to maintain forecast accuracy over longer periods.

\begin{figure}[H]
    \centering
    \includegraphics[width=\textwidth, trim=10 10 10 10, clip]
    {../bld/plots/top_ar_models_plot.pdf}
    \caption{Comparison of the top-performing AR models.}
    \label{fig:top_ar_models}
\end{figure}

\noindent The following table reports the metrics of the best AR(p) processes in terms of their AIC.
Note that the AR(p) models were fitted on the differenced close price, since the p-value of
the Augmented Dickey-Fuller test suggested differencing, indicating that the original close
price is likely not stationary.\\
The fitted AR dynamics therefore had to be integrated back to approximate the original time
series, which can lead to accumulated errors. As the table shows, the AR(1) process fits the
Apple data best in terms of AIC; in total, I tested lag orders up to $p = 12$.

\input{../bld/models/top_models.tex}

\section{Memory Analysis}

In this section, I check to what extent it is possible to fit an AR model on the differenced data.
To that end, I first used the ADF test to check whether the differenced close price is stationary.
The results, shown in the following table, indicate that the differenced series is likely
stationary. This is a necessary prerequisite for fitting AR models.

\input{../bld/memory/diff_close_stat_test.tex}

\noindent After confirming stationarity, the next critical question is whether the differenced time series
exhibits short or long memory. I therefore computed the autocorrelation function (ACF) of the
series. As the following plot shows, the ACF decays over time but still contains a few outliers,
which indicates that the series behaves like a short-memory process, possibly with some
long-memory components.
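\noindent For reference, the autocorrelation at lag $k$ is estimated here by the standard sample ACF,
\[
\hat{\rho}(k) \;=\; \frac{\sum_{t=k+1}^{T}\left(y_t-\bar{y}\right)\left(y_{t-k}-\bar{y}\right)}
{\sum_{t=1}^{T}\left(y_t-\bar{y}\right)^2},
\]
where $y_t$ denotes the differenced close price and $T$ the sample length. For a short-memory
process such as a stationary AR(p), $\hat{\rho}(k)$ should decay geometrically toward zero.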

\begin{figure}[H]
    \centering
    \includegraphics[width=\textwidth, trim=10 10 10 10, clip]
    {../bld/plots/acf_plot.pdf}
    \caption{Autocorrelation Function (ACF) of the differenced time series with 95\% confidence
    bands.}
    \label{fig:acf_plot}
\end{figure}

\noindent Since the differenced close price is likely stationary but the ACF indicates some long-run
effects, I also computed the Hurst coefficient, which takes a value of approximately 0.52. This
suggests that the time series behaves almost randomly, as a Hurst coefficient close to 0.5
indicates the absence of strong long-term dependencies. However, since the ACF points to some
long-term effects, this result could reflect a mixture of short-term autocorrelations with
occasional persistence.
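A minimal sketch of how such a Hurst estimate can be obtained is the classical rescaled-range (R/S) method below; the estimator actually used in the project may differ, and the input here is simulated white noise rather than the Apple data.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent via classical rescaled-range (R/S) analysis."""
    n = len(x)
    windows = [2 ** k for k in range(3, 20) if min_window <= 2 ** k <= n // 2]
    log_w, log_rs = [], []
    for w in windows:
        rs = []
        for start in range(0, n - w + 1, w):      # non-overlapping windows
            chunk = x[start:start + w]
            z = np.cumsum(chunk - chunk.mean())   # cumulative deviations
            r, s = z.max() - z.min(), chunk.std()
            if s > 0:
                rs.append(r / s)
        log_w.append(np.log(w))
        log_rs.append(np.log(np.mean(rs)))
    # The slope of log(R/S) against log(window size) estimates H.
    return np.polyfit(log_w, log_rs, 1)[0]

rng = np.random.default_rng(2)
h = hurst_rs(rng.normal(size=4096))  # white noise: estimate near 0.5
```

Values of $H$ near 0.5 indicate no long-range dependence, above 0.5 persistence, and below 0.5 anti-persistence; note that the uncorrected R/S statistic is slightly biased upward in finite samples.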

\input{../bld/memory/hurst_exponent.tex}

\newpage

\section{Multistep Forecast}

Although the differenced close price was found to be stationary and the Hurst coefficient of
0.52 points to near-random behavior, some significant autocorrelation function (ACF) values
suggest that the series retains some degree of long memory.

\noindent AR models are designed to capture short-term dependencies and assume that the impact of past
values decays rapidly. However, in a long-memory process, dependencies persist for a longer
time, meaning that an AR(p) model may fail to account for the full structure of the series
beyond a few steps ahead.

\noindent In a one-step-ahead forecast, the AR model predicts the next value based solely on observed
historical data. In a multi-step forecast, each predicted value is used as input for the next
prediction. This recursive approach leads to error accumulation.
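\noindent In generic notation, the recursive $h$-step forecast of an AR($p$) with intercept
$\hat{c}$ and coefficients $\hat{\phi}_1,\dots,\hat{\phi}_p$ replaces unavailable future
observations by their own forecasts:
\[
\hat{y}_{T+h} \;=\; \hat{c} + \sum_{i=1}^{p} \hat{\phi}_i \, \tilde{y}_{T+h-i},
\]
where $\tilde{y}_t = y_t$ for $t \le T$ (observed data) and $\tilde{y}_t = \hat{y}_t$ for
$t > T$ (previous forecasts). Any one-step error therefore feeds into every subsequent step,
which is the source of the error accumulation.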

\noindent The following figure illustrates that the AR model fails to capture the long-term structure
of Apple stock price movements when applied to multi-step forecasting. This is due to error accumulation
and the model's inability to account for evolving market dynamics.

\begin{figure}[H]
    \centering
    \includegraphics[width=\textwidth, trim=10 10 10 10, clip]
    {../bld/plots/multistep_forecast.pdf}
    \caption{Multi-step forecast for Apple stock price.}
    \label{fig:apple_forecast}
\end{figure}

\section{Conclusion}

To conclude, my analysis went through the following steps:\\
First, the stationarity analysis, conducted using the Augmented Dickey-Fuller (ADF) test,
indicated that the original close price series was non-stationary, requiring differencing to
achieve stationarity.\\
Second, the evaluation of different AR(p) models based on the Akaike Information Criterion
(AIC) revealed that an AR(1) model provided the best fit among the examined options.\\
Third, the study investigated the memory characteristics of the time series by computing the
Hurst exponent and analyzing the autocorrelation function (ACF). The results suggested that
the differenced time series exhibited mainly short-memory behavior.\\
Finally, the limitations of AR models for multi-step forecasting were assessed. While the AR(1)
model performs well for short-term forecasts, its accuracy deteriorates over multiple steps
due to error propagation and the inability to capture long-term dependencies.

\end{document}