
Commit 65c7bec

Minor changes

antcc committed Sep 19, 2022
1 parent 5765c53
Showing 2 changed files with 3 additions and 3 deletions.
chapters/bayesian-functional-regression.tex (1 addition, 1 deletion)
@@ -21,7 +21,7 @@ \chapter{Bayesian methodology for RKHS-based functional regression models}\label
\end{equation}
Moreover, as we said before, with a slight abuse of notation we will understand the expression \(\dotprod{x}{\alpha}_K\) as \(\Psi_x(\alpha)\), where \(x=X(\omega)\) and \(\Psi_x\) is Loève's isometry. Hence, taking into account that \(\Psi_X(K(t, \cdot)) = X(t)\) by definition (see~\eqref{eq:loeves-isometry}), when \(\alpha\) is as in~\eqref{eq:alpha_h0} we can write \(\dotprod{x}{\alpha}_K \equiv \sum_{j=1}^p \beta_j x(t_j)\).

-In this way we get a simpler, finite-dimensional approximation of the functional RKHS model, which we argue reduces the overall complexity of the model while still capturing most of the relevant information. When it comes to parameter estimation, a direct optimization of some loss function would probably require a tailored algorithm that took into account the continuous nature of the times \(t_j\). Indeed, such an idea is explored in \citet{berrendero2018functional} for the logistic regression case, where the authors propose a ``greedy max-max'' method reminiscent of the EM algorithm that alternates between estimating the coefficients and the time instants through a maximum likelihood approach.
+In this way we get a simpler, finite-dimensional approximation of the functional RKHS model, which we argue reduces the overall complexity of the model while still capturing most of the relevant information. When it comes to parameter estimation, a direct optimization of some loss function would probably require a tailored algorithm that took into account the whole functional trajectories \(x(t)\) to select the appropriate times \(t_j\). Indeed, such an idea is explored in \citet{berrendero2018functional} for the logistic regression case, where the authors propose a ``greedy max-max'' method reminiscent of the EM algorithm that alternates between estimating the coefficients and the time instants through a maximum likelihood approach.

At this point we propose to follow a Bayesian approach to estimate the parameters of the model, which we believe is in line with the idea of simplicity we pursue, and also introduces an additional layer of flexibility into the model. In this way, we can include problem-specific information through the use of prior distributions, and on top of that, this method works almost unaltered for both linear and logistic regression models. The general idea will be to impose a prior distribution on the functional parameter to eventually derive a posterior distribution after incorporating the available sample information.

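As a reading aid for the hunk above: the identity \(\dotprod{x}{\alpha}_K \equiv \sum_{j=1}^p \beta_j x(t_j)\) quoted in the context lines is just linearity of Loève's isometry combined with \(\Psi_X(K(t, \cdot)) = X(t)\). With \(\alpha = \sum_{j=1}^p \beta_j K(t_j, \cdot)\) as in~\eqref{eq:alpha_h0}, the one-line expansion is:

\[
\dotprod{x}{\alpha}_K = \Psi_x\biggl(\sum_{j=1}^p \beta_j K(t_j, \cdot)\biggr) = \sum_{j=1}^p \beta_j \, \Psi_x\bigl(K(t_j, \cdot)\bigr) = \sum_{j=1}^p \beta_j \, x(t_j).
\]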
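The ``greedy max-max'' method mentioned in the changed paragraph can be pictured as a simple alternating scheme. A minimal sketch, assuming trajectories discretized on a common grid and using a least-squares criterion in place of the maximum likelihood approach of \citet{berrendero2018functional}; the function names and the squared-error loss are illustrative choices, not the authors' implementation:

    import numpy as np

    def fit_and_rss(X, y, idx):
        """Least-squares fit of y on (1, x(t_j) for j in idx); returns
        the coefficients and the residual sum of squares."""
        Z = np.column_stack([np.ones(len(y)), X[:, idx]])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return coef, float(np.sum((y - Z @ coef) ** 2))

    def greedy_max_max(X, y, grid, p, n_sweeps=20, seed=0):
        """Alternate between refitting the coefficients and greedily
        moving each time index over the grid, the others held fixed."""
        rng = np.random.default_rng(seed)
        idx = list(rng.choice(X.shape[1], size=p, replace=False))
        for _ in range(n_sweeps):
            for j in range(p):
                rss_by_k = []
                for k in range(X.shape[1]):
                    trial = idx.copy()
                    trial[j] = k
                    rss_by_k.append(fit_and_rss(X, y, trial)[1])
                idx[j] = int(np.argmin(rss_by_k))
        coef, _ = fit_and_rss(X, y, idx)  # final refit at the chosen instants
        return coef, grid[idx]

Each sweep refits the coefficients and then moves every \(t_j\) to its best grid position, mirroring the alternation between coefficients and time instants described in the paragraph.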
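In the same spirit, the Bayesian route of the last paragraph reduces, for the finite-dimensional model, to placing priors on \((\beta, t_1, \dots, t_p, \alpha_0, \sigma^2)\) and sampling the posterior. A toy random-walk Metropolis sketch for the linear case with \(p = 1\), Gaussian noise, a uniform prior on the time instant and vague normal priors on the rest; an illustration under those assumptions, not the sampler used in the thesis:

    import numpy as np

    def metropolis_rkhs_linear(X, y, grid, n_samples=5000, step=0.1, seed=0):
        """Random-walk Metropolis for y = alpha0 + beta * x(t) + eps.
        State theta = (alpha0, beta, t, log_sigma); uniform prior on t
        over the grid range, N(0, 10^2) priors on alpha0 and beta."""
        n = len(y)
        rng = np.random.default_rng(seed)

        def log_post(theta):
            a0, b, t, log_s = theta
            if not (grid[0] <= t <= grid[-1]):
                return -np.inf                    # outside the uniform prior
            k = int(np.argmin(np.abs(grid - t)))  # nearest observed instant
            resid = y - a0 - b * X[:, k]
            s2 = np.exp(2.0 * log_s)
            loglik = -0.5 * n * np.log(2.0 * np.pi * s2) - resid @ resid / (2.0 * s2)
            logprior = -(a0 ** 2 + b ** 2) / (2.0 * 10.0 ** 2)
            return loglik + logprior

        theta = np.array([0.0, 0.0, grid[len(grid) // 2], 0.0])
        lp = log_post(theta)
        samples = []
        for _ in range(n_samples):
            prop = theta + step * rng.normal(size=4)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance
                theta, lp = prop, lp_prop
            samples.append(theta.copy())
        return np.array(samples)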
slides/defense.tex (2 additions, 2 deletions)
@@ -406,7 +406,7 @@ \section{A Bayesian approach to parameter estimation}
\begin{frame}{Label switching}
The model suffers from non-identifiability of the components caused by their interchangeability: \(\pi(Y|X,\theta)=\pi(Y|X, \nu(\theta))\) for any permutation \(\nu\) that rearranges the indices \(j=1,\dots, p\):
\[
-(\beta_1, \beta_2, \tau_1, \tau_2, \alpha_0, \sigma^2) \leftrightsquigarrow (\beta_2, \beta_1, \tau_2, \tau_1, \alpha_0, \sigma^2).
+(\beta_1, \beta_2, t_1, t_2, \alpha_0, \sigma^2) \leftrightsquigarrow (\beta_2, \beta_1, t_2, t_1, \alpha_0, \sigma^2).
\]

Hence, the components may be inadvertently exchanged from one iteration to the next in any MCMC algorithm.
@@ -568,8 +568,8 @@ \section{Conclusions}

\begin{exampleblock}{The road ahead}
\begin{itemize}
-\item Experiment with different prior distributions.
\item Derive theoretical properties of the predictors.
+\item Experiment with different prior distributions.
\item Try different MCMC algorithms (e.g. reversible-jump MCMC).
\end{itemize}
\end{exampleblock}
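One standard remedy for the label switching described in the first hunk is to relabel the MCMC output after the fact, e.g. by enforcing the ordering \(t_1 < \dots < t_p\) within every draw. A small sketch assuming a hypothetical array layout in which each draw stores \(\beta_1, \dots, \beta_p\) followed by \(t_1, \dots, t_p\):

    import numpy as np

    def relabel_by_time(draws, p):
        """Impose t_1 < ... < t_p on each MCMC draw by sorting components.
        draws: (n_draws, 2p) array laid out as (beta_1..beta_p, t_1..t_p);
        shared parameters (alpha_0, sigma^2) would sit in extra columns."""
        betas, times = draws[:, :p], draws[:, p:2 * p]
        order = np.argsort(times, axis=1)
        rows = np.arange(draws.shape[0])[:, None]
        return np.concatenate([betas[rows, order], times[rows, order]], axis=1)

Sorting by the time instants picks one representative from each equivalence class of permutations, so per-component posterior summaries become meaningful again.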
