In the FixedNoiseGP class, you should have a covar_module attribute that is an instance of a kernel (something like RBFKernel or ScaleKernel(RBFKernel); see the example at https://docs.gpytorch.ai/en/stable/examples/01_Exact_GPs/Simple_GP_Regression.html). Then you can do fullmodel.covar_module(train_X).to_dense(). The to_dense() call evaluates the kernel and returns a torch.Tensor object. Note that this is the prior kernel matrix evaluated on train_X, not the posterior covariance.
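A minimal sketch of that call, reusing the names from the comment (it assumes fullmodel is an already-constructed FixedNoiseGP and train_X is the n x d tensor of training inputs):

```python
# covar_module(train_X) returns a lazy LinearOperator; to_dense() materializes
# it as a dense n x n torch.Tensor holding the prior kernel matrix K(X, X).
K_prior = fullmodel.covar_module(train_X).to_dense()
print(K_prior.shape)  # torch.Size([n, n])
```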
Assuming the model is in evaluation mode, predict_dist = model(test_x) yields the predictive distribution, which is a multivariate normal. Then the following yields the posterior covariance matrix:
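A sketch of that step, assuming model is the fitted GP and test_x is the tensor of query points (pass the training inputs as test_x to get the n x n posterior covariance on the training data):

```python
import torch

model.eval()  # make sure the model is in evaluation mode

with torch.no_grad():
    predict_dist = model(test_x)
    # covariance_matrix densely evaluates the lazy covariance of the
    # MultivariateNormal and returns a plain torch.Tensor.
    posterior_cov = predict_dist.covariance_matrix
```

GPyTorch may emit a warning when the query points coincide with the stored training inputs, but the call still returns the posterior at those points.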
Let's say I have a GP with n training points. How do I compute the n x n covariance matrix on the training data with the posterior GP? I believe the covariance matrix is encoded as a lazy tensor and never actually evaluated, but I do need access to it for a specific application.
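Putting the two suggestions above together, a self-contained sketch that computes the n x n posterior covariance on the training data via BoTorch's model.posterior (the toy data are hypothetical, and depending on the BoTorch version the underlying MultivariateNormal is exposed as posterior.mvn or posterior.distribution):

```python
import torch
from botorch.models import FixedNoiseGP

# Hypothetical toy data, just for illustration.
n, d = 20, 2
train_X = torch.rand(n, d, dtype=torch.double)
train_Y = torch.sin(train_X).sum(dim=-1, keepdim=True)
train_Yvar = torch.full_like(train_Y, 1e-4)  # fixed observation noise

# In practice you would fit the hyperparameters first (e.g. with fit_gpytorch_mll).
model = FixedNoiseGP(train_X, train_Y, train_Yvar)
model.eval()

with torch.no_grad():
    # Posterior at the training inputs; observation_noise=False gives the
    # latent (noise-free) posterior covariance.
    posterior = model.posterior(train_X, observation_noise=False)
    # The underlying MultivariateNormal is posterior.mvn (newer BoTorch
    # versions also expose it as posterior.distribution).
    K_post = posterior.mvn.covariance_matrix  # dense n x n torch.Tensor

print(K_post.shape)  # torch.Size([20, 20])
```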