Bayesian GPLVM using a specific latent input prior #1100

@Soham6298

Description

I have a question about implementing a specific setup in the Bayesian GPLVM framework.

Setup:

I want to estimate latent X such that

Y = f(X) + error

where Y is NxD (multi-output) and X is latent. I have a noisy prior observation of X, X* ~ N(X, s). Assuming s = 0.1, I would like to use the GPLVM framework to recover the posterior latent X. Since the setup is part of a simulation study, I have the true X and can compute the RMSE of the recovery.
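For reference, here is a minimal sketch of the kind of data this setup produces (the sinusoidal f, the sizes, and the variable names are placeholders of my own, not part of the actual study):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, s = 100, 5, 0.1  # sample size, output dimension, prior noise sd

X_true = rng.normal(size=(N, 1))                    # true 1-d latent inputs
X_star = X_true + rng.normal(scale=s, size=(N, 1))  # prior observation, X* ~ N(X, s)

# a smooth multi-output f: one random sinusoid per output dimension
W = rng.normal(size=(1, D))
Y = np.sin(X_true @ W) + 0.05 * rng.normal(size=(N, D))
```

X_star would then be passed to the fitting routine below as the prior information on the latent inputs.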

To that end, I am using:

import GPy
import scipy as scp
import scipy.stats

def gpyfit(output, input_prior):
    Q = 1  # input_dim
    m_gplvm = GPy.models.bayesian_gplvm_minibatch.BayesianGPLVMMiniBatch(
        output, Q, num_inducing=12, kernel=GPy.kern.RBF(Q))
    # set_prior is a method; my original line `m_gplvm.X.set_prior = ...`
    # assigned over it and set no prior at all. Start the variational
    # posterior mean of X at the noisy prior observations instead:
    m_gplvm.X.mean[:] = input_prior
    # random initial values for the hyperparameters
    m_gplvm.kern.lengthscale = scp.stats.halfnorm.rvs()
    m_gplvm.kern.variance = scp.stats.halfnorm.rvs()
    m_gplvm.likelihood.variance = scp.stats.halfnorm.rvs()
    m_gplvm.optimize(messages=1, max_iters=5e4)
    return m_gplvm

As is apparent, I am also trying to set custom priors on the covariance function hyperparameters and the error variance, though the half-normal draws above really only randomise the initial values rather than attach priors to the model.

When I use this setup, the RMSE is absurdly high, which makes me think I am making a mistake somewhere. It would be helpful to know whether someone has already tried a similar problem scenario, or whether I am making an obvious mistake.
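One check worth doing before trusting the RMSE: as far as I understand, the latent X in a GPLVM is only identified up to sign, scale and shift (a stationary kernel such as the RBF absorbs these into its lengthscale), so comparing the raw posterior mean directly against the true X can give an absurdly high RMSE even when the latent structure is recovered well. A sketch of RMSE after a least-squares affine alignment (aligned_rmse is my own helper, not a GPy function):

```python
import numpy as np

def aligned_rmse(X_true, X_est):
    """RMSE after the best affine map X_est -> X_true (removes sign/scale/shift)."""
    A = np.column_stack([X_est, np.ones(len(X_est))])  # design matrix [X_est, 1]
    coef, *_ = np.linalg.lstsq(A, X_true, rcond=None)  # least-squares alignment
    return float(np.sqrt(np.mean((X_true - A @ coef) ** 2)))
```

For example, an estimate that is a flipped, rescaled and shifted copy of the truth (X_est = -2 * X_true + 3) gives an aligned RMSE of essentially zero, while its raw RMSE would be large.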

Thanks!

Metadata

Labels: need more info