
Further explanation when Potentials have impact on posterior sampling #7744


Open
williambdean opened this issue Mar 31, 2025 · 2 comments

@williambdean
Contributor

williambdean commented Mar 31, 2025

When does the use of Potential have an impact on posterior predictive sampling?

pymc/pymc/model/core.py

Lines 2274 to 2278 in 6ef135b

Warnings
--------
Potential terms only influence probability-based sampling, such as ``pm.sample``, but not forward sampling like
``pm.sample_prior_predictive`` or ``pm.sample_posterior_predictive``. A warning is raised when doing forward
sampling with models containing Potential terms.

It seems like sampling distributions given only InferenceData.posterior samples would not be affected by Potentials. Would it be possible to refrain from warning, depending on which variables are being sampled?

@williambdean williambdean changed the title Further explaination when Potentials have impact on sampling Further explanation when Potentials have impact on sampling Mar 31, 2025
@williambdean williambdean changed the title Further explanation when Potentials have impact on sampling Further explanation when Potentials have impact on posterior sampling Mar 31, 2025
@ricardoV94
Member

ricardoV94 commented Mar 31, 2025

It seems like sampling distributions given only InferenceData.posterior samples would not be affected by Potentials.

What does this mean?

I guess Potentials can only affect what is in their graph, so we could do a fancier check. However, if you have a model with Potentials and know it's safe, you can also suppress the warning on your end. We could define a warning subclass to make it easier to filter.
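
In the meantime, something along these lines should work as a rough sketch (the exact message text and the UserWarning category are assumptions about the current warning, which is exactly why a dedicated subclass would be easier to filter on):

.. code:: python

    import warnings

    import pymc as pm

    # model: any model containing Potential terms
    # idata: the posterior trace returned by pm.sample
    with model:
        with warnings.catch_warnings():
            # Assumes the current warning is a plain UserWarning whose message
            # mentions "Potential"; with a dedicated subclass this could filter
            # on the category instead of the message text.
            warnings.filterwarnings(
                "ignore", message=".*Potential.*", category=UserWarning
            )
            ppc = pm.sample_posterior_predictive(idata)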

@williambdean
Contributor Author

For instance, some examples use the Potential to add a constraint on the parameters of the model. However, this is still part of the likelihood of the model:

pymc/pymc/model/core.py

Lines 2344 to 2353 in af81955

.. code:: python

    import pymc as pm

    with pm.Model() as model:
        # p(max_items) = 1 / max_items
        max_items = pm.Uniform("max_items", lower=1, upper=100)
        pm.Potential("power_prior", pm.math.log(1 / max_items))

        n_items = pm.Uniform("n_items", lower=1, upper=max_items, observed=60)
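
As a rough sketch of what I mean, continuing the example above (the initvals value is just an assumption to keep the default starting point away from the region where the observed value of 60 is impossible): the Potential only enters through max_items, whose posterior draws already reflect it, so redrawing n_items forward uses those draws unchanged, yet the warning described in the docstring is still raised.

.. code:: python

    with model:
        # start max_items above the observed value so the initial logp is finite
        idata = pm.sample(initvals={"max_items": 90.0})

        # n_items is redrawn from Uniform(1, max_items) using the posterior
        # draws of max_items; the Potential's effect is already baked into
        # those draws, but the Potential warning still fires here.
        ppc = pm.sample_posterior_predictive(idata, var_names=["n_items"])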

In comparison, the last example uses the Potential as the primary likelihood of the model:

pymc/pymc/model/core.py

Lines 2359 to 2371 in af81955

.. code:: python

    import pymc as pm

    def normal_logp(value, mu, sigma):
        return -0.5 * ((value - mu) / sigma) ** 2 - pm.math.log(sigma)

    with pm.Model() as model:
        mu = pm.Normal("x")
        sigma = pm.HalfNormal("sigma")

        data = [0.1, 0.5, 0.9]
        llike = pm.Potential("llike", normal_logp(data, mu, sigma))

Is it correct to think that the effect on posterior predictive sampling is different in these two cases, and that the first one would not be affected if only the likelihood variable (n_items) is being sampled?
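
For what it's worth, here is a rough sketch of the kind of check I have in mind (the helper name is made up, and it is only an approximation of the real question):

.. code:: python

    from pytensor.graph.basic import ancestors

    def potential_involves(model, var_names):
        """Hypothetical helper: does any Potential term have one of the
        requested variables in its graph?"""
        requested = {model[name] for name in var_names}
        return any(requested & set(ancestors([pot])) for pot in model.potentials)

    # First example above: the Potential only involves max_items, so a forward
    # draw of n_items given posterior draws of max_items is the same with or
    # without the Potential.
    potential_involves(model, ["n_items"])  # False

    # Second example above: there is no observed variable to resample at all;
    # the Potential *is* the likelihood.
    model.observed_RVs  # []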
