This repository was archived by the owner on Dec 18, 2023. It is now read-only.
GlobalNoUTurnSampler slow with adapt_step_size=True #1593
Open
Issue Description
Hi, inference is really slow (about 1-2 seconds per iteration) with `GlobalNoUTurnSampler` when `adapt_step_size=True`. If I turn it off, inference is much faster. I don't see a similar issue with HMC. Is this a known issue, or is there an option I am missing?
```python
import os
import sys

import beanmachine.ppl as bm
import beanmachine.ppl.inference.monte_carlo_samples
import matplotlib.pyplot as plt
import pandas as pd
import torch
import torch.distributions as dist
from torch import tensor
import arviz as az

data = dict()
data['x'] = torch.tensor([0.0970,2.1020,0.5840,1.0394,3.4375,1.3102,1.2863,4.5382,1.2539,2.9319,4.7777,4.5937,4.0403,0.7749,1.8342,1.5008,3.9557,1.6095,0.5602,4.7997,1.6436,4.5236,1.9404,1.1508,3.1447,3.3551,0.2669,3.2483,2.5293,0.1596,2.8992,2.5016,4.7589,2.5648,1.9871,0.3201,4.4863,0.5675,0.8897,0.5689,0.5621,0.5701,0.7998,0.3166,4.8490,2.7849,2.6858,4.2393,1.4939,1.7230,0.4202,0.3591,2.5912,3.9753,3.5417,0.3198,2.7192,1.9892,1.9525,4.7293,1.7719,3.7889,2.3613,4.6994,2.0155,3.8840,0.8289,2.1164,1.3342,1.4850,0.5512,0.3919,2.4001,3.1949,4.9759,1.4158,0.6088,2.1599,0.3643,0.6800,2.4031,1.7706,3.7669,4.1080,4.9707,1.9139,4.7615,3.3075,3.6872,4.0946,0.3447,2.7292,0.2768,3.7641,3.3987,0.1833,4.9353,1.9730,2.2034,2.8598])
data['y'] = torch.tensor([4.9404,45.0399,14.6792,23.7890,71.7504,29.2036,28.7251,93.7642,28.0774,61.6389,98.5535,94.8738,83.8054,18.4976,39.6836,33.0162,82.1149,35.1902,14.2033,98.9948,35.8724,93.4728,41.8081,26.0160,65.8950,70.1028,8.3385,67.9659,53.5864,6.1922,60.9839,53.0324,98.1783,54.2960,42.7421,9.4019,92.7252,14.3490,20.7931,14.3781,14.2424,14.4019,18.9951,9.3322,99.9797,58.6983,56.7166,87.7854,32.8781,37.4594,11.4034,10.1829,54.8241,82.5061,73.8340,9.3964,57.3845,42.7835,42.0501,97.5864,38.4374,78.7779,50.2264,96.9873,43.3109,80.6796,19.5773,45.3275,29.6839,32.6994,14.0248,10.8371,51.0026,66.8986,102.5188,31.3158,15.1756,46.1974,10.2855,16.6008,51.0629,38.4128,78.3385,85.1597,102.4138,41.2770,98.2303,69.1497,76.7434,84.8919,9.8933,57.5845,8.5367,78.2816,70.9734,6.6665,101.7066,42.4600,47.0682,60.1959])

@bm.random_variable
def w():
    return dist.Normal(10, 1)

@bm.random_variable
def b():
    return dist.Normal(1, 1)

@bm.random_variable
def sigma():
    return dist.Gamma(1 * torch.ones([100]), 2 * torch.ones([100]))

@bm.random_variable
def y(x):
    return dist.Normal(w() * x + b(), sigma())

queries = [w(), b(), sigma()]
observations = {y(data['x']): data['y']}

inferencer = bm.GlobalNoUTurnSampler(
    max_tree_depth=10,
    max_delta_energy=1000,
    initial_step_size=1.0,
    adapt_step_size=True,
    adapt_mass_matrix=True,
    multinomial_sampling=True,
    target_accept_prob=0.8,
)
samples = inferencer.infer(
    queries=queries,
    observations=observations,
    num_samples=1000,
    num_chains=4,
    num_adaptive_samples=100,
)
```
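For context on what `adapt_step_size` does: NUTS samplers typically tune the leapfrog step size during warmup with the dual-averaging scheme of Hoffman & Gelman (2014), nudging the step size until the average acceptance probability matches `target_accept_prob`. Below is a toy, self-contained sketch of that update rule only; it is not Bean Machine's implementation, and `accept_prob` here is a made-up stand-in for the acceptance statistic that a real sampler gets by simulating a full trajectory each iteration (which is where the per-iteration cost comes from).

```python
import math

def accept_prob(eps):
    # Hypothetical monotone relation between step size and acceptance.
    # In real NUTS this comes from a simulated leapfrog trajectory.
    return math.exp(-eps)

def adapt_step_size(target=0.8, iters=2000, eps0=1.0,
                    gamma=0.05, t0=10.0, kappa=0.75):
    """Dual-averaging update (Hoffman & Gelman 2014, Alg. 5) - toy sketch."""
    mu = math.log(10 * eps0)      # shrinkage point
    log_eps = math.log(eps0)
    log_eps_bar = 0.0             # running average of log step size
    h_bar = 0.0                   # running average of (target - acceptance)
    for t in range(1, iters + 1):
        alpha = accept_prob(math.exp(log_eps))
        eta = 1.0 / (t + t0)
        h_bar = (1 - eta) * h_bar + eta * (target - alpha)
        log_eps = mu - math.sqrt(t) / gamma * h_bar
        w = t ** (-kappa)
        log_eps_bar = w * log_eps + (1 - w) * log_eps_bar
    return math.exp(log_eps_bar)

eps = adapt_step_size()
print("adapted step size:", eps)
print("acceptance at adapted step size:", accept_prob(eps))
```

With `target=0.8` and this toy acceptance curve, the adapted step size settles near the point where acceptance is roughly 0.8. The key observation for this issue is that every one of those warmup iterations requires a fresh trajectory simulation, so slow adaptive iterations usually mean the sampler is building long trajectories (up to `2**max_tree_depth` leapfrog steps each) while the step size is still small.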
Expected Behavior
Inference with `adapt_step_size=True` should run at a speed comparable to running with it disabled, as it does for HMC.
System Info
- PyTorch Version: 1.12
- Python version: 3.7
- Ubuntu: 20.04
- Bean Machine: latest (0.1.2)