MCMC

class MCMC(kernel, num_samples, warmup_steps=0)

    Bases: pyro.infer.abstract_infer.TracePosterior

    Wrapper class for Markov Chain Monte Carlo algorithms. Specific MCMC
    algorithms are TraceKernel instances and need to be supplied as a kernel
    argument to the constructor.

    Parameters:
        * kernel – An instance of the TraceKernel class, which, when given an
          execution trace, returns another sample trace from the target
          (posterior) distribution.
        * num_samples (int) – The number of samples that need to be generated,
          excluding the samples discarded during the warmup phase.
        * warmup_steps (int) – Number of warmup iterations. The samples
          generated during the warmup phase are discarded.
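The warmup/sampling split that this wrapper manages can be sketched in pure Python (a minimal illustration, not the Pyro implementation; kernel_step below stands in for one transition of a hypothetical kernel):

```python
def run_mcmc(kernel_step, init, num_samples, warmup_steps=0):
    """Run a Markov chain: discard warmup draws, keep num_samples draws."""
    state = init
    for _ in range(warmup_steps):     # warmup draws are discarded
        state = kernel_step(state)
    samples = []
    for _ in range(num_samples):      # only these draws are retained
        state = kernel_step(state)
        samples.append(state)
    return samples

# With a toy deterministic "kernel" that just increments the state:
print(run_mcmc(lambda s: s + 1, 0, num_samples=3, warmup_steps=2))  # [3, 4, 5]
```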
HMC

class HMC(model, step_size=None, trajectory_length=None, num_steps=None, adapt_step_size=False, transforms=None)

    Bases: pyro.infer.mcmc.trace_kernel.TraceKernel

    Simple Hamiltonian Monte Carlo kernel, where step_size and num_steps need
    to be explicitly specified by the user.

    References
        [1] MCMC Using Hamiltonian Dynamics, Radford M. Neal

    Parameters:
        * model – Python callable containing Pyro primitives.
        * step_size (float) – Determines the size of a single step taken by
          the Verlet integrator while computing the trajectory using
          Hamiltonian dynamics. If not specified, it will be set to 1.
        * trajectory_length (float) – Length of an MCMC trajectory. If not
          specified, it will be set to step_size x num_steps. In case
          num_steps is not specified, it will be set to 2π.
        * num_steps (int) – The number of discrete steps over which to
          simulate Hamiltonian dynamics. The state at the end of the
          trajectory is returned as the proposal. This value is always equal
          to int(trajectory_length / step_size).
        * adapt_step_size (bool) – A flag to decide whether to adapt step_size
          during the warmup phase using the Dual Averaging scheme.
        * transforms (dict) – Optional dictionary that specifies a transform
          for a sample site with constrained support to unconstrained space.
          The transform should be invertible and implement
          log_abs_det_jacobian. If not specified and the model has sites with
          constrained support, automatic transformations will be applied, as
          specified in torch.distributions.constraint_registry.
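To illustrate what such a transform must provide, here is a hypothetical pure-Python sketch (not the Pyro or torch.distributions API) of the log map, which sends a positive-constrained value to unconstrained space:

```python
import math

class LogTransform:
    """Maps the positive reals (constrained) to all of R (unconstrained)."""

    def __call__(self, x):
        return math.log(x)            # forward: positive reals -> R

    def inv(self, y):
        return math.exp(y)            # inverse: back to the positive reals

    def log_abs_det_jacobian(self, x, y):
        # d/dx log(x) = 1/x, so log|det J| = -log(x)
        return -math.log(x)
```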
Example:

    true_coefs = torch.tensor([1., 2., 3.])
    data = torch.randn(2000, 3)
    dim = 3
    labels = dist.Bernoulli(logits=(true_coefs * data).sum(1)).sample()

    def model(data):
        coefs_mean = torch.zeros(dim)
        coefs = pyro.sample('beta', dist.Normal(coefs_mean, torch.ones(3)))
        y = pyro.sample('y', dist.Bernoulli(logits=(coefs * data).sum(1)), obs=labels)
        return y

    hmc_kernel = HMC(model, step_size=0.0855, num_steps=4)
    mcmc_run = MCMC(hmc_kernel, num_samples=500, warmup_steps=100).run(data)
    posterior = EmpiricalMarginal(mcmc_run, 'beta')
    print(posterior.mean)
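The role of step_size and num_steps in the simulated dynamics can be seen in a minimal sketch of the leapfrog (velocity Verlet) integrator (illustrative only, not the Pyro kernel; the target here is a standard normal, so the potential is U(q) = q**2 / 2 and its gradient is q):

```python
def leapfrog(q, p, step_size, num_steps, grad_U):
    """Simulate Hamiltonian dynamics for num_steps leapfrog steps."""
    for _ in range(num_steps):
        p -= 0.5 * step_size * grad_U(q)   # half step for momentum
        q += step_size * p                 # full step for position
        p -= 0.5 * step_size * grad_U(q)   # half step for momentum
    return q, p

def hamiltonian(q, p):
    return q * q / 2.0 + p * p / 2.0       # potential + kinetic energy

q0, p0 = 1.0, 0.5
q1, p1 = leapfrog(q0, p0, step_size=0.1, num_steps=10, grad_U=lambda q: q)
# The Hamiltonian is approximately conserved along the trajectory, which is
# what makes the end state a good proposal.
```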

diagnostics()

    Relevant diagnostics (optional) to be printed at regular intervals of the
    MCMC run. Returns None by default.

    Returns: String containing the diagnostic summary, e.g. the acceptance
    rate.
    Return type: str

initial_trace()

    Returns an initial trace from the prior to initiate the MCMC run.

    Returns: Trace instance.
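As a toy illustration of the kind of state a kernel might track in order to report a diagnostic such as the acceptance rate (hypothetical, not the Pyro classes):

```python
class ToyKernel:
    """Accumulates accept/reject decisions and reports an acceptance rate."""

    def __init__(self):
        self.accepted = 0
        self.total = 0

    def sample(self, accept):
        # In a real kernel, `accept` would come from a Metropolis-Hastings test.
        self.total += 1
        if accept:
            self.accepted += 1

    def diagnostics(self):
        return "acceptance rate: {:.2f}".format(self.accepted / self.total)
```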
NUTS

class NUTS(model, step_size=None, adapt_step_size=False, transforms=None)

    Bases: pyro.infer.mcmc.hmc.HMC

    No-U-Turn Sampler kernel, which provides an efficient and convenient way
    to run Hamiltonian Monte Carlo. The number of steps taken by the
    integrator is dynamically adjusted on each call to sample to ensure an
    optimal length for the Hamiltonian trajectory [1]. As such, the samples
    generated will typically have lower autocorrelation than those generated
    by the HMC kernel. Optionally, the NUTS kernel also provides the ability
    to adapt step size during the warmup phase.

    Refer to the baseball example to see how to do Bayesian inference in Pyro
    using NUTS.

    References
        [1] The No-U-Turn Sampler: Adaptively Setting Path Lengths in
        Hamiltonian Monte Carlo, Matthew D. Hoffman and Andrew Gelman

    Parameters:
        * model – Python callable containing Pyro primitives.
        * step_size (float) – Determines the size of a single step taken by
          the Verlet integrator while computing the trajectory using
          Hamiltonian dynamics. If not specified, it will be set to 1.
        * adapt_step_size (bool) – A flag to decide whether to adapt step_size
          during the warmup phase using the Dual Averaging scheme.
        * transforms (dict) – Optional dictionary that specifies a transform
          for a sample site with constrained support to unconstrained space.
          The transform should be invertible and implement
          log_abs_det_jacobian. If not specified and the model has sites with
          constrained support, automatic transformations will be applied, as
          specified in torch.distributions.constraint_registry.
Example:

    true_coefs = torch.tensor([1., 2., 3.])
    data = torch.randn(2000, 3)
    dim = 3
    labels = dist.Bernoulli(logits=(true_coefs * data).sum(1)).sample()

    def model(data):
        coefs_mean = torch.zeros(dim)
        coefs = pyro.sample('beta', dist.Normal(coefs_mean, torch.ones(3)))
        y = pyro.sample('y', dist.Bernoulli(logits=(coefs * data).sum(1)), obs=labels)
        return y

    nuts_kernel = NUTS(model, adapt_step_size=True)
    mcmc_run = MCMC(nuts_kernel, num_samples=500, warmup_steps=300).run(data)
    posterior = EmpiricalMarginal(mcmc_run, 'beta')
    print(posterior.mean)
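The stopping rule that gives the sampler its name can be sketched as follows (a toy illustration of the criterion from [1], not the Pyro implementation): trajectory extension stops once the two ends of the trajectory start moving toward each other, i.e. the dot products below turn negative.

```python
def no_u_turn(q_minus, q_plus, p_minus, p_plus):
    """True while the trajectory endpoints are still moving apart."""
    dq = [a - b for a, b in zip(q_plus, q_minus)]
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    return dot(dq, p_minus) >= 0 and dot(dq, p_plus) >= 0
```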