TransformedDistribution

class TransformedDistribution(base_distribution, bijectors, *args, **kwargs)
Bases: pyro.distributions.distribution.Distribution
Transforms the base distribution by applying a sequence of Bijectors to it. This results in a scorable distribution, i.e. one with a log_pdf() method.
Parameters:  base_distribution (pyro.distributions.distribution.Distribution) – a (continuous) base distribution; samples from this distribution are passed through the sequence of Bijectors to yield a sample from the TransformedDistribution
 bijectors – either a single Bijector or a sequence of Bijectors wrapped in a nn.ModuleList
Returns: the transformed distribution
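Sampling from a transformed distribution amounts to drawing from the base distribution and pushing the draw through each bijector in turn. The following is a minimal stdlib sketch of that idea, not Pyro's implementation: the base distribution is a standard Normal and the "bijectors" are stand-in Python callables rather than Bijector objects.

```python
import math
import random

def sample_base():
    # a standard Normal base distribution
    return random.gauss(0.0, 1.0)

# stand-ins for a sequence of Bijector objects: shift by 1, then exponentiate
bijectors = [lambda x: x + 1.0, math.exp]

def sample_transformed():
    # draw from the base distribution, then apply each bijector in sequence
    x = sample_base()
    for b in bijectors:
        x = b(x)
    return x

sample = sample_transformed()
assert sample > 0.0  # the final exp() makes every sample positive
```

Since the last bijector here is exp(), the resulting distribution is supported on the positive reals (a shifted log-normal in this toy case).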

batch_shape(x=None, *args, **kwargs)
Ref: pyro.distributions.distribution.Distribution.batch_shape()

event_shape(*args, **kwargs)
Ref: pyro.distributions.distribution.Distribution.event_shape()

log_pdf(y, *args, **kwargs)
Scores the sample by inverting the bijector(s) and computing the score using the score of the base distribution and the log det jacobian.
Parameters: y (torch.autograd.Variable) – a value sampled from the transformed distribution
Returns: the score (the log pdf) of y
Return type: torch.autograd.Variable
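The change-of-variables computation behind log_pdf() can be sketched with a scalar example. This is a minimal illustration, not Pyro's code: the base distribution is a standard Normal, the bijector is exp(), and the transformed density is therefore the standard LogNormal.

```python
import math

def base_log_pdf(x):
    # log density of a standard Normal
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

def bijector_inverse(y):
    # inverse of the exp bijector: x = log(y)
    return math.log(y)

def log_abs_det_jacobian(x):
    # log |d exp(x) / dx| = log(exp(x)) = x
    return x

def transformed_log_pdf(y):
    # invert the bijector, then apply the change of variables:
    # log p_Y(y) = log p_X(x) - log |det J(x)|, where x = inverse(y)
    x = bijector_inverse(y)
    return base_log_pdf(x) - log_abs_det_jacobian(x)

# sanity check against the standard LogNormal log density at y = 2.0
y = 2.0
lognormal_log_pdf = -math.log(y) - 0.5 * math.log(y) ** 2 - 0.5 * math.log(2.0 * math.pi)
assert abs(transformed_log_pdf(y) - lognormal_log_pdf) < 1e-12
```

With a sequence of bijectors, the same recipe applies bijector by bijector, accumulating one log-det-Jacobian term per bijector.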
Bijector
InverseAutoregressiveFlow

class InverseAutoregressiveFlow(input_dim, hidden_dim, sigmoid_bias=2.0, permutation=None)
Bases: pyro.distributions.transformed_distribution.Bijector
An implementation of an Inverse Autoregressive Flow. Together with TransformedDistribution this provides a way to create richer variational approximations.
Example usage:
>>> base_dist = Normal(...)
>>> iaf = InverseAutoregressiveFlow(...)
>>> pyro.module("my_iaf", iaf)
>>> iaf_dist = TransformedDistribution(base_dist, iaf)
Note that this implementation is only meant to be used in settings where the inverse of the Bijector is never explicitly computed; instead, the result is cached from the forward call. In the context of variational inference, this means that the InverseAutoregressiveFlow should only be used in the guide, i.e. in the variational distribution. In other contexts the inverse could in principle be computed, but this would be a potentially costly computation that scales with the dimension of the input; in any case, support for it is not included in this implementation.
Parameters:  input_dim (int) – dimension of input
 hidden_dim (int) – hidden dimension (number of hidden units)
 sigmoid_bias (float) – bias on the hidden units fed into the sigmoid; defaults to 2.0
 permutation (bool) – whether the order of the inputs should be permuted (by default the conditional dependence structure of the autoregression follows the sequential order)
References:
1. Improving Variational Inference with Inverse Autoregressive Flow [arXiv:1606.04934] Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, Max Welling
2. Variational Inference with Normalizing Flows [arXiv:1505.05770] Danilo Jimenez Rezende, Shakir Mohamed
3. MADE: Masked Autoencoder for Distribution Estimation [arXiv:1502.03509] Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle

get_arn()
Return the AutoRegressiveNN associated with the InverseAutoregressiveFlow.
Return type: pyro.nn.AutoRegressiveNN

inverse(y, *args, **kwargs)
Inverts y => x. As noted above, this implementation is incapable of inverting arbitrary values y; rather, it assumes y is the result of a previously computed application of the bijector to some x, which was cached on the forward call.
Parameters: y (torch.autograd.Variable) – the output of the bijection
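Why the forward pass is parallel while the inverse is sequential can be seen in a toy sketch. This is not Pyro's code: the "autoregressive network" below is a hand-written 3-dimensional function whose outputs for index i use only x[:i], standing in for a masked AutoRegressiveNN, and the transform is the IAF-style gated update y_i = sigmoid(s_i)·x_i + (1 − sigmoid(s_i))·m_i.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def arn(x):
    # toy autoregressive net: (m_i, s_i) depend only on x[:i]
    m = [0.5, 0.1 * x[0], 0.2 * x[0] - 0.3 * x[1]]
    s = [1.0, 0.4 * x[0], 0.1 * x[1]]
    return m, s

def forward(x):
    # forward pass: one parallel computation over all dimensions
    m, s = arn(x)
    return [sigmoid(s[i]) * x[i] + (1.0 - sigmoid(s[i])) * m[i] for i in range(3)]

def inverse(y):
    # inverse pass: x_i is only recoverable once x_1..x_{i-1} are known,
    # so the cost grows with the input dimension -- hence Pyro caches x
    # on the forward call instead of ever inverting explicitly
    x = [0.0, 0.0, 0.0]
    for i in range(3):
        m, s = arn(x)  # for index i, only entries < i of x are consulted
        g = sigmoid(s[i])
        x[i] = (y[i] - (1.0 - g) * m[i]) / g
    return x

x = [0.3, -1.2, 0.7]
assert all(abs(a - b) < 1e-9 for a, b in zip(x, inverse(forward(x))))
```

The round trip recovers x exactly, but only by filling in one coordinate at a time; caching the forward input avoids this loop entirely in the guide, where every y scored was itself produced by a forward call.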