Neural Network
The module pyro.nn provides implementations of neural network modules that are useful in the context of deep probabilistic programming. None of these modules is part of the core Pyro language.
AutoRegressiveNN

class AutoRegressiveNN(input_dim, hidden_dim, output_dim_multiplier=1, mask_encoding=None, permutation=None)[source]
Bases: torch.nn.modules.module.Module
A simple implementation of a MADE-like autoregressive neural network.
Reference: MADE: Masked Autoencoder for Distribution Estimation [arXiv:1502.03509], Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
Parameters:
  input_dim (int) – the dimensionality of the input
  hidden_dim (int) – the dimensionality of the hidden units
  output_dim_multiplier (int) – the dimensionality of the output is input_dim * output_dim_multiplier; specifically, the shape of the output for a single vector input is [output_dim_multiplier, input_dim]. For any i, j in range(0, output_dim_multiplier), the subset of outputs [i, :] has the same autoregressive structure as [j, :]. Defaults to 1.
  mask_encoding (torch.LongTensor) – a torch tensor that controls the autoregressive structure (see reference). By default this is chosen at random.
  permutation (torch.LongTensor) – an optional permutation applied to the inputs that controls the order of the autoregressive factorization. In particular, for the identity permutation the autoregressive structure is such that the Jacobian is upper triangular. By default this is chosen at random.
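To make the mask-encoding idea concrete, here is a minimal numpy sketch of MADE-style mask construction (an illustration of the technique from the referenced paper, not Pyro's actual implementation): each unit is assigned a "degree", and a connection is kept only if it cannot create a path from input j to an output that must not depend on input j.

```python
import numpy as np

def made_masks(input_dim, hidden_dim, seed=0):
    """Build MADE-style masks (hypothetical helper, not part of pyro.nn)."""
    rng = np.random.default_rng(seed)
    m_in = np.arange(1, input_dim + 1)                    # input degrees 1..D
    m_hid = rng.integers(1, input_dim, size=hidden_dim)   # hidden degrees in [1, D-1]
    # Hidden unit j may see input i only if its degree is >= the input's degree.
    mask_hid = (m_hid[:, None] >= m_in[None, :]).astype(float)   # (hidden, input)
    # Output i may see hidden unit j only if the hidden degree is strictly smaller.
    mask_out = (m_in[:, None] > m_hid[None, :]).astype(float)    # (input, hidden)
    return mask_hid, mask_out

mask_hid, mask_out = made_masks(input_dim=5, hidden_dim=16)
# Input-to-output connectivity: output i depends only on inputs with index < i,
# i.e. the dependence pattern is strictly lower triangular.
conn = mask_out @ mask_hid
assert np.all(np.triu(conn) == 0)
```

With the identity ordering used here, the connectivity matrix is strictly triangular, which is the property that makes the Jacobian of the resulting transform triangular.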

class MaskedLinear(in_features, out_features, mask, bias=True)[source]
Bases: torch.nn.modules.linear.Linear
A linear mapping with a given mask on the weights (arbitrary bias).
Parameters:
  in_features (int) – the number of input features
  out_features (int) – the number of output features
  mask (torch.Tensor) – the mask to apply to the out_features x in_features weight matrix
  bias (bool) – whether or not MaskedLinear should include a bias term. Defaults to True.
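The effect of a masked linear layer can be sketched in a few lines of numpy (an illustration of the idea, not Pyro's code): the forward pass computes out = (mask * W) @ x + b, so a masked-out weight can never contribute to the output.

```python
import numpy as np

rng = np.random.default_rng(0)
in_features, out_features = 3, 2
W = rng.normal(size=(out_features, in_features))  # dense weight matrix
b = np.zeros(out_features)
mask = np.array([[1.0, 1.0, 0.0],   # output 0 ignores input 2
                 [1.0, 0.0, 0.0]])  # output 1 sees only input 0

x = np.array([1.0, 2.0, 3.0])
out = (mask * W) @ x + b

# Changing a masked-out input leaves the output unchanged:
x2 = x.copy()
x2[2] = -7.0
out2 = (mask * W) @ x2 + b
assert np.allclose(out, out2)
```

AutoRegressiveNN composes layers of exactly this form, with the masks chosen so that the overall input-to-output dependence is autoregressive.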