Neural Network¶
The module pyro.nn provides implementations of neural network modules that are useful in the context of deep probabilistic programming. None of these modules is part of the core Pyro language.
AutoRegressiveNN¶

class AutoRegressiveNN(input_dim, hidden_dims, param_dims=[1, 1], permutation=None, skip_connections=False, nonlinearity=ReLU())[source]¶
Bases: torch.nn.modules.module.Module
An implementation of a MADE-like autoregressive neural network.
Example usage:
>>> x = torch.randn(100, 10)
>>> arn = AutoRegressiveNN(10, [50], param_dims=[1])
>>> p = arn(x)  # 1 parameter of size (100, 10)
>>> arn = AutoRegressiveNN(10, [50], param_dims=[1, 1])
>>> m, s = arn(x)  # 2 parameters of size (100, 10)
>>> arn = AutoRegressiveNN(10, [50], param_dims=[1, 5, 3])
>>> a, b, c = arn(x)  # 3 parameters of sizes (100, 1, 10), (100, 5, 10), (100, 3, 10)
Parameters:  input_dim (int) – the dimensionality of the input
 hidden_dims (list[int]) – the dimensionality of the hidden units per layer
 param_dims (list[int]) – shape the output into parameters of dimension (p_n, input_dim) for p_n in param_dims when p_n > 1 and dimension (input_dim) when p_n == 1. The default is [1, 1], i.e. output two parameters of dimension (input_dim), which is useful for inverse autoregressive flow.
 permutation (torch.LongTensor) – an optional permutation that is applied to the inputs and controls the order of the autoregressive factorization. In particular, for the identity permutation the autoregressive structure is such that the Jacobian is upper triangular. By default this is chosen at random.
 skip_connections (bool) – Whether to add skip connections from the input to the output.
 nonlinearity (torch.nn.Module) – the nonlinearity to use in the feedforward network, such as torch.nn.ReLU(). Note that no nonlinearity is applied to the final network output, so the output is an unbounded real number.
Reference:
MADE: Masked Autoencoder for Distribution Estimation [arXiv:1502.03509], Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle
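The shape behavior of param_dims can be sketched in plain NumPy (a hypothetical stand-in for the torch-based network; only the output shapes are the point here, and the squeeze for p_n == 1 follows the parameter description above rather than any particular slicing order inside Pyro):

```python
import numpy as np

# Hypothetical sketch: the final layer emits sum(param_dims) * input_dim
# units, which are split per requested parameter and reshaped.
batch, input_dim = 100, 10
param_dims = [1, 5, 3]
raw = np.random.randn(batch, sum(param_dims) * input_dim)

params, offset = [], 0
for p in param_dims:
    chunk = raw[:, offset:offset + p * input_dim]
    offset += p * input_dim
    # p == 1 is squeezed to (batch, input_dim); otherwise (batch, p, input_dim).
    params.append(chunk.reshape(batch, input_dim) if p == 1
                  else chunk.reshape(batch, p, input_dim))

a, b, c = params  # shapes (100, 10), (100, 5, 10), (100, 3, 10)
```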

class MaskedLinear(in_features, out_features, mask, bias=True)[source]¶
Bases: torch.nn.modules.linear.Linear
A linear mapping with a given mask applied to the weights (the bias is left unmasked).
Parameters:  in_features (int) – the number of input features
 out_features (int) – the number of output features
 mask (torch.Tensor) – the mask to apply to the in_features x out_features weight matrix
 bias (bool) – whether or not MaskedLinear should include a bias term. Defaults to True.
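A minimal NumPy sketch of the idea behind MaskedLinear (illustrative, not the torch implementation): the forward pass is that of an ordinary linear layer, but the weight is multiplied elementwise by a fixed binary mask, so a masked-out input can never influence the corresponding output.

```python
import numpy as np

rng = np.random.default_rng(0)
in_features, out_features = 3, 2
weight = rng.normal(size=(out_features, in_features))
bias = np.zeros(out_features)
# Row k of the mask selects which inputs output k may read;
# input 2 is masked out of both outputs here.
mask = np.array([[1.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0]])

def masked_linear(x):
    # Same forward pass as a plain linear layer, but with the weight
    # multiplied elementwise by the fixed mask.
    return x @ (mask * weight).T + bias

x = np.array([[1.0, 2.0, 3.0]])
y1 = masked_linear(x)
# Changing a masked-out input leaves the output unchanged.
x2 = x.copy()
x2[0, 2] = 99.0
y2 = masked_linear(x2)
```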

create_mask(input_dim, observed_dim, hidden_dims, permutation, output_dim_multiplier)[source]¶
Creates MADE masks for a conditional distribution.
Parameters:  input_dim (int) – the dimensionality of the input variable
 observed_dim (int) – the dimensionality of the variable that is conditioned on (for conditional densities)
 hidden_dims (list[int]) – the dimensionality of the hidden layer(s)
 permutation (torch.LongTensor) – the order of the input variables
 output_dim_multiplier (int) – tiles the output (e.g. when separate mean and scale parameters are desired)
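The degree-based mask construction from the MADE paper can be sketched in NumPy. This is a hypothetical helper, not Pyro's create_mask: it omits observed_dim, assigns hidden degrees deterministically, and uses the convention that each position reads strictly earlier positions in the ordering, which makes the end-to-end connectivity strictly lower triangular for the identity permutation under this sketch's convention.

```python
import numpy as np

def made_masks(input_dim, hidden_dims, permutation, output_dim_multiplier=1):
    """Hypothetical MADE-style mask construction (not Pyro's API)."""
    # Degree of input i = its position in the autoregressive ordering.
    input_degrees = np.argsort(np.asarray(permutation))
    # Spread hidden degrees evenly over 0 .. input_dim - 2 so every
    # hidden unit can feed at least one output.
    degrees = [input_degrees]
    for h in hidden_dims:
        degrees.append(np.arange(h) % max(1, input_dim - 1))
    # Output degrees repeat the input degrees, tiled once per parameter.
    output_degrees = np.tile(input_degrees, output_dim_multiplier)

    masks = []
    for d_prev, d_next in zip(degrees[:-1], degrees[1:]):
        # Hidden mask: a unit of degree k may read units of degree <= k.
        masks.append((d_next[:, None] >= d_prev[None, :]).astype(float))
    # Output mask is strict: the output at position k reads degrees < k only.
    masks.append((output_degrees[:, None] > degrees[-1][None, :]).astype(float))
    return masks

# The product of the masks gives the end-to-end connectivity; with this
# sketch's convention and the identity permutation it is strictly lower
# triangular, so output 0 depends on nothing and output i only on inputs < i.
masks = made_masks(4, [8, 8], permutation=[0, 1, 2, 3])
conn = masks[2] @ masks[1] @ masks[0]
```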