The module pyro.nn provides implementations of neural network modules that are useful in the context of deep probabilistic programming. None of these modules is part of the core language; they are utilities for building models.
AutoRegressiveNN(input_dim, hidden_dim, output_dim_multiplier=1, mask_encoding=None, permutation=None)
A simple implementation of a MADE-like auto-regressive neural network.
Reference: Mathieu Germain, Karol Gregor, Iain Murray, Hugo Larochelle. MADE: Masked Autoencoder for Distribution Estimation. [arXiv:1502.03509]
- input_dim (int) – the dimensionality of the input
- hidden_dim (int) – the dimensionality of the hidden units
- output_dim_multiplier (int) – the dimensionality of the output is given by input_dim x output_dim_multiplier. Specifically, the shape of the output for a single vector input is [output_dim_multiplier, input_dim]. For any i, j in range(0, output_dim_multiplier), the subset of outputs [i, :] has the same autoregressive structure as [j, :]. Defaults to 1.
- mask_encoding (torch.LongTensor) – a torch Tensor that controls the autoregressive structure (the quantity m(k) in the MADE reference). By default this is chosen at random.
- permutation (torch.LongTensor) – an optional permutation that is applied to the inputs and controls the order of the autoregressive factorization. In particular, for the identity permutation the autoregressive structure is such that the Jacobian is upper triangular. By default this is chosen at random.
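To illustrate what mask_encoding controls, here is a minimal NumPy sketch of the MADE-style mask construction from the reference above. All names here are ours, not Pyro's, and Pyro's degree-assignment and triangle orientation may differ from this sketch; the point is only that masking the weights of a two-layer network by hidden-unit degrees m(k) yields a strictly triangular input-to-output connectivity, i.e. an autoregressive structure.

```python
import numpy as np

rng = np.random.default_rng(0)

input_dim, hidden_dim = 4, 10

# Assign each hidden unit a degree m(k) in {1, ..., input_dim - 1}.
# This plays the role of the `mask_encoding` argument above.
m = rng.integers(1, input_dim, size=hidden_dim)

# Input-to-hidden mask: hidden unit k may see input d iff m(k) >= d
# (inputs carry degrees 1..input_dim).
degrees_in = np.arange(1, input_dim + 1)
mask_hidden = (m[:, None] >= degrees_in[None, :]).astype(float)

# Hidden-to-output mask: output i may see hidden unit k iff i > m(k),
# so output i can depend only on inputs strictly before position i.
degrees_out = np.arange(1, input_dim + 1)
mask_out = (degrees_out[:, None] > m[None, :]).astype(float)

# A tiny masked two-layer network with random weights.
W1 = rng.normal(size=(hidden_dim, input_dim)) * mask_hidden
W2 = rng.normal(size=(input_dim, hidden_dim)) * mask_out

def net(x):
    return W2 @ np.tanh(W1 @ x)

# The end-to-end connectivity mask_out @ mask_hidden is strictly
# triangular: output i is connected only to inputs j < i, which is
# exactly the autoregressive structure the MADE masks enforce.
connectivity = mask_out @ mask_hidden
assert np.allclose(np.triu(connectivity), 0.0)
```

With this degree convention the Jacobian comes out strictly lower triangular; choosing the opposite degree ordering (or a reversing permutation) flips it to upper triangular, which is the convention stated for the identity permutation above.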
The forward method: applies the autoregressive network to its input and returns the output, whose shape is described under output_dim_multiplier above.
Get the mask encoding associated with the neural network: the quantity m(k) in the MADE paper.
Get the permutation applied to the inputs (by default this is chosen at random).
MaskedLinear(in_features, out_features, mask, bias=True)
A linear mapping whose weights are multiplied elementwise by a fixed mask; the bias is unconstrained.
The forward method: computes the linear map with the masked weights (mask * weight) applied to the input, plus the bias, and returns the result.
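A masked linear layer can be sketched in a few lines of NumPy; this is an illustration of the computation described above, not Pyro's implementation. The mask is fixed at construction time and multiplies the weights elementwise on every call, so masked-out weights never contribute to the output.

```python
import numpy as np

rng = np.random.default_rng(1)

in_features, out_features = 3, 2

# Fixed binary mask: output 0 may see inputs 0 and 1;
# output 1 may see inputs 1 and 2.
mask = np.array([[1.0, 1.0, 0.0],
                 [0.0, 1.0, 1.0]])
weight = rng.normal(size=(out_features, in_features))
bias = rng.normal(size=out_features)

def masked_linear(x):
    # Same computation as an ordinary linear layer, except that the
    # masked-out weights are zeroed before the matrix-vector product.
    return (mask * weight) @ x + bias

x = np.array([1.0, 2.0, 3.0])
y = masked_linear(x)

# Output 0 cannot depend on input 2: perturbing x[2] leaves y[0] unchanged.
x2 = x.copy()
x2[2] += 10.0
assert masked_linear(x2)[0] == y[0]
```

Stacking such layers with compatible masks is how the MADE construction above builds a whole network whose end-to-end Jacobian is triangular.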