Code in pyro.contrib.forecast is under development. This code makes no guarantee about maintaining backwards compatibility.

pyro.contrib.forecast is a lightweight framework for experimenting with a restricted class of time series models and inference algorithms using familiar Pyro modeling syntax and PyTorch neural networks.

Models include hierarchical multivariate heavy-tailed time series of ~1000 time steps and ~1000 separate series. Inference combines subsample-compatible variational inference with Gaussian variable elimination based on the GaussianHMM class. Inference using Hamiltonian Monte Carlo sampling is also supported with HMCForecaster. Forecasts are in the form of joint posterior samples at multiple future time steps.

Hierarchical models use the familiar plate syntax for general hierarchical modeling in Pyro. Plates can be subsampled, enabling training of joint models over thousands of time series. Multivariate observations are handled via multivariate likelihoods like MultivariateNormal, GaussianHMM, or LinearHMM. Heavy tailed models are possible by using StudentT or Stable likelihoods, possibly together with LinearHMM and reparameterizers including StudentTReparam, StableReparam, and LinearHMMReparam.

Seasonality can be handled using the helpers periodic_repeat(), periodic_cumsum(), and periodic_features().
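To illustrate the idea behind these helpers, here is a plain-Python sketch of the tiling performed by periodic_repeat() (the real helper operates on torch tensors along a chosen dimension; periodic_repeat_sketch is a hypothetical name used only for this example):

```python
def periodic_repeat_sketch(pattern, size):
    """Tile a seasonal pattern out to length ``size``.

    A plain-Python sketch of pyro.ops.tensor_utils.periodic_repeat,
    which performs the same tiling on a tensor along a given dim.
    """
    period = len(pattern)
    return [pattern[t % period] for t in range(size)]

# E.g. a period-3 pattern tiled over 7 time steps:
# periodic_repeat_sketch([0, 1, 2], 7) -> [0, 1, 2, 0, 1, 2, 0]
```

periodic_cumsum() and periodic_features() follow the same convention of treating the trailing dimensions as (time, features).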

See pyro.contrib.timeseries for ways to construct temporal Gaussian processes useful as likelihoods.

For example usage see the Pyro forecasting tutorials.

Forecaster Interface

class ForecastingModel[source]

Bases: pyro.nn.module.PyroModule

Abstract base class for forecasting models.

Derived classes must implement the model() method.

model(zero_data, covariates)[source]

Generative model definition.

Implementations must call the predict() method exactly once.

Implementations must draw all time-dependent noise inside the time_plate(). The prediction passed to predict() must be a deterministic function of noise tensors that are independent over time. This requirement is slightly more general than state space models.

  • zero_data (Tensor) – A zero tensor like the input data, but extended to the duration of the time_plate(). This allows models to depend on the shape and device of data but not its value.
  • covariates (Tensor) – A tensor of covariates with time dimension -2.

Return value is ignored.

time_plate

Returns:A plate named “time” with size covariates.size(-2) and dim=-1. This is available only during model execution.
Return type:plate
predict(noise_dist, prediction)[source]

Prediction function, to be called by model() implementations.

This should be called outside of the time_plate().

This is similar to an observe statement in Pyro:

pyro.sample("residual", noise_dist,
            obs=(data - prediction))

but with (1) additional reshaping logic to allow time-dependent noise_dist (most often a GaussianHMM or variant); and (2) additional logic to allow only a partial observation and forecast the remaining data.

  • noise_dist (Distribution) – A noise distribution with .event_dim in {0,1,2}. noise_dist should typically be centered in some sense: zero mean, zero median, or zero mode.
  • prediction (Tensor) – A prediction for the data. This should have the same shape as data, but be broadcastable to the full duration of the covariates.
forward(data, covariates)[source]
class Forecaster(model, data, covariates, *, guide=None, init_scale=0.1, create_plates=None, learning_rate=0.01, betas=(0.9, 0.99), learning_rate_decay=0.1, dct_gradients=False, num_steps=1001, num_particles=1, vectorize_particles=True, warm_start=False, log_every=100, clip_norm=10.0)[source]

Bases: torch.nn.modules.module.Module

Forecaster for a ForecastingModel.

On initialization, this fits a distribution using variational inference over latent variables and exact inference over the noise distribution, typically a GaussianHMM or variant.

After construction this can be called to generate sample forecasts.


Variables:losses (list) – A list of losses recorded during training, typically used to debug convergence. Defined by loss = -elbo / data.numel().

  • model (ForecastingModel) – A forecasting model subclass instance.
  • data (Tensor) – A tensor dataset with time dimension -2.
  • covariates (Tensor) – A tensor of covariates with time dimension -2. For models not using covariates, pass a shaped empty tensor torch.empty(duration, 0).
  • guide (PyroModule) – Optional guide instance. Defaults to an AutoNormal.
  • init_scale (float) – Initial uncertainty scale of the AutoNormal guide.
  • create_plates (callable) – An optional function to create plates for subsampling with the AutoNormal guide.
  • learning_rate (float) – Learning rate used by DCTAdam.
  • betas (tuple) – Coefficients for running averages used by DCTAdam.
  • learning_rate_decay (float) – Learning rate decay used by DCTAdam. Note this is the total decay over all num_steps, not the per-step decay factor.
  • dct_gradients (bool) – Whether to discrete cosine transform gradients in DCTAdam. Defaults to False.
  • num_steps (int) – Number of SVI steps.
  • num_particles (int) – Number of particles used to compute the ELBO.
  • vectorize_particles (bool) – If num_particles > 1, determines whether to vectorize computation of the ELBO. Defaults to True. Set to False for models with dynamic control flow.
  • warm_start (bool) – Whether to warm start parameters from a smaller time window. Note this may introduce statistical leakage; usage is recommended for model exploration purposes only and should be disabled when publishing metrics.
  • log_every (int) – Number of training steps between logging messages.
  • clip_norm (float) – Norm used for gradient clipping during optimization. Defaults to 10.0.
forward(data, covariates, num_samples)[source]
class HMCForecaster(model, data, covariates=None, *, num_warmup=1000, num_samples=1000, num_chains=1, dense_mass=False, jit_compile=False, max_tree_depth=10)[source]

Bases: torch.nn.modules.module.Module

Forecaster for a ForecastingModel using Hamiltonian Monte Carlo.

On initialization, this will run the NUTS sampler to obtain posterior samples of the model.

After construction, this can be called to generate sample forecasts.

  • model (ForecastingModel) – A forecasting model subclass instance.
  • data (Tensor) – A tensor dataset with time dimension -2.
  • covariates (Tensor) – A tensor of covariates with time dimension -2. For models not using covariates, pass a shaped empty tensor torch.empty(duration, 0).
  • num_warmup (int) – number of MCMC warmup steps.
  • num_samples (int) – number of MCMC samples.
  • num_chains (int) – number of parallel MCMC chains.
  • dense_mass (bool) – a flag to control whether the mass matrix is dense or diagonal. Defaults to False.
  • jit_compile (bool) – whether to use the PyTorch JIT to trace the log density computation, and use this optimized executable trace in the integrator. Defaults to False.
  • max_tree_depth (int) – Max depth of the binary tree created during the doubling scheme of the NUTS sampler. Defaults to 10.
forward(data, covariates, num_samples)[source]


eval_mae(pred, truth)[source]

Evaluate mean absolute error, using sample median as point estimate.

Return type:float


eval_rmse(pred, truth)[source]

Evaluate root mean squared error, using sample mean as point estimate.

Return type:float


eval_crps(pred, truth)[source]

Evaluate continuous ranked probability score, averaged over all data elements.


[1] Tilmann Gneiting, Adrian E. Raftery (2007)
Strictly Proper Scoring Rules, Prediction, and Estimation
Return type:float
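For reference, the empirical CRPS for a single scalar observation can be sketched in plain Python as follows (pyro's eval_crps is a vectorized equivalent averaged over all data elements; crps_empirical_sketch is a hypothetical name used only for this example):

```python
def crps_empirical_sketch(samples, truth):
    """Naive O(n^2) empirical CRPS for one scalar observation.

    CRPS(F, y) ~= mean_i |x_i - y| - 1/(2 n^2) sum_{i,j} |x_i - x_j|,
    following Gneiting & Raftery (2007).
    """
    n = len(samples)
    term1 = sum(abs(x - truth) for x in samples) / n
    term2 = sum(abs(x - y) for x in samples for y in samples) / (2 * n * n)
    return term1 - term2
```

A perfect deterministic forecast (all samples equal to the truth) scores 0, and the score penalizes both bias and miscalibrated spread.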


backtest(data, covariates, model_fn, *, forecaster_fn=<class 'pyro.contrib.forecast.forecaster.Forecaster'>, metrics=None, transform=None, train_window=None, min_train_window=1, test_window=None, min_test_window=1, stride=1, seed=1234567890, num_samples=100, forecaster_options={})[source]

Backtest a forecasting model on a moving window of (train,test) data.

  • data (Tensor) – A tensor dataset with time dimension -2.
  • covariates (Tensor) – A tensor of covariates with time dimension -2. For models not using covariates, pass a shaped empty tensor torch.empty(duration, 0).
  • model_fn (callable) – Function that returns a ForecastingModel object.
  • forecaster_fn (callable) – Function that returns a forecaster object (for example, Forecaster or HMCForecaster) given arguments model, training data, training covariates and keyword arguments defined in forecaster_options.
  • metrics (dict) – A dictionary mapping metric name to metric function. The metric function should input a forecast pred and ground truth and can output anything, often a number. Example metrics include: eval_mae(), eval_rmse(), and eval_crps().
  • transform (callable) – An optional transform to apply before computing metrics. If provided this will be applied as pred, truth = transform(pred, truth).
  • train_window (int) – Size of the training window. By default, trains from the beginning of data. This must be None if forecaster_fn is Forecaster and forecaster_options["warm_start"] is true.
  • min_train_window (int) – If train_window is None, this specifies the min training window size. Defaults to 1.
  • test_window (int) – Size of the test window. By default, forecasts to the end of data.
  • min_test_window (int) – If test_window is None, this specifies the min test window size. Defaults to 1.
  • stride (int) – Optional stride for test/train split. Defaults to 1.
  • seed (int) – Random number seed.
  • num_samples (int) – Number of samples for forecast.
  • forecaster_options (dict or callable) – Options dict to pass to forecaster, or callable inputting time window t0,t1,t2 and returning such a dict. See Forecaster for details.

A list of dictionaries of evaluation data. Caller is responsible for aggregating the per-window metrics. Dictionary keys include: train begin time “t0”, train/test split time “t1”, test end time “t2”, “seed”, “num_samples” and one key for each metric.

Return type:list
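The interplay of the window arguments can be illustrated with a plain-Python sketch of the train/test split arithmetic (inferred from the parameter descriptions above as an assumption, not taken from pyro's code; backtest_windows_sketch is a hypothetical name):

```python
def backtest_windows_sketch(duration, *, train_window=None, min_train_window=1,
                            test_window=None, min_test_window=1, stride=1):
    """Enumerate (t0, t1, t2) triples: train on [t0, t1), test on [t1, t2).

    Assumed behavior: the split point t1 advances by ``stride``; a None
    train_window trains from the beginning of data, and a None
    test_window forecasts to the end of data.
    """
    windows = []
    for t1 in range(min_train_window, duration - min_test_window + 1, stride):
        t0 = 0 if train_window is None else max(0, t1 - train_window)
        t2 = duration if test_window is None else min(duration, t1 + test_window)
        windows.append((t0, t1, t2))
    return windows
```

For example, with duration 5 and all defaults this produces the growing-train, shrink-to-end-test windows (0,1,5), (0,2,5), (0,3,5), (0,4,5); fixing train_window and test_window instead yields rolling windows of constant size.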