
FixedLagSmoother

Overview

The FixedLagSmoother class is the base class for BatchFixedLagSmoother and IncrementalFixedLagSmoother.

It provides an API for fixed-lag smoothing in nonlinear factor graphs. It maintains a sliding window of the most recent variables and marginalizes out older variables. This is particularly useful in real-time applications where memory and computational efficiency are critical.

Mathematical Formulation

In fixed-lag smoothing the objective is to estimate the state $\mathbf{x}_t$ given all measurements up to time $t$, while retaining only a fixed window of recent states. The optimization problem can be expressed as:

$$\min_{\mathbf{x}_{t-L:t}} \sum_{i=1}^{N} \| \mathbf{h}_i(\mathbf{x}_{t-L:t}) - \mathbf{z}_i \|^2$$

where $L$ is the fixed lag, $\mathbf{h}_i$ are the measurement functions, and $\mathbf{z}_i$ are the measurements. In practice, each $\mathbf{h}_i$ depends only on a subset $\mathbf{X}_i$ of the state variables, and the optimization is performed over a set of $N$ factors $\phi_i$ instead:

$$\min_{\mathbf{x}_{t-L:t}} \sum_{i=1}^{N} \| \phi_i(\mathbf{X}_i; \mathbf{z}_i) \|^2$$
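To make the window objective concrete, here is a self-contained toy in plain Python: a 1D chain with a Gaussian prior, odometry factors between consecutive states, and an absolute measurement on each state, all with scalar information weights. Because the graph is a chain, the oldest state can be marginalized exactly when the window slides, so the fixed-lag estimate of the newest state matches the full batch solution. All function names and weights here are illustrative assumptions, not the GTSAM implementation.

```python
# Toy 1D fixed-lag smoother on a chain of factors:
#   prior on the oldest state, odometry x_{i+1} - x_i ~ u_i,
#   and an absolute measurement z_i on each x_i.
# Illustrative sketch only -- not the GTSAM implementation.

def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal linear system."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0] if n > 1 else 0.0
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def window_map(prior_mean, prior_info, odo, z, w=1.0, v=1.0):
    """MAP estimate over the window: a prior on the oldest state,
    odometry factors odo[i] between x_i and x_{i+1}, and an absolute
    measurement z[i] on every x_i (information weights w and v)."""
    n = len(z)
    diag = [v] * n                    # normal equations are tridiagonal
    rhs = [v * zi for zi in z]
    diag[0] += prior_info
    rhs[0] += prior_info * prior_mean
    for i, u in enumerate(odo):       # factor between x_i and x_{i+1}
        diag[i] += w
        diag[i + 1] += w
        rhs[i] -= w * u
        rhs[i + 1] += w * u
    off = [-w] * (n - 1)
    return solve_tridiag(off, diag, off, rhs)

def marginalize_oldest(prior_mean, prior_info, u, z0, w=1.0, v=1.0):
    """Exactly marginalize the oldest state: fold its absolute
    measurement into its prior, then push the result through the
    odometry factor to obtain a new Gaussian prior on the next state."""
    info = prior_info + v
    mean = (prior_info * prior_mean + v * z0) / info
    return mean + u, 1.0 / (1.0 / info + 1.0 / w)

# Fixed-lag run (L = 3) versus the full batch solve on the same data.
odo = [1.0, 0.9, 1.1, 1.0, 0.8, 1.2, 1.0]
z = [0.1, 1.0, 2.1, 2.9, 4.0, 4.9, 6.1, 7.0]
L = 3
p, lam, start = 0.0, 1e-3, 0          # weak initial prior on x_0
for t in range(1, len(z)):
    if t - start > L:                 # window slides: drop x_start
        p, lam = marginalize_oldest(p, lam, odo[start], z[start])
        start += 1
fixed_lag = window_map(p, lam, odo[start:], z[start:])
batch = window_map(0.0, 1e-3, odo, z)
print(abs(fixed_lag[-1] - batch[-1]))  # agrees up to floating-point error
```

Because the chain marginalization is exact, the sliding window loses nothing about the newest state; in general graphs, marginalization introduces dense "prior" factors over the remaining window variables instead.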

The API below allows the user to add new factors at every iteration; factors are automatically pruned once they no longer involve any variable inside the lag window.
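That add-then-prune lifecycle can be sketched with a minimal stand-in class. The class and method names below are hypothetical, chosen only to mirror the shape of the interface (new factors plus per-key timestamps at each update); a real smoother marginalizes the dropped variables rather than merely deleting their factors.

```python
# Hypothetical sketch of the add-then-prune bookkeeping described above.
# Not the GTSAM API: a real smoother marginalizes variables that leave
# the lag window instead of simply discarding their factors.

class ToyFixedLagSmoother:
    def __init__(self, lag):
        self.lag = lag
        self.timestamps = {}   # variable key -> timestamp of last update
        self.factors = []      # list of (involved keys, payload) pairs

    def update(self, new_factors, key_timestamps):
        """Add new factors, then prune factors that no longer
        involve any variable inside the lag window."""
        self.timestamps.update(key_timestamps)
        self.factors.extend(new_factors)
        horizon = max(self.timestamps.values()) - self.lag
        live = {k for k, t in self.timestamps.items() if t >= horizon}
        self.factors = [(keys, data) for keys, data in self.factors
                        if any(k in live for k in keys)]
        self.timestamps = {k: t for k, t in self.timestamps.items()
                           if k in live}

# Add one odometry factor per step; older factors fall away automatically.
sm = ToyFixedLagSmoother(lag=2.0)
sm.update([(("x0",), "prior")], {"x0": 0.0})
for t in range(1, 5):
    factor = ((f"x{t-1}", f"x{t}"), "odometry")
    sm.update([factor], {f"x{t}": float(t)})
print(sorted(sm.timestamps))   # only keys within the last 2.0 time units
```

Note that a factor survives as long as *any* of its variables is still in the window, matching the pruning rule above: the odometry factor between `x1` and `x2` is kept even after `x1` falls outside the lag.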