
ExtendedPriorFactor

Purpose and Audience

ExtendedPriorFactor is the generalized building block that underlies PriorFactor and other soft anchoring mechanisms. It lets you express an (optionally shifted) Gaussian (or robust) likelihood in the tangent space of an arbitrary manifold value type. This notebook targets advanced GTSAM users designing custom factors, experimenting with non-zero tangent-space means, or working with non-traditional manifold types.


Mathematical Definition

Given a value type T (Lie group or general manifold) with a local-coordinates operator Local(x, y) (mapping y into the tangent space at x), an origin value o, an optional tangent-space mean vector m (default 0), and a noise model with (robust) loss rho, the factor defines:

  • Raw error: e(x) = -(Local(x, o) + m), or simply -Local(x, o) when no mean is provided.

  • Loss: loss(x) = rho( || e(x) ||^2_Σ ), where Σ is the covariance inside the noise model and rho is the (possibly robust) loss; for a plain Gaussian model, rho(s) = s/2, giving the usual (1/2) || e(x) ||^2_Σ.

  • Likelihood: L(x) = exp( - loss(x) ).

For Lie groups where Local(x,o) = Log( x^{-1} o ), this is (up to an additive constant in the exponent) the extended concentrated Gaussian:

$$e(x) = -\left(\operatorname{Log}(x^{-1} o) + m\right)$$

Choosing a non-zero mean m effectively shifts the maximum-likelihood point from o to o.retract(m).
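
These definitions are easy to check numerically. A minimal sketch, assuming the Pose2 bindings used later in this notebook (Local(x, o) is exposed as x.localCoordinates(o)):

import numpy as np
from gtsam import Pose2

o = Pose2(1.0, 2.0, 0.3)              # origin
m = np.array([0.1, 0.2, 0.05])        # tangent-space mean
x = o.retract(m)                      # shifted mode

e = -(x.localCoordinates(o) + m)      # raw error e(x)
sigma = 0.5
loss = 0.5 * float(e @ e) / sigma**2  # rho(s) = s/2 for a plain Gaussian model
print(e, np.exp(-loss))               # residual ~ 0, likelihood ~ 1 at the mode

The residual is near zero rather than exactly zero when retract is not the exact inverse of Local; the example output further below shows the same small chart effects.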

Relation to PriorFactor

PriorFactor<T> is a specialization that sets m = 0 and stores the origin under the name prior. It also reports an identity Jacobian by using the -Local(x, prior) form, which avoids computing derivatives of Local.

Use PriorFactor unless you explicitly need a shifted tangent-space mean or want to treat the resulting distribution as a likelihood (e.g., for probabilistic weighting or annealing strategies).
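
One way to see the relationship concretely, sketched under the assumption that the Python binding accepts an explicit zero mean: with m = 0, both factors assign the same error.

import numpy as np
import gtsam
from gtsam import Pose2, noiseModel

origin = Pose2(1.0, 2.0, 0.3)
model = noiseModel.Isotropic.Sigma(3, 0.5)
prior = gtsam.PriorFactorPose2(1, origin, model)
ext = gtsam.ExtendedPriorFactorPose2(1, origin, model, np.zeros(3))

values = gtsam.Values()
values.insert(1, Pose2(1.1, 2.1, 0.35))
print(prior.error(values), ext.error(values))  # equal: the squared norm is sign-invariant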

When to Use ExtendedPriorFactor Directly

  • You need a soft constraint centered at origin.retract(mean) but want to keep a fixed linearization frame at origin for numerical reasons.

  • You are implementing a factor that conceptually models a latent density over a manifold variable (e.g., pose priors from learned models producing mean offset vectors).

  • You want to evaluate normalized or relative likelihoods on candidate values (likelihood(x) convenience method).

  • You work with a robust noise model to downweight outliers in the tangent residual of a pseudo-prior (see the sketch after this list).

  • You are prototyping new manifold types: not having to provide analytic Jacobians for local coordinates (beyond the identity) can simplify early experiments.
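
For the robust pseudo-prior bullet above, here is a sketch of wrapping a base model in a Huber loss; the noise-model calls are standard GTSAM, while the factor constructor follows the C++ signature recapped below.

import numpy as np
import gtsam
from gtsam import Pose2, noiseModel

base = noiseModel.Isotropic.Sigma(3, 0.5)
# Huber m-estimator with the conventional 1.345 threshold
robust = noiseModel.Robust.Create(noiseModel.mEstimator.Huber.Create(1.345), base)
robust_factor = gtsam.ExtendedPriorFactorPose2(
    1, Pose2(1.0, 2.0, 0.3), robust, np.array([0.1, 0.2, 0.05]))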

C++ API Recap

Constructor (templated on VALUE):

ExtendedPriorFactor(Key key, const VALUE& origin,
                    const SharedNoiseModel& model,
                    const std::optional<Vector>& mean = {});

Key methods:

  • Vector evaluateError(const VALUE& x, OptionalMatrixType H) returns the tangent residual (and fills in the identity Jacobian when H is requested).

  • double error(const VALUE& x) returns the negative log-likelihood (with the robust loss applied).

  • double likelihood(const VALUE& x) returns exp(-error(x)).

  • Accessors: origin(), mean().

Practical Example (Pose2)

We replicate the core logic of the unit tests (see nonlinear/tests/testExtendedPriorFactor.cpp) using Python bindings.

# Example: Pose2 ExtendedPriorFactor usage
import numpy as np
import gtsam
from gtsam import Pose2, noiseModel

key = 1
origin = Pose2(1.0, 2.0, 0.3)
model = noiseModel.Isotropic.Sigma(3, 0.5)  # std dev 0.5 in all dims
mean = np.array([0.1, 0.2, 0.05])  # tangent-space shift

# Argument order follows the C++ constructor: (key, origin, model, mean)
factor = gtsam.ExtendedPriorFactorPose2(key, origin, model, mean)
# Evaluate at origin.retract(mean): should yield near-zero residual and likelihood ~ 1
x_mode = origin.retract(mean)
residual_at_mode = factor.evaluateError(x_mode)
likelihood_at_mode = factor.likelihood(x_mode)
residual_at_mode, likelihood_at_mode
(array([ 0.00987086, -0.00524786, 0. ]), 0.9997500833133725)
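
A follow-up check: at the origin itself, Local(origin, origin) = 0, so the residual reduces to minus the mean.

# Evaluating at the origin: e(origin) = -(0 + m) = -m
residual_at_origin = factor.evaluateError(origin)
residual_at_origin  # ~ array([-0.1, -0.2, -0.05])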

Interpreting the Residual

Because the reported Jacobian is the identity, this factor is numerically stable and cheap: linearization contributes just an identity block (scaled by the noise model's whitening). The residual is the signed difference in local coordinates, shifted by the mean.
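
This can be verified by linearizing. A sketch, assuming GaussianFactor.jacobian() is available in the bindings: the whitened Jacobian reaching the linear solver is a scaled identity, here (1/sigma) I = 2 I for the isotropic model above.

values = gtsam.Values()
values.insert(key, x_mode)
A, b = factor.linearize(values).jacobian()
A  # 2 * identity(3): the identity error Jacobian, whitened by 1/sigma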

Adding to a Graph

You can add ExtendedPriorFactor directly to a NonlinearFactorGraph, but unlike PriorFactor there is no addPrior<T>() sugar; you construct and add the factor explicitly.

from gtsam import NonlinearFactorGraph, Values, LevenbergMarquardtOptimizer

graph = NonlinearFactorGraph()
graph.push_back(factor)  # our ExtendedPriorFactor

initial = Values()
# Provide an initial guess somewhat away from the mode
initial.insert(key, Pose2(1.2, 1.9, 0.25))

params = gtsam.LevenbergMarquardtParams()
optimizer = LevenbergMarquardtOptimizer(graph, initial, params)
result = optimizer.optimize()
final_pose = result.atPose2(key)
final_pose
(1.02536, 2.22216, 0.35)
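
As a sanity check, the optimum should land near the shifted mode origin.retract(mean); agreement is only approximate because, as noted earlier, retract need not invert Local exactly.

expected_mode = origin.retract(mean)
expected_mode.equals(final_pose, 0.05)  # True: close to the mode, up to chart effects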

Design Notes and Pitfalls

  1. The frame of the tangent space is fixed at origin. A very large mean implicitly requests motion far from the linearization region; consider re-centering (a sketch follows this list) or a different modeling approach if optimization struggles.

  2. Robust models (e.g. noiseModel.Robust.Create(mEstimator, baseModel)) apply their loss to the squared Mahalanobis distance of the residual; this lets you treat the tangent residual as an outlier-prone quantity.

  3. For custom manifold types, ensure traits<T>::Local is implemented, and that retract is implemented consistently with it if you want to compute the mode value origin.retract(mean).

  4. Unlike some priors, this factor purposefully does not store or recompute a dynamic Jacobian; the identity assumption is a deliberate design choice.

  5. Serialization support relies on NoiseModelFactor1 for backward compatibility; keep that in mind if extending the class.
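
For note 1, a sketch of the re-centering idea using the names from the Pose2 example above: fold the mean into the origin and use an explicit zero mean, so the linearization frame sits at the intended mode. The two factors agree only near the mode, since the tangent frames differ.

# Equivalent-near-the-mode factor with a re-centered origin and zero mean
recentered = gtsam.ExtendedPriorFactorPose2(
    key, origin.retract(mean), model, np.zeros(3))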

When Not to Use

  • If you simply need a zero-mean soft prior: prefer PriorFactor.

  • If you require a hard equality constraint: consider NonlinearEquality.

  • If the residual should depend on another variable or measurement, design a custom factor instead of embedding complexity in Local.

See Also

  • PriorFactor

  • Other manifold priors: BetweenFactor, NonlinearEquality, and robust variants

  • Underlying noise models: noiseModel::Isotropic, noiseModel::Diagonal, robust creators