## Overview
The `Marginals` class in GTSAM computes Gaussian marginals from a nonlinear or linear factor graph around a chosen linearization point. In the common nonlinear case, the usual pattern is:

1. build a `NonlinearFactorGraph`,
2. optimize it to obtain a solution `Values`, and
3. construct `Marginals(graph, result)` to query uncertainty.
The most common queries are single-variable marginal covariance, single-variable marginal information, and joint covariance or information for a small set of variables. Internally, GTSAM uses the Gaussian Bayes tree to answer these queries efficiently, but users normally just call the Marginals methods directly.
For the algorithmic story behind Bayes-tree covariance recovery, see the blog post *Fast Covariance Recovery in GTSAM Bayes Trees*. For full technical details, see the CovarianceRecovery paper.
```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import L, X
```

## What Marginals returns
The main methods are:
- `marginalCovariance(key)`: dense covariance matrix for one variable
- `marginalInformation(key)`: dense information matrix for one variable
- `jointMarginalCovariance(keys)`: a `JointMarginal` containing a joint covariance over several variables
- `jointMarginalInformation(keys)`: a `JointMarginal` containing a joint information matrix over several variables
A `JointMarginal` lets you either access the full dense matrix with `fullMatrix()` or request specific blocks with `at(key_i, key_j)`. In practice, `at(...)` is the safest way to retrieve blocks because it avoids relying on the internal block layout.
## A small planar SLAM example

The example below creates a tiny nonlinear SLAM problem with three `Pose2` variables and two `Point2` landmarks. After optimizing the graph, we use `Marginals` to recover several different uncertainty queries.
```python
def create_planar_slam_problem():
    graph = gtsam.NonlinearFactorGraph()

    # Three poses along a corridor and two landmarks.
    x1 = X(1)
    x2 = X(2)
    x3 = X(3)
    l1 = L(1)
    l2 = L(2)

    prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.3, 0.3, 0.1]))
    odometry_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
    measurement_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.2]))

    # Prior on the first pose, odometry between consecutive poses.
    graph.addPriorPose2(x1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise)
    graph.add(gtsam.BetweenFactorPose2(x1, x2, gtsam.Pose2(2.0, 0.0, 0.0), odometry_noise))
    graph.add(gtsam.BetweenFactorPose2(x2, x3, gtsam.Pose2(2.0, 0.0, 0.0), odometry_noise))

    # Bearing-range measurements from poses to landmarks.
    graph.add(gtsam.BearingRangeFactor2D(x1, l1, gtsam.Rot2.fromDegrees(45), np.sqrt(8.0), measurement_noise))
    graph.add(gtsam.BearingRangeFactor2D(x2, l1, gtsam.Rot2.fromDegrees(90), 2.0, measurement_noise))
    graph.add(gtsam.BearingRangeFactor2D(x3, l2, gtsam.Rot2.fromDegrees(90), 2.0, measurement_noise))

    # Deliberately perturbed initial estimates.
    initial = gtsam.Values()
    initial.insert(x1, gtsam.Pose2(0.1, -0.1, 0.05))
    initial.insert(x2, gtsam.Pose2(2.1, 0.1, -0.02))
    initial.insert(x3, gtsam.Pose2(4.2, -0.1, 0.04))
    initial.insert(l1, gtsam.Point2(1.8, 2.2))
    initial.insert(l2, gtsam.Point2(4.2, 2.1))

    return graph, initial, (x1, x2, x3, l1, l2)


graph, initial, (x1, x2, x3, l1, l2) = create_planar_slam_problem()

# Optimize, then construct Marginals around the solution.
params = gtsam.LevenbergMarquardtParams()
optimizer = gtsam.LevenbergMarquardtOptimizer(graph, initial, params)
result = optimizer.optimize()
marginals = gtsam.Marginals(graph, result)
```

## Single-variable queries
For one variable, Marginals can return either the covariance matrix or the information matrix.
```python
pose2_covariance = marginals.marginalCovariance(x2)
pose2_information = marginals.marginalInformation(x2)
print("Covariance of x2:\n", np.round(pose2_covariance, 4))
print("Information of x2:\n", np.round(pose2_information, 4))
```

```
Covariance of x2:
 [[ 0.121  -0.0013  0.0045]
  [-0.0013  0.1584  0.0206]
  [ 0.0045  0.0206  0.0177]]
Information of x2:
 [[ 8.3682  0.4077 -2.6045]
  [ 0.4077  7.4623 -8.7872]
  [-2.6045 -8.7872 67.2517]]
```
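As a sanity check, the two matrices are inverses of each other: for a single variable, `marginalInformation` is exactly the inverse of `marginalCovariance`. A small numpy sketch using the rounded values printed above (the tolerance is loose because the printout is rounded to four decimals):

```python
import numpy as np

# Rounded values from the printed output above.
cov_x2 = np.array([
    [ 0.121,  -0.0013,  0.0045],
    [-0.0013,  0.1584,  0.0206],
    [ 0.0045,  0.0206,  0.0177],
])
info_x2 = np.array([
    [ 8.3682,  0.4077, -2.6045],
    [ 0.4077,  7.4623, -8.7872],
    [-2.6045, -8.7872, 67.2517],
])

# Inverting the (rounded) covariance reproduces the information matrix
# up to rounding error.
max_dev = np.max(np.abs(np.linalg.inv(cov_x2) - info_x2))
print(max_dev)
```

This also shows why the information matrix can look much "larger" than the covariance: strongly correlated, low-variance directions (here the heading of `x2`) produce large entries after inversion.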
## Joint queries
For several variables, Marginals returns a JointMarginal. You can inspect the full dense matrix, or retrieve individual covariance blocks by key.
```python
joint_covariance = marginals.jointMarginalCovariance([x2, l1, x3])
joint_information = marginals.jointMarginalInformation([x2, l1, x3])
print("Full joint covariance:\n", np.round(joint_covariance.fullMatrix(), 4))
print("Covariance block Cov[x2, l1]:\n", np.round(joint_covariance.at(x2, l1), 4))
print("Information block Info[x3, x3]:\n", np.round(joint_information.at(x3, x3), 4))
```

```
Full joint covariance:
 [[ 0.1687 -0.0477  0.1029 -0.0439 -0.0265  0.1029 -0.0968 -0.0265]
  [-0.0477  0.1635 -0.0026  0.1468  0.0213 -0.0026  0.1894  0.0213]
  [ 0.1029 -0.0026  0.121  -0.0013  0.0045  0.121   0.0077  0.0045]
  [-0.0439  0.1468 -0.0013  0.1584  0.0206 -0.0013  0.1997  0.0206]
  [-0.0265  0.0213  0.0045  0.0206  0.0177  0.0045  0.0561  0.0177]
  [ 0.1029 -0.0026  0.121  -0.0013  0.0045  0.161   0.0077  0.0045]
  [-0.0968  0.1894  0.0077  0.1997  0.0561  0.0077  0.3519  0.0561]
  [-0.0265  0.0213  0.0045  0.0206  0.0177  0.0045  0.0561  0.0277]]
Covariance block Cov[x2, l1]:
 [[ 0.1029 -0.0026]
  [-0.0439  0.1468]
  [-0.0265  0.0213]]
Information block Info[x3, x3]:
 [[ 25.  -0.  -0.]
  [ -0.  25.  -0.]
  [ -0.  -0. 100.]]
```
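The output above also illustrates why `at(...)` is safer than slicing `fullMatrix()` by hand: the dense layout follows GTSAM's internal key ordering, which in this run appears to be `l1, x2, x3` (sorted keys), not the `x2, l1, x3` order the query used. A numpy sketch with the rounded values printed above, manually recovering the same `Cov[x2, l1]` block:

```python
import numpy as np

# Full joint covariance as printed above. Layout: [l1 (2 dims), x2 (3), x3 (3)],
# i.e. GTSAM's internal key ordering, not the order the keys were requested in.
full = np.array([
    [ 0.1687, -0.0477,  0.1029, -0.0439, -0.0265,  0.1029, -0.0968, -0.0265],
    [-0.0477,  0.1635, -0.0026,  0.1468,  0.0213, -0.0026,  0.1894,  0.0213],
    [ 0.1029, -0.0026,  0.121,  -0.0013,  0.0045,  0.121,   0.0077,  0.0045],
    [-0.0439,  0.1468, -0.0013,  0.1584,  0.0206, -0.0013,  0.1997,  0.0206],
    [-0.0265,  0.0213,  0.0045,  0.0206,  0.0177,  0.0045,  0.0561,  0.0177],
    [ 0.1029, -0.0026,  0.121,  -0.0013,  0.0045,  0.161,   0.0077,  0.0045],
    [-0.0968,  0.1894,  0.0077,  0.1997,  0.0561,  0.0077,  0.3519,  0.0561],
    [-0.0265,  0.0213,  0.0045,  0.0206,  0.0177,  0.0045,  0.0561,  0.0277],
])

# x2 occupies rows/cols 2:5, l1 occupies rows/cols 0:2, so the
# x2-rows against the l1-columns give the 3x2 block at(x2, l1) returned.
cov_x2_l1 = full[2:5, 0:2]
print(cov_x2_l1)
```

Getting those index ranges right requires knowing both the internal ordering and each variable's dimension; `at(x2, l1)` does the bookkeeping for you.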
## Performance guidance
Users normally do not need to manage Bayes trees directly when working with Marginals. GTSAM builds or reuses the Gaussian Bayes tree internally and exploits its structure when answering covariance queries.
In practice, that means:

- single-variable queries are already very efficient,
- two-variable joint queries are likewise localized and efficient, and
- larger joint queries are much faster than they used to be, because the internal support extraction only touches the parts of the Bayes tree that the requested variables need.

So the user-facing advice is simple: call the query you actually need, and let `Marginals` manage the Bayes-tree details.
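One reason covariance and information queries have different costs is a general property of Gaussians, not anything GTSAM-specific: marginalizing in covariance form is a cheap block selection, while in information form it requires a Schur complement over the eliminated variables. A generic numpy sketch of this identity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random well-conditioned joint information matrix over variables
# A (first 3 dims) and B (last 5 dims).
M = rng.standard_normal((8, 8))
Lambda = M @ M.T + 8 * np.eye(8)   # joint information
Sigma = np.linalg.inv(Lambda)      # joint covariance

# Marginal over A in covariance form: just read off the block.
Sigma_AA = Sigma[:3, :3]

# Marginal over A in information form: Schur complement onto A.
L_AA, L_AB = Lambda[:3, :3], Lambda[:3, 3:]
L_BB, L_BA = Lambda[3:, 3:], Lambda[3:, :3]
Lambda_marg = L_AA - L_AB @ np.linalg.solve(L_BB, L_BA)

# The two routes agree: the marginal information is the inverse
# of the covariance block.
print(np.allclose(Lambda_marg, np.linalg.inv(Sigma_AA)))
```

This is why covariance recovery from the (information-form) Bayes tree needs the selective back-substitution machinery described in the blog post, rather than simply reading off blocks.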
## See also

- Paper: CovarianceRecovery.pdf
- Header: Marginals.h