HybridSmoother implements an incremental fixed-lag smoother for hybrid systems. Unlike a full iSAM approach, which retains the entire history, a fixed-lag smoother marginalizes out older variables to maintain a bounded-size active window.
It includes features for:
- Storing all factors (HybridNonlinearFactorGraph).
- Maintaining a linearization point (Values).
- Holding the current posterior (HybridBayesNet).
- Optionally removing "dead modes" based on a marginal probability threshold (marginalThreshold_).
- An update method to incorporate new factors.
- A relinearize method.
import gtsam
import numpy as np
from gtsam import (
HybridSmoother,
HybridNonlinearFactorGraph,
PriorFactorPose2, BetweenFactorPose2, Pose2, Point3,
DecisionTreeFactor, HybridNonlinearFactor,
Values
)
from gtsam.symbol_shorthand import X, D

Initialization¶
The smoother is optionally constructed with a marginalThreshold for dead-mode removal. It starts with an empty internal state.
# Initialize without dead mode removal
smoother1 = gtsam.HybridSmoother()
print("Initialized Smoother 1 (no threshold)")
# Initialize with dead mode removal threshold
threshold = 0.99
smoother2 = gtsam.HybridSmoother(marginalThreshold=threshold)
print(f"Initialized Smoother 2 (threshold={threshold})")
print(f"  Smoother 2 initial fixed values: {smoother2.fixedValues()}")

Initialized Smoother 1 (no threshold)
Initialized Smoother 2 (threshold=0.99)
Smoother 2 initial fixed values: DiscreteValues{}
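The dead-mode rule behind marginalThreshold can be illustrated in plain Python (a toy sketch of the criterion, not GTSAM's implementation; the function name and data layout here are invented for illustration): any discrete variable whose most probable assignment has marginal probability at or above the threshold is treated as decided and frozen.

```python
def find_dead_modes(marginals, threshold):
    """Toy dead-mode rule: marginals maps variable -> {assignment: probability}.
    Returns the assignments that are 'fixed' because one mode dominates."""
    fixed = {}
    for var, dist in marginals.items():
        best, p = max(dist.items(), key=lambda kv: kv[1])
        if p >= threshold:
            fixed[var] = best  # mode is effectively decided; freeze it
    return fixed

# D0 is nearly certain, D1 is still ambiguous:
marginals = {"D0": {0: 0.995, 1: 0.005}, "D1": {0: 0.6, 1: 0.4}}
dead = find_dead_modes(marginals, 0.99)
print(dead)  # -> {'D0': 0}: only D0 crosses the 0.99 threshold
```

This is the same effect seen later in the outputs, where D(0) appears in the smoother's fixed values after its marginal exceeds the threshold.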
Update Steps¶
The update method incorporates new nonlinear factors and initial estimates. It performs linearization, updates the internal HybridBayesNet posterior (by eliminating variables related to the new factors against the current posterior), and updates the linearization point. Pruning based on maxNrLeaves and marginalThreshold occurs during this step.
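Before the full example, the maxNrLeaves pruning step can be sketched in plain Python (a toy illustration, not GTSAM's DecisionTree pruning; the helper name is invented): keep only the most probable discrete assignments and renormalize what remains.

```python
def prune_leaves(leaf_probs, max_nr_leaves):
    """Keep the max_nr_leaves most probable assignments and renormalize.
    leaf_probs: dict mapping a discrete assignment tuple -> probability."""
    ranked = sorted(leaf_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(ranked[:max_nr_leaves])
    total = sum(kept.values())
    return {a: p / total for a, p in kept.items()}

# Four leaves over two binary modes; prune to the two best:
leaves = {(0, 0): 0.70, (0, 1): 0.20, (1, 0): 0.08, (1, 1): 0.02}
pruned = prune_leaves(leaves, 2)  # keeps (0,0) and (0,1), renormalized
```

In the real smoother this bounds the number of hypotheses carried in the posterior HybridBayesNet from step to step.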
smoother = HybridSmoother(marginalThreshold=0.99)
# --- Initial Step (Pose 0 and Mode Prior) ---
step0_graph = HybridNonlinearFactorGraph()
step0_values = Values()
dk0 = (D(0), 2)
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(Point3(0.1, 0.1, 0.05))
step0_graph.add(PriorFactorPose2(X(0), Pose2(0, 0, 0), prior_noise))
step0_values.insert(X(0), Pose2(0.0, 0.0, 0.0)) # Initial estimate for X0
step0_graph.add(DecisionTreeFactor([dk0], "0.995 0.005")) # High prior on D0=0
print("--- Update 0 ---")
smoother.update(step0_graph, step0_values, maxNrLeaves=10)
print("Smoother state after update 0:")
print(f" Lin Point Size: {smoother.linearizationPoint().size()}")
print(f" Factors Size: {smoother.allFactors().size()}")
print(" Posterior HybridBayesNet:")
smoother.hybridBayesNet().print()
print(f" Fixed Values: {smoother.fixedValues()}")
# --- Second Step (Pose 1 and Hybrid Odometry) ---
step1_graph = HybridNonlinearFactorGraph()
step1_values = Values()
noise0 = gtsam.noiseModel.Diagonal.Sigmas(Point3(0.1, 0.1, np.radians(1)))
odom0 = BetweenFactorPose2(X(0), X(1), Pose2(1.0, 0, 0), noise0)
noise1 = gtsam.noiseModel.Diagonal.Sigmas(Point3(0.5, 0.5, np.radians(10)))
odom1 = BetweenFactorPose2(X(0), X(1), Pose2(1.0, 0, 0), noise1)
hybrid_odom = HybridNonlinearFactor(dk0, [odom0, odom1])
step1_graph.add(hybrid_odom)
x0_estimate = smoother.linearizationPoint().atPose2(X(0))
x1_initial_guess = x0_estimate.compose(Pose2(1.0, 0, 0))
step1_values.insert(X(0), x0_estimate)
step1_values.insert(X(1), x1_initial_guess)
print("\n--- Update 1 ---")
smoother.update(step1_graph, step1_values, maxNrLeaves=10)
print("Smoother state after update 1:")
print(f" Lin Point Size: {smoother.linearizationPoint().size()}")
print(" Posterior HybridBayesNet:")
smoother.hybridBayesNet().print()
print(f"  Fixed Values: {smoother.fixedValues()}")

--- Update 0 ---
Smoother state after update 0:
Lin Point Size: 1
Factors Size: 2
Posterior HybridBayesNet:
HybridBayesNet
size: 1
conditional 0: p(x0)
R = [ 10 0 0 ]
[ 0 10 0 ]
[ 0 0 20 ]
d = [ 0 0 0 ]
mean: 1 elements
x0: 0 0 0
logNormalizationConstant: 4.84409
No noise model
Fixed Values: DiscreteValues{7205759403792793600: 0}
--- Update 1 ---
Smoother state after update 1:
Lin Point Size: 2
Posterior HybridBayesNet:
HybridBayesNet
size: 2
conditional 0: p(x0 | x1)
R = [ 14.1421 0 0 ]
[ 0 14.1421 7.07107 ]
[ 0 0 61.0967 ]
S[x1] = [ -7.07107 0 0 ]
[ 0 -7.07107 0 ]
[ 0 -0.818375 -53.7313 ]
d = [ 0 0 0 ]
logNormalizationConstant: 6.65396
No noise model
conditional 1: p(x1)
R = [ 7.07107 0 0 ]
[ 0 7.02355 -6.2607 ]
[ 0 0 18.8827 ]
d = [ 0 0 0 ]
mean: 1 elements
x1: 0 0 0
logNormalizationConstant: 4.08671
No noise model
Fixed Values: DiscreteValues{7205759403792793600: 0}
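The fixed-values printout shows a raw 64-bit key, 7205759403792793600. GTSAM's Symbol packs an 8-bit character into the high byte and a 56-bit index into the low bits, so the key can be decoded by hand:

```python
key = 7205759403792793600      # raw key printed in the fixed values above
char = chr(key >> 56)          # high byte holds the symbol character
index = key & ((1 << 56) - 1)  # low 56 bits hold the index
print(char, index)             # -> d 0, i.e. the discrete variable D(0)
```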
Accessing State and Optimization¶
The smoother exposes its current linearization point, the posterior HybridBayesNet, and the fixed discrete values, and can optimize the posterior for a MAP solution.
# Get the current linearization point
lin_point = smoother.linearizationPoint()
print("\nCurrent Linearization Point:")
lin_point.print()
# Get the posterior Bayes Net
posterior_hbn = smoother.hybridBayesNet()
print("\nCurrent Posterior HybridBayesNet:")
posterior_hbn.print()
# Get fixed values determined during updates
fixed_vals = smoother.fixedValues()
print(f"\nCurrent Fixed Values: {fixed_vals}")
# Optimize the current posterior Bayes Net (may implicitly use fixed_vals)
map_solution = smoother.optimize()
print("\nMAP Solution from Smoother:")
map_solution.print()
# Note: The solution should respect the fixed values.
if D(0) in fixed_vals:
    print(f"  Solution D(0): {map_solution.discrete()[D(0)]}, Fixed D(0): {fixed_vals[D(0)]}")
Current Linearization Point:
Values with 2 values:
Value x0: (gtsam::Pose2)
(0, 0, 0)
Value x1: (gtsam::Pose2)
(1, 0, 0)
Current Posterior HybridBayesNet:
HybridBayesNet
size: 2
conditional 0: p(x0 | x1)
R = [ 14.1421 0 0 ]
[ 0 14.1421 7.07107 ]
[ 0 0 61.0967 ]
S[x1] = [ -7.07107 0 0 ]
[ 0 -7.07107 0 ]
[ 0 -0.818375 -53.7313 ]
d = [ 0 0 0 ]
logNormalizationConstant: 6.65396
No noise model
conditional 1: p(x1)
R = [ 7.07107 0 0 ]
[ 0 7.02355 -6.2607 ]
[ 0 0 18.8827 ]
d = [ 0 0 0 ]
mean: 1 elements
x1: 0 0 0
logNormalizationConstant: 4.08671
No noise model
Current Fixed Values: DiscreteValues{7205759403792793600: 0}
MAP Solution from Smoother:
HybridValues:
Continuous: 2 elements
x0: 0 0 0
x1: 0 0 0
Discrete: (d0, 0)
Nonlinear
Values with 0 values:
Solution D(0): 0, Fixed D(0): 0
Relinearization¶
The relinearize method rebuilds the posterior HybridBayesNet by relinearizing all stored factors (allFactors_) around the current linearization point.
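Why rebuilding the linearization matters can be seen in a one-dimensional Gauss-Newton toy problem, unrelated to the GTSAM objects above: each iteration "relinearizes" the residual around the current estimate and solves the resulting linear problem.

```python
# 1-D toy: minimize 0.5 * r(x)^2 with r(x) = x**2 - 2 (minimum at sqrt(2)).
def gauss_newton_step(x):
    r = x * x - 2.0  # residual at the current linearization point
    J = 2.0 * x      # Jacobian of the residual at that point
    return x - r / J # solve the linearized problem and update the estimate

x = 1.0                        # initial linearization point
for _ in range(5):
    x = gauss_newton_step(x)   # relinearize + solve, once per iteration
print(x)  # converges to sqrt(2) ~ 1.41421356
```

A linearization frozen at x = 1 would give the wrong answer; refreshing it around the improved estimate is exactly the role relinearize plays for the stored hybrid factors.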
print("\nRelinearizing...")
# This might be computationally expensive as it involves all factors
try:
    smoother.relinearize()
    print("Relinearization complete. Posterior HybridBayesNet:")
    smoother.hybridBayesNet().print()
    # Optimize again after relinearization
    map_solution_relinearized = smoother.optimize()
    print("\nMAP Solution after relinearization:")
    map_solution_relinearized.print()
except Exception as e:
    print(f"Relinearization failed: {e}")
Relinearizing...
Relinearization complete. Posterior HybridBayesNet:
HybridBayesNet
size: 2
conditional 0: p(x1 | x0)
R = [ 10 0 0 ]
[ 0 10 0 ]
[ 0 0 57.2958 ]
S[x0] = [ -10 0 0 ]
[ 0 -10 -10 ]
[ 0 0 -57.2958 ]
d = [ 0 0 0 ]
logNormalizationConstant: 5.89658
No noise model
conditional 1: p(x0)
R = [ 10 0 0 ]
[ 0 10 0 ]
[ 0 0 20 ]
d = [ 0 0 0 ]
mean: 1 elements
x0: 0 0 0
logNormalizationConstant: 4.84409
No noise model
MAP Solution after relinearization:
HybridValues:
Continuous: 2 elements
x0: 0 0 0
x1: 0 0 0
Discrete: (d0, 0)
Nonlinear
Values with 0 values: