
# GaussNewtonOptimizer

## Overview

The GaussNewtonOptimizer class in GTSAM is designed to optimize nonlinear factor graphs using the Gauss-Newton algorithm. This class is particularly suited for problems where the cost function can be approximated well by a quadratic function near the minimum. The Gauss-Newton method is an iterative optimization technique that updates the solution by linearizing the nonlinear system at each iteration.

The Gauss-Newton algorithm is based on the idea of linearizing the nonlinear residuals $r(x)$ around the current estimate $x_k$. The update step is derived from solving the normal equations:

$$J(x_k)^T J(x_k) \Delta x = -J(x_k)^T r(x_k)$$

where $J(x_k)$ is the Jacobian of the residuals with respect to the variables. The solution $\Delta x$ is used to update the estimate:

$$x_{k+1} = x_k + \Delta x$$

This process is repeated iteratively until convergence.
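The update above can be sketched in plain NumPy on a toy one-parameter curve-fitting problem. This is an illustrative sketch of the algorithm only, not GTSAM's implementation; the model, data, and tolerances are invented for the example:

```python
import numpy as np

# Data generated from a known model y = exp(0.5 t); Gauss-Newton
# should recover theta = 0.5 from the samples.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.exp(0.5 * t)

def residual(theta):
    # r(x): difference between the model prediction and the data
    return np.exp(theta * t) - y

def jacobian(theta):
    # J(x): derivative of each residual w.r.t. theta (one column)
    return (t * np.exp(theta * t)).reshape(-1, 1)

theta = np.array([0.0])  # initial estimate x_0
for _ in range(20):
    r = residual(theta[0])
    J = jacobian(theta[0])
    # Solve the normal equations J^T J dx = -J^T r
    dx = np.linalg.solve(J.T @ J, -(J.T @ r))
    theta = theta + dx   # x_{k+1} = x_k + dx
    if np.linalg.norm(dx) < 1e-10:
        break

print(theta[0])  # converges to ~0.5
```

Because the residuals are zero at the solution, the quadratic approximation is exact in the limit and the iteration converges rapidly; this is exactly the regime where Gauss-Newton shines.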

Key features:

- **Iterative Optimization**: The optimizer refines the solution iteratively by linearizing the nonlinear system around the current estimate.
- **Convergence Control**: Convergence is controlled through parameters such as the maximum number of iterations and the relative error tolerance.
- **Integration with GTSAM**: The optimizer integrates seamlessly with GTSAM's factor graph framework, allowing it to be used with various types of factors and variables.

## Key Methods

Please see the base class NonlinearOptimizer.

## Parameters

The Gauss-Newton optimizer uses the standard optimization parameters inherited from NonlinearOptimizerParams, which include:

- Maximum iterations
- Relative and absolute error thresholds
- Error function verbosity
- Linear solver type
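To make the error thresholds concrete, the sketch below mirrors the kind of stopping test these parameters drive. It is a simplified illustration of the role of the relative and absolute error tolerances, not GTSAM's exact convergence logic; the function name and default values here are invented:

```python
def check_convergence(current_error, new_error,
                      rel_tol=1e-5, abs_tol=1e-5):
    """Stop when the error decrease between iterations is small,
    either in absolute terms or relative to the current error.

    Simplified illustration only; not GTSAM's implementation.
    """
    decrease = current_error - new_error
    if abs(decrease) < abs_tol:
        return True  # absolute error threshold reached
    if current_error > 0 and abs(decrease) < rel_tol * current_error:
        return True  # relative error threshold reached
    return False

# A tiny decrease triggers convergence; a large one keeps iterating.
print(check_convergence(1.0, 1.0 - 1e-6))  # True
print(check_convergence(1.0, 0.5))         # False
```

Tightening these tolerances trades extra iterations for a more precise solution; loosening them stops the optimizer earlier with a rougher estimate.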

## Usage Considerations

- **Initial Guess**: The quality of the initial estimate strongly affects both convergence and speed.
- **Non-convexity**: Because the method relies on local linear approximations, it may diverge on highly non-convex problems or with poor initial estimates.
- **Performance**: Because it solves the undamped normal equations directly, Gauss-Newton is typically faster per iteration than damped methods such as Levenberg-Marquardt on problems that are well approximated by a quadratic model near the solution, at the cost of robustness when that approximation is poor.

## Files