
LevenbergMarquardtOptimizer

Overview

The LevenbergMarquardtOptimizer class in GTSAM is a specialized optimizer that implements the Levenberg-Marquardt algorithm. This algorithm is a popular choice for solving non-linear least squares problems, which are common in various applications such as computer vision, robotics, and machine learning.

The Levenberg-Marquardt algorithm is an iterative technique that interpolates between the Gauss-Newton algorithm and the method of gradient descent. It is particularly useful for optimizing problems where the solution is expected to be near the initial guess.

The Levenberg-Marquardt algorithm seeks to minimize a cost function $F(x)$ of the form:

$$F(x) = \frac{1}{2} \sum_{i=1}^{m} r_i(x)^2$$

where $r_i(x)$ are the residuals. The update rule for the algorithm is given by:

$$x_{k+1} = x_k - (J^T J + \lambda I)^{-1} J^T r$$

Here, $J$ is the Jacobian matrix of the residuals, $\lambda$ is the damping parameter, and $I$ is the identity matrix.
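To make the update concrete, here is a minimal NumPy sketch of a single damped step on a toy two-residual problem. The residual function and all names below are our own illustration, not GTSAM's API:

```python
import numpy as np

# Illustrative only -- a hand-rolled single LM step on a toy problem,
# not GTSAM code. Residual r(x) = [x0^2 - 4, x1 - 1].
def residual(x):
    return np.array([x[0] ** 2 - 4.0, x[1] - 1.0])

def jacobian(x):
    return np.array([[2.0 * x[0], 0.0],
                     [0.0, 1.0]])

def lm_step(x, lam):
    """One damped update: x - (J^T J + lambda I)^{-1} J^T r."""
    J, r = jacobian(x), residual(x)
    return x - np.linalg.solve(J.T @ J + lam * np.eye(len(x)), J.T @ r)

x = np.array([1.0, 0.0])
x_new = lm_step(x, lam=1e-3)   # small lambda: close to a Gauss-Newton step
```

Large values of $\lambda$ shrink the step toward a scaled gradient-descent direction, while small values approach the full Gauss-Newton step.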

Key features:

  • Non-linear Optimization: The class is designed to handle non-linear optimization problems efficiently.
  • Damping Mechanism: It incorporates a damping parameter to control the step size, balancing between the Gauss-Newton and gradient descent methods.
  • Iterative Improvement: The optimizer iteratively refines the solution, reducing the error at each step.
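The damping mechanism and iterative refinement above can be sketched end to end with the classic accept/reject schedule. This is a simplified stand-in for what the optimizer does internally, with parameter names loosely modeled on `lambdaInitial`, `lambdaFactor`, and `lambdaUpperBound` (the actual GTSAM logic is more involved):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-5, factor=10.0,
                        lam_max=1e5, iters=100, tol=1e-10):
    """Toy LM loop: accept a step if it lowers the cost (then relax lambda),
    otherwise reject it and increase lambda. Not GTSAM's implementation."""
    x = np.asarray(x0, dtype=float)
    cost = lambda v: 0.5 * residual(v) @ residual(v)
    for _ in range(iters):
        J, r = jacobian(x), residual(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), J.T @ r)
        x_try = x - step
        if cost(x_try) < cost(x):       # step accepted: relax damping
            x, lam = x_try, lam / factor
        else:                           # step rejected: increase damping
            lam *= factor
            if lam > lam_max:           # give up, mirroring lambdaUpperBound
                break
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy linear least squares: r(x) = [x0 - 2, 2*(x1 + 1)], minimum at (2, -1).
res = lambda x: np.array([x[0] - 2.0, 2.0 * (x[1] + 1.0)])
jac = lambda x: np.array([[1.0, 0.0], [0.0, 2.0]])
x_opt = levenberg_marquardt(res, jac, [10.0, 10.0])
```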

Key Methods

Please see the base class NonlinearOptimizer.

Parameters

The LevenbergMarquardtParams class defines parameters specific to this optimization algorithm:

| Parameter | Type | Default Value | Description |
| --- | --- | --- | --- |
| lambdaInitial | double | 1e-5 | The initial Levenberg-Marquardt damping term |
| lambdaFactor | double | 10.0 | The amount by which to multiply or divide lambda when adjusting it |
| lambdaUpperBound | double | 1e5 | The maximum lambda to try before assuming the optimization has failed |
| lambdaLowerBound | double | 0.0 | The minimum lambda used in LM |
| verbosityLM | VerbosityLM | SILENT | The verbosity level for Levenberg-Marquardt |
| minModelFidelity | double | 1e-3 | Lower bound on the model fidelity required to accept the result of an LM iteration |
| logFile | std::string | "" | An optional CSV log file, with columns [iteration, time, error, lambda] |
| diagonalDamping | bool | false | If true, use the diagonal of the Hessian for damping |
| useFixedLambdaFactor | bool | true | If true, applies a constant increase (or decrease) to lambda according to lambdaFactor |
| minDiagonal | double | 1e-6 | When using diagonal damping, saturates the minimum diagonal entries |
| maxDiagonal | double | 1e32 | When using diagonal damping, saturates the maximum diagonal entries |
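As a sketch of what `diagonalDamping` with `minDiagonal`/`maxDiagonal` suggests (the exact GTSAM internals may differ), diagonal damping scales the diagonal of $J^T J$ rather than the identity matrix, with the diagonal entries saturated between the two bounds:

```python
import numpy as np

# Marquardt-style diagonal damping with saturation -- an illustration of the
# diagonalDamping / minDiagonal / maxDiagonal parameters, not GTSAM's code.
def damped_system(J, lam, min_diag=1e-6, max_diag=1e32):
    H = J.T @ J
    d = np.clip(np.diag(H), min_diag, max_diag)   # saturate diagonal entries
    return H + lam * np.diag(d)                   # J^T J + lambda * diag(J^T J)

# A Jacobian with one nearly-zero column: without saturation, the damping
# on the second variable would be vanishingly small.
J = np.array([[1.0, 0.0], [0.0, 1e-9]])
A = damped_system(J, lam=1.0)
```

Scaling by the (saturated) diagonal makes the damping invariant to per-variable rescaling, which is why it is often preferred for badly scaled problems.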

These parameters complement the standard optimization parameters inherited from NonlinearOptimizerParams, which include:

  • Maximum iterations
  • Relative and absolute error thresholds
  • Error function verbosity
  • Linear solver type
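A plausible sketch of how the relative and absolute error thresholds drive the stopping test is shown below; this mirrors the common LM convention and is not GTSAM's exact code:

```python
# Hypothetical convergence check driven by the inherited relative and
# absolute error thresholds (names and logic are ours, not GTSAM's).
def converged(prev_error, new_error, rel_tol=1e-5, abs_tol=1e-5):
    """Declare convergence when the cost decrease is small both absolutely
    and relative to the previous error."""
    decrease = prev_error - new_error
    if decrease < 0:          # error went up: not converged
        return False
    relative = decrease / prev_error if prev_error > 0 else 0.0
    return decrease < abs_tol and relative < rel_tol
```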

Usage Notes

  • The choice of the initial guess can significantly affect the convergence speed and the quality of the solution.
  • Proper tuning of the damping parameter $\lambda$ is crucial for balancing the convergence rate and stability.
  • The optimizer is most effective when the residuals are approximately linear near the solution.
