
DoglegOptimizer

Overview

The DoglegOptimizer class in GTSAM is a specialized optimization algorithm designed for solving nonlinear least squares problems. It implements the Dogleg method, which is a hybrid approach combining the steepest descent and Gauss-Newton methods.

The Dogleg method is characterized by its use of two distinct steps:

  1. Cauchy Point: The steepest descent direction, calculated as:

    p_u = -\alpha \nabla f(x)

    where α is a scalar step size.

  2. Gauss-Newton Step: The solution to the linearized problem, providing a more accurate but computationally expensive step:

    p_{gn} = -(J^T J)^{-1} J^T r

    where J is the Jacobian matrix and r is the residual vector.

The Dogleg step, p_{dl}, is a combination of these two steps, determined by the trust region radius Δ: the full Gauss-Newton step is taken when it fits inside the trust region, a truncated steepest descent step is taken when even the Cauchy point lies outside it, and otherwise the step is taken along the "dogleg" path from the Cauchy point toward the Gauss-Newton point, up to the trust region boundary.
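The two steps and their combination can be sketched with NumPy. This is a minimal textbook implementation of the dogleg step for a linearized problem, not GTSAM's internal code:

```python
import numpy as np

def dogleg_step(J, r, delta):
    """Classic dogleg step for the linearized problem min 0.5 * ||J p + r||^2.

    J: Jacobian matrix, r: residual vector, delta: trust region radius.
    """
    g = J.T @ r                                  # gradient at p = 0
    # Cauchy point: steepest descent with the optimal scalar step size alpha
    alpha = (g @ g) / (g @ (J.T @ (J @ g)))
    p_u = -alpha * g
    # Gauss-Newton step: solve the normal equations (J^T J) p = -J^T r
    p_gn = -np.linalg.solve(J.T @ J, g)

    if np.linalg.norm(p_gn) <= delta:
        return p_gn                              # full Gauss-Newton step fits
    if np.linalg.norm(p_u) >= delta:
        # Even the Cauchy point is outside: truncated steepest descent
        return (delta / np.linalg.norm(p_u)) * p_u
    # Otherwise walk from p_u toward p_gn until hitting the boundary
    d = p_gn - p_u
    a, b, c = d @ d, 2 * (p_u @ d), p_u @ p_u - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_u + tau * d
```

With a large radius the function returns the exact Gauss-Newton step; with a small radius it returns a step of length exactly `delta` along the steepest descent direction.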

Key features:

  • Hybrid Approach: Combines the strengths of both the steepest descent and Gauss-Newton methods.
  • Trust Region Method: Utilizes a trust region to determine the step size, balancing between the accuracy of Gauss-Newton and the robustness of steepest descent.
  • Efficient for Nonlinear Problems: Designed to handle complex nonlinear least squares problems effectively.

Key Methods

Please see the base class NonlinearOptimizer.

Parameters

The DoglegParams class defines parameters specific to Powell’s Dogleg optimization algorithm:

  • deltaInitial: Initial trust region radius that controls step size (default: 1.0)
  • verbosityDL: Controls algorithm-specific diagnostic output (options: SILENT, VERBOSE)

These parameters complement the standard optimization parameters inherited from NonlinearOptimizerParams, which include:

  • Maximum iterations
  • Relative and absolute error thresholds
  • Error function verbosity
  • Linear solver type

Powell’s Dogleg algorithm combines Gauss-Newton and gradient descent approaches within a trust region framework. The deltaInitial parameter defines the initial size of this trust region, which adaptively changes during optimization based on how well the linear approximation matches the nonlinear function.
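The adaptive change of the trust region can be sketched with the standard gain-ratio rule, where ρ is the actual error reduction divided by the reduction predicted by the linear model. The thresholds and scaling factors below are the common textbook choices, an assumption rather than GTSAM's exact constants:

```python
def update_trust_region(delta, rho, step_norm, delta_max=1e5):
    """Standard gain-ratio trust region update (textbook constants, assumed).

    rho: actual reduction / predicted reduction for the last step.
    step_norm: length of the step that was just taken.
    """
    if rho < 0.25:
        return 0.25 * delta          # poor model agreement: shrink the region
    if rho > 0.75 and abs(step_norm - delta) < 1e-9 * max(delta, 1.0):
        return min(2.0 * delta, delta_max)  # good agreement at the boundary: grow
    return delta                     # otherwise keep the radius unchanged
```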

Usage Considerations

  • Initial Guess: The performance of the Dogleg optimizer can be sensitive to the initial guess. A good initial estimate can significantly speed up convergence.
  • Parameter Tuning: The choice of the initial trust region radius and other parameters can affect the convergence rate and stability of the optimization.

Files