Overview¶
The NonlinearConjugateGradientOptimizer class in GTSAM implements the nonlinear conjugate gradient method for optimizing nonlinear functions. This optimizer is particularly useful for large-scale optimization problems where the Hessian matrix is not easily computed or stored. The conjugate gradient method is an iterative algorithm that seeks the minimum of a function by following a sequence of conjugate search directions.
The nonlinear conjugate gradient method seeks to minimize a nonlinear function $f$ by iteratively updating the solution according to:

$$x_{k+1} = x_k + \alpha_k d_k$$

where $d_k$ is the search direction and $\alpha_k$ is the step size determined by a line search. The search direction is computed using the gradient of the function and a conjugate gradient update formula,

$$d_{k+1} = -\nabla f(x_{k+1}) + \beta_k d_k,$$

where $\beta_k$ is given, for example, by the Fletcher-Reeves or Polak-Ribière formulas:

Fletcher-Reeves: $$\beta_k^{FR} = \frac{\nabla f(x_{k+1})^\top \nabla f(x_{k+1})}{\nabla f(x_k)^\top \nabla f(x_k)}$$

Polak-Ribière: $$\beta_k^{PR} = \frac{\nabla f(x_{k+1})^\top \left(\nabla f(x_{k+1}) - \nabla f(x_k)\right)}{\nabla f(x_k)^\top \nabla f(x_k)}$$

The choice of $\beta_k$ affects the convergence properties of the algorithm.
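To make the update rules concrete, here is a minimal, self-contained sketch of nonlinear conjugate gradient with Fletcher-Reeves updates and a backtracking line search, written in plain NumPy. This illustrates the algorithm only; it is not GTSAM's implementation, and the restart safeguard and Armijo constants are illustrative choices:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, max_iters=1000, tol=1e-8):
    """Nonlinear CG with Fletcher-Reeves updates and Armijo backtracking."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first direction: steepest descent
    for _ in range(max_iters):
        if g.dot(d) >= 0:  # safeguard: restart if d is not a descent direction
            d = -g
        # Backtracking (Armijo) line search for the step size alpha_k.
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        # Fletcher-Reeves: beta_k = ||grad_{k+1}||^2 / ||grad_k||^2
        beta = g_new.dot(g_new) / g.dot(g)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function, whose minimum is at (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(nonlinear_cg(f, grad, [-1.2, 1.0], max_iters=20000))
```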
Key features:
- Optimization Method: Implements the nonlinear conjugate gradient method, which is an extension of the linear conjugate gradient method to nonlinear optimization problems.
- Efficiency: Suitable for large-scale problems due to its iterative nature and reduced memory requirements compared to methods that require the Hessian matrix.
- Flexibility: Can be used with various line search strategies and conjugate gradient update formulas.
Key Methods¶
Please see the base class NonlinearOptimizer.
Parameters¶
The nonlinear conjugate gradient optimizer uses the standard optimization parameters inherited from NonlinearOptimizerParams, which include:
- Maximum iterations
- Relative and absolute error thresholds
- Error function verbosity
- Linear solver type
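As a concrete illustration, the sketch below sets these parameters and runs the optimizer on a toy one-factor problem using GTSAM's Python bindings. The constructor and setter names here are assumed from recent GTSAM releases and may differ slightly in your version:

```python
import gtsam

# Toy problem: a single prior factor pulling a 2D point toward (10, 10).
graph = gtsam.NonlinearFactorGraph()
noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)
graph.add(gtsam.PriorFactorPoint2(1, gtsam.Point2(10.0, 10.0), noise))

initial = gtsam.Values()
initial.insert(1, gtsam.Point2(0.0, 0.0))

# Standard parameters inherited from NonlinearOptimizerParams.
params = gtsam.NonlinearOptimizerParams()
params.setMaxIterations(100)
params.setRelativeErrorTol(1e-5)
params.setAbsoluteErrorTol(1e-5)

optimizer = gtsam.NonlinearConjugateGradientOptimizer(graph, initial, params)
result = optimizer.optimize()
print(result.atPoint2(1))  # should approach (10, 10)
```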
Usage Notes¶
- The NonlinearConjugateGradientOptimizer is most effective when the problem size is large and the computation of the Hessian is impractical.
- Users should choose an appropriate line search method and conjugate gradient update formula based on the specific characteristics of their optimization problem.
- Monitoring the error and gradient values during optimization can provide insight into convergence behavior and help diagnose potential issues; a minimal monitoring loop is sketched below.
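Continuing the example from the Parameters section, one way to monitor convergence is to drive the optimizer step by step, assuming the Python bindings expose iterate() and error() from the NonlinearOptimizer base class:

```python
# Step the optimizer manually and watch the error decrease.
optimizer = gtsam.NonlinearConjugateGradientOptimizer(graph, initial, params)
prev_error = optimizer.error()
for i in range(100):
    optimizer.iterate()            # perform one conjugate gradient step
    error = optimizer.error()      # total graph error at the current values
    print(f"iter {i:3d}: error = {error:.6e}")
    if abs(prev_error - error) < 1e-9:  # simple convergence check
        break
    prev_error = error
```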