Overview¶
The `NonlinearOptimizer` class in GTSAM is the base class for (batch) nonlinear optimization solvers. It provides the basic API for optimizing nonlinear factor graphs, which are commonly used in robotics and computer vision applications.
The primary purpose of the `NonlinearOptimizer` is to iteratively refine an initial estimate of a solution so as to minimize a nonlinear cost function. Specific optimization algorithms such as Gauss-Newton, Levenberg-Marquardt, and Dogleg are implemented in derived classes.
Mathematical Foundation¶
The optimization process in `NonlinearOptimizer` is based on iterative methods that solve for the minimum of a nonlinear cost function. The general approach involves linearizing the nonlinear problem at the current estimate and solving the resulting linear system to update the estimate. This process is repeated until convergence criteria are met.
The optimization problem can be formally defined as:

$$X^* = \arg\min_X \; \frac{1}{2} \sum_i \lVert r_i(X) \rVert^2$$

where $X$ is the vector of variables to be optimized, and $r_i(X)$ are the residuals of the factors in the graph.
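The linearize-and-solve loop described above can be sketched in plain NumPy. This is an illustrative Gauss-Newton iteration, not GTSAM code; the exponential model, data points, and starting estimate are invented for the example:

```python
import numpy as np

# Toy problem: fit parameters (a, b) of the model y = a * exp(b * t)
# to a few data points. r(X) is the stacked residual vector, so we
# minimize 1/2 * ||r(X)||^2, matching the objective above.

t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 2.7, 3.6, 4.9])

def residuals(x):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x):
    a, b = x
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(b * t)          # d r / d a
    J[:, 1] = a * t * np.exp(b * t)  # d r / d b
    return J

x = np.array([1.0, 0.1])  # initial estimate
for _ in range(50):
    r = residuals(x)
    J = jacobian(x)
    # Linearize at x and solve the normal equations J^T J dx = -J^T r
    dx = np.linalg.solve(J.T @ J, -J.T @ r)
    x = x + dx
    if np.linalg.norm(dx) < 1e-10:   # convergence criterion
        break

print(x)  # fitted (a, b), close to (2.0, 0.3)
```

Each pass of the loop is one "iteration" in the sense used throughout this page: linearize, solve the linear system, update the estimate, and check convergence.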
Key Methods¶
- The `optimize()` method is the core function of the `NonlinearOptimizer` class. It performs the optimization process, iteratively updating the estimate to converge to a local minimum of the cost function.
- The `error()` method computes the total error of the current estimate, typically the sum of squared errors over all factors in the graph. Mathematically, the error can be expressed as:

  $$E(X) = \frac{1}{2} \sum_i \lVert r_i(X) \rVert^2$$

  where $r_i(X)$ represents the residual error of the $i$-th factor.
- The `values()` method returns the current set of variable estimates. These estimates are updated during the optimization process.
- The `iterations()` method returns the number of iterations performed during the optimization process, which is useful for analyzing the convergence behavior of the optimizer.
- The `params()` method returns the parameters used by the optimizer, including settings such as convergence thresholds, maximum iterations, and other algorithm-specific options.
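As a concrete illustration of the total-error formula above, the snippet below sums squared residuals over a set of factors. The residual values are made up for the example, and GTSAM's actual `error()` additionally weights residuals by each factor's noise model:

```python
import numpy as np

# Hypothetical residuals of three factors at the current estimate X
# (values are invented for illustration):
residuals = [
    np.array([0.1, -0.2]),   # r_1(X)
    np.array([0.05]),        # r_2(X)
    np.array([0.3, 0.1]),    # r_3(X)
]

# Total error: E(X) = 1/2 * sum_i ||r_i(X)||^2
total_error = 0.5 * sum(float(r @ r) for r in residuals)
print(total_error)  # 0.07625
```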
Usage¶
The `NonlinearOptimizer` class is typically not used directly. Instead, one of its derived classes, such as `GaussNewtonOptimizer`, `LevenbergMarquardtOptimizer`, or `DoglegOptimizer`, is used to perform a specific type of optimization. These derived classes implement the `optimize()` method according to their respective algorithms.