Overview
LpProblem represents a linear program over Vector and Matrix entries stored directly in Values. It combines linear costs with linear equality and inequality constraints and solves them with ActiveSetSolver.
LPs use the sparse active-set path. The dense QP subproblem option applies only to quadratic programs.
import gtsam
import numpy as np

Problem Model
An LpProblem stores:

- linear costs added with addCost(LpCost) or addCost(JacobianFactor),
- equality constraints added with LinearConstraint::Equal(...),
- inequality constraints added with LinearConstraint::LessEqual(...) or LinearConstraint::GreaterEqual(...).
The objective is linear in the vectorized values. Constraints use the same LinearConstraint class as QPs, so row ordering, signs, and matrix-value vectorization are consistent across LP and QP problems.
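The stored costs and constraints assemble into the standard LP form: minimize c^T x subject to equality rows E x = d and inequality rows A x <= b over the vectorized values. A tiny numeric sketch of that view in plain NumPy (illustrative values matching the example solved below, not the gtsam API):

```python
import numpy as np

# Standard-form view of the model (illustrative, not gtsam API):
#   minimize    c^T x
#   subject to  A x <= b   (inequality rows; equalities would add E x = d)
c = np.array([-1.0])                  # cost -x, so minimizing maximizes x
A = np.array([[1.0], [-1.0]])         # rows encode x <= 1 and -x <= 0
b = np.array([1.0, 0.0])

x_star = np.array([1.0])              # optimum of "maximize x on [0, 1]"
assert np.all(A @ x_star <= b + 1e-12)     # feasible
assert c @ x_star < c @ np.array([0.5])    # lower cost than an interior point
```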
Minimal C++ Example
using namespace gtsam;
const Key x = Symbol('x', 0);
LpProblem problem;
problem.addCost(JacobianFactor(
x, (Matrix(1, 1) << -1.0).finished(), Vector::Zero(1)));
problem.addConstraint(LinearConstraint::LessEqual(JacobianFactor(
x, Matrix::Identity(1, 1), (Vector(1) << 1.0).finished())));
problem.addConstraint(LinearConstraint::GreaterEqual(JacobianFactor(
x, Matrix::Identity(1, 1), (Vector(1) << 0.0).finished())));
Values initial;
initial.insert(x, (Vector(1) << 0.0).finished());
Values result = problem.optimize(initial);

This example maximizes x by minimizing -x subject to 0 <= x <= 1.
Minimal Python Example
from gtsam.symbol_shorthand import X
x = X(0)
model = gtsam.noiseModel.Unit.Create(1)
problem = gtsam.LpProblem()
problem.addCost(gtsam.JacobianFactor(x, np.array([[-1.0]]), np.zeros(1), model))
problem.addConstraint(
gtsam.LinearConstraint.LessEqual(
gtsam.JacobianFactor(x, np.eye(1), np.array([1.0]), model)
)
)
problem.addConstraint(
gtsam.LinearConstraint.GreaterEqual(
gtsam.JacobianFactor(x, np.eye(1), np.array([0.0]), model)
)
)
initial = gtsam.Values()
initial.insert(x, np.array([0.0]))
result = problem.optimize(initial)
result.atVector(x)
array([1.])

Solving an LP
The convenience methods delegate to ActiveSetSolver:
Values result = problem.optimize(initial);
Values resultWithoutInitial = problem.optimize();

When no initial point is supplied, the solver builds a phase-I LP with one slack variable to find a vector-valued feasible point. If the phase-I slack remains positive beyond phaseOneFeasibilityTolerance, the LP is reported infeasible.
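The phase-I construction can be sketched in plain NumPy (illustrative, not the gtsam API): add a single slack s, relax every inequality row to A x - s <= b, and minimize s. The original LP is feasible exactly when the optimal slack falls at or below the tolerance.

```python
import numpy as np

# Inequality rows A x <= b for the LP 0 <= x <= 1 (illustrative values).
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, 0.0])

x0 = np.array([5.0])                      # infeasible start (violates x <= 1)
s0 = max(0.0, float(np.max(A @ x0 - b)))  # smallest slack making (x0, s0) feasible

# (x0, s0) satisfies every relaxed row A x - s <= b by construction,
# so phase-I always has a starting point.
assert np.all(A @ x0 - s0 <= b + 1e-12)

# Phase-I then minimizes s. Here x = 0.5 with s = 0 is feasible, so the
# optimal slack is zero and the original LP is reported feasible.
assert np.all(A @ np.array([0.5]) - 0.0 <= b + 1e-12)
```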
Active-Set Behavior
For LPs, each active-set iteration projects the negative objective gradient onto the current active constraint surface. If that direction is blocked by a currently inactive inequality, the solver steps exactly to the first blocking row and activates it. If no inequality blocks the step, the objective decreases without bound along the direction and the solver reports the LP as unbounded.
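The blocking-row search described above is the classic ratio test. A minimal NumPy sketch over inequality rows A x <= b (names and values illustrative):

```python
import numpy as np

A = np.array([[1.0], [-1.0]])   # rows encode x <= 1 and -x <= 0
b = np.array([1.0, 0.0])

x = np.array([0.2])             # current feasible iterate
p = np.array([1.0])             # projected descent direction for cost -x

Ap = A @ p
blocking = Ap > 1e-12           # only rows whose slack shrinks along p can block
if not np.any(blocking):
    raise RuntimeError("unbounded: no inequality blocks the descent direction")

ratios = (b - A @ x)[blocking] / Ap[blocking]
alpha = float(ratios.min())     # step length to the first blocking row
x = x + alpha * p               # lands on x = 1; that row joins the active set
```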
The solver checks caller-provided initial values for feasibility. Infeasible initial values throw std::invalid_argument; unbounded LPs throw std::runtime_error.
Parameters and Warm Starts
Use ActiveSetSolverParams to tune tolerances and iteration limits:
auto params = std::make_shared<ActiveSetSolverParams>();
params->feasibilityTolerance = 1e-8;
params->multiplierTolerance = 1e-8;
ActiveSetSolver solver(problem, params);
auto [values, state] = solver.optimizeWithState(initial);

ActiveSetSolverState stores optimized values, row-ordered equality and inequality multipliers, the flattened active inequality rows, convergence status, and iteration count. Passing a previous state back to optimizeWithState warm-starts the active set when the constraint row layout is unchanged.
Practical Guidance
Use LpProblem for linear objectives with linear constraints. Use QpProblem when the objective has a quadratic Hessian term.
LP active-set performance depends strongly on the number of constraints that become active and on the sparsity of the constraint rows. Start with the default sparse active-set solver. Tighten tolerances only when the application needs stricter feasibility or multiplier checks.