Overview¶
QpProblem represents a quadratic program over direct Vector and Matrix entries in Values. It combines affine quadratic costs with linear equality and inequality constraints, then solves them with an active-set method.
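In standard form, this is a convex QP. As a sketch (the symbol names here are mine, not names from the API, and the sign convention assumes the usual GTSAM `HessianFactor` error $\tfrac12 x^\top G x - g^\top x$):

$$
\min_{x}\ \tfrac{1}{2}\, x^\top G x - g^\top x
\quad \text{subject to} \quad
A_e x = b_e, \qquad A_i x \le b_i,
$$

where $G$ and $g$ are assembled from the quadratic costs and the constraint rows $A_e, A_i$ come from the attached Jacobian rows.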
The default QP path uses sparse active-set subproblems. Dense active-set subproblems are available explicitly through QpSolverType::Dense or ActiveSetSolverParams::QpSubproblemSolver::Dense.
```python
import gtsam
import numpy as np
```

Problem Model¶
A QpProblem stores three pieces:
- quadratic costs added with `addCost(QpCost)` or `addCost(HessianFactor)`,
- equality constraints added with `LinearConstraint::Equal(...)`,
- inequality constraints added with `LinearConstraint::LessEqual(...)` or `LinearConstraint::GreaterEqual(...)`.
The cost is represented by GTSAM Gaussian/Hessian factors. Linear constraints use JacobianFactor rows and are interpreted as expressions of the form A*x - b with the selected relation. Matrix values are supported by column-major vectorization.
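As a concrete illustration of these two conventions (plain NumPy, not the GTSAM API; the values are made up), a Matrix value is flattened column by column, and a constraint row set is evaluated as `A @ x - b` with the chosen relation:

```python
import numpy as np

# Hypothetical 2x2 Matrix value, vectorized column-major (Fortran order).
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = M.flatten(order="F")      # columns stacked: [1, 3, 2, 4]

# A LessEqual-style constraint interpreted as A*x - b <= 0.
A = np.eye(4)
b = np.full(4, 4.0)
residual = A @ v - b          # all entries <= 0, so the constraint holds at v
```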
Minimal C++ Example¶
```cpp
using namespace gtsam;

const Key x = Symbol('x', 0);

QpProblem problem;
problem.addCost(HessianFactor(
    x, Matrix::Identity(2, 2),
    (Vector(2) << 2.0, 3.0).finished(), 13.0));
problem.addConstraint(LinearConstraint::LessEqual(JacobianFactor(
    x, Matrix::Identity(2, 2),
    (Vector(2) << 1.0, 1.0).finished())));

Values initial;
initial.insert(x, (Vector(2) << 0.0, 0.0).finished());
Values result = problem.optimize(initial);
```

`problem.optimize(initial)` uses the sparse active-set QP mode by default. If no initial point is supplied, the solver first runs a phase-I LP to find a vector-valued feasible point.
Minimal Python Example¶
```python
from gtsam.symbol_shorthand import X

x = X(0)
model = gtsam.noiseModel.Unit.Create(2)

problem = gtsam.QpProblem()
problem.addCost(gtsam.HessianFactor(x, np.eye(2), np.array([2.0, 3.0]), 13.0))
problem.addConstraint(
    gtsam.LinearConstraint.LessEqual(
        gtsam.JacobianFactor(x, np.eye(2), np.ones(2), model)
    )
)

initial = gtsam.Values()
initial.insert(x, np.zeros(2))
result = problem.optimize(initial, gtsam.QpSolverType.Sparse)
result.atVector(x)
```

```
array([1., 1.])
```

Solver Choices¶
QpProblem has two convenience solver choices:
```cpp
Values sparse = problem.optimize(initial, QpSolverType::Sparse);
Values dense = problem.optimize(initial, QpSolverType::Dense);
```

Both choices are active-set QP solvers. They differ only in the equality-constrained QP subproblem solver used inside each active-set iteration.
- `QpSolverType::Sparse` uses sparse working factor graphs. It rebuilds the graph containing the objective and the active constraints, then eliminates that graph.
- `QpSolverType::Dense` uses dense subproblems. It factors the QP Hessian once, then solves dense Schur systems over the active constraints.
The same choice is available directly on ActiveSetSolver:
```cpp
auto params = std::make_shared<ActiveSetSolverParams>();
params->qpSubproblemSolver =
    ActiveSetSolverParams::QpSubproblemSolver::Dense;
Values result = ActiveSetSolver(problem, params).optimize(initial);
```

Use `ActiveSetSolver` directly when you need state access or parameter control, and set the QP subproblem mode explicitly.
Dense Versus Sparse Subproblems¶
Both modes follow the same active-set loop:
1. Solve the equality-constrained QP for the current working set.
2. If the candidate step violates an inactive inequality, step to the first blocking row and activate it.
3. If the step is tiny, inspect the active multipliers and drop the active inequality whose multiplier has the wrong sign.
4. Stop when the step is tiny and all active inequality multipliers have the correct sign.
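The loop above can be sketched in plain NumPy for an inequality-only problem `A x <= b`, under the cost convention `0.5 x'Gx - g'x` (a reconstruction of the textbook primal active-set method, not GTSAM code). On the two-variable example from the earlier sections it reproduces `[1, 1]`:

```python
import numpy as np

def active_set_qp(G, g, A, b, x, W=None, iters=50):
    """Minimize 0.5 x'Gx - g'x subject to A x <= b, starting at feasible x."""
    n = len(x)
    W = set() if W is None else set(W)
    for _ in range(iters):
        rows = sorted(W)
        # Step 1: equality-constrained QP over the current working set (KKT system).
        if rows:
            Aw = A[rows]
            K = np.block([[G, Aw.T], [Aw, np.zeros((len(rows), len(rows)))]])
            rhs = np.concatenate([g - G @ x, np.zeros(len(rows))])
        else:
            K, rhs = G, g - G @ x
        sol = np.linalg.solve(K, rhs)
        p, lam = sol[:n], sol[n:]
        if np.linalg.norm(p) < 1e-10:
            # Steps 3-4: optimal if every active multiplier is nonnegative,
            # otherwise drop the row with the most negative multiplier.
            if not rows or lam.min() >= -1e-10:
                return x, W
            W.remove(rows[int(np.argmin(lam))])
            continue
        # Step 2: step to the first blocking inactive inequality and activate it.
        alpha, block = 1.0, None
        for i in range(len(b)):
            if i not in W and A[i] @ p > 1e-12:
                ratio = (b[i] - A[i] @ x) / (A[i] @ p)
                if ratio < alpha:
                    alpha, block = ratio, i
        x = x + alpha * p
        if block is not None:
            W.add(block)
    return x, W

# Two-variable example: min 0.5||x||^2 - [2,3]'x  subject to  x <= 1.
x_opt, W = active_set_qp(np.eye(2), np.array([2.0, 3.0]),
                         np.eye(2), np.ones(2), np.zeros(2))
```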
Dense mode changes step 1. It caches a Hessian factorization and computes the active-set KKT solution through a dense Schur complement. Sparse mode solves the same subproblem by building and eliminating a sparse factor graph for the current working set.
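A sketch of the dense subproblem in NumPy, assuming the usual KKT form for `min 0.5 x'Gx - g'x s.t. A x = b` (this illustrates the Schur-complement idea, not the actual GTSAM implementation): the Hessian is factored once and every working set reuses that factor.

```python
import numpy as np

def cached_factor(G):
    # Factor the Hessian once, as dense mode does.
    return np.linalg.cholesky(G)

def chol_solve(L, B):
    # Solve G Z = B given the cached Cholesky factor L (G = L L').
    return np.linalg.solve(L.T, np.linalg.solve(L, B))

def eqp_schur(L, g, A, b):
    # Equality-constrained QP: min 0.5 x'Gx - g'x  subject to  A x = b.
    Ginv_g = chol_solve(L, g)
    Ginv_At = chol_solve(L, A.T)
    S = A @ Ginv_At                          # Schur complement A G^-1 A'
    lam = np.linalg.solve(S, A @ Ginv_g - b)
    return Ginv_g - Ginv_At @ lam, lam

G = np.eye(2)
g = np.array([2.0, 3.0])
L = cached_factor(G)                         # reused across active-set changes
x1, _ = eqp_schur(L, g, np.array([[0.0, 1.0]]), np.array([1.0]))  # one active row
x2, _ = eqp_schur(L, g, np.eye(2), np.ones(2))                    # two active rows
```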
Timing Snapshot¶
The following timings were measured on this local build on May 11, 2026 with timing/timeActiveSetSolverComparison. Lower time is better. Sparse / dense below 1 means sparse mode was faster; above 1 means dense mode was faster.
The timing program prints two scenarios:
- `sparse_chain_cold`: a one-dimensional sparse chain QP with one upper-bound inequality every four variables. The benchmark constructs a solver for each solve, which exposes the cost of dense factorization on a naturally sparse problem.
- `quadrotor_allocation_warm`: a four-variable motor allocation QP solved repeatedly with a reused solver and the previous active set. This is inspired by the Robotics Book section on vectoring using fast rotational dynamics, which maps desired thrust and roll/pitch torques to four motor thrusts. The benchmark adds motor thrust limits and a yaw row.
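To see why the chain scenario favors sparse subproblems, here is a hypothetical NumPy reconstruction of its Hessian structure (not the benchmark code): a 1-D chain of difference costs yields a tridiagonal Hessian, and one bound row every four variables matches the inequality counts in the table below.

```python
import numpy as np

n = 32
# Chain costs (x[i+1] - x[i])^2 plus a weak prior: the Hessian is tridiagonal.
G = 1e-3 * np.eye(n)
for i in range(n - 1):
    J = np.zeros(n)
    J[i], J[i + 1] = -1.0, 1.0
    G += np.outer(J, J)

density = np.count_nonzero(G) / G.size    # under 10% nonzero at n = 32
bound_rows = list(range(0, n, 4))         # one upper-bound inequality per 4 vars
```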
Sparse Chain¶
| Variables | Inequality rows | Final active rows | Repeats | Dense ms | Sparse ms | Sparse / dense |
|---|---|---|---|---|---|---|
| 32 | 8 | 8 | 25 | 0.528 | 0.333 | 0.631 |
| 64 | 16 | 16 | 25 | 1.252 | 0.477 | 0.381 |
| 128 | 32 | 32 | 10 | 3.942 | 0.868 | 0.220 |
| 256 | 64 | 64 | 10 | 15.920 | 1.871 | 0.118 |
| 512 | 128 | 128 | 3 | 63.709 | 3.962 | 0.062 |
| 1024 | 256 | 256 | 1 | 261.848 | 8.561 | 0.033 |
Sparse working-graph solves are substantially faster for this sparse-chain benchmark.
Dense-Win Quadrotor Allocation¶
Assume four normalized rotor thrusts with box bounds, a fixed arm length, a yaw moment scale, and a desired wrench of collective thrust plus roll, pitch, and yaw torques. The benchmark solves the least-squares allocation QP that maps this wrench to motor thrusts, subject to the thrust bounds.
The unconstrained allocation requests one rotor above its normalized limit, so one upper-bound row is active at the constrained solution.
| Variables | Inequality rows | Final active rows | Repeats | Dense µs | Sparse µs | Sparse / dense |
|---|---|---|---|---|---|---|
| 4 | 8 | 1 | 10000 | 2.071 | 7.094 | 3.425 |
Dense mode wins here because the problem is tiny and fully coupled: sparse graph setup dominates the solve, while dense mode reuses the Hessian factorization and solves a one-row active Schur system after warm start.
Practical Guidance¶
Use the default QpProblem::optimize(...) path for sparse active-set QP solving. This is the recommended starting point for naturally sparse chain, grid, SLAM-like, or banded problems.
Use QpSolverType::Dense or ActiveSetSolverParams::QpSubproblemSolver::Dense for small, dense, fixed-structure QPs where reuse matters more than sparsity. Tiny control allocation problems with box constraints, such as the quadrotor motor allocation example above, are a good fit when they are solved repeatedly and warm-started.
Keep sparse mode for larger problems whose Hessian and active constraints retain useful sparsity. If performance matters, benchmark both modes on the real problem structure. The active-set outer loop is shared, so differences usually come from Hessian sparsity, active constraint count, solver construction cost, and how many working-set changes occur before convergence.
State, Multipliers, and Warm Starts¶
optimizeWithState returns the optimized values and an ActiveSetSolverState. Equality and inequality multipliers are grouped by original constraint, with rows in the same order as the input constraint rows. activeInequalityRows stores the flattened row-level active set and can be passed back as a warm start:
```cpp
ActiveSetSolver solver(problem, params);
auto [firstValues, firstState] = solver.optimizeWithState(initial);
auto [secondValues, secondState] =
    solver.optimizeWithState(firstState.values, firstState);
```

Warm starts are most useful when solving related QPs with the same constraint layout.
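A sketch of why a warm start helps, in plain NumPy rather than the GTSAM API: when the carried-over active rows are already the optimal working set, a single KKT solve over those rows recovers the solution with nonnegative multipliers, so the active-set loop terminates immediately.

```python
import numpy as np

G = np.eye(2)
g = np.array([2.0, 3.0])           # cost 0.5 x'Gx - g'x
A = np.eye(2)
b = np.ones(2)                     # inequality x <= 1
active = [0, 1]                    # row-level active set from the previous solve

# One KKT solve over the warm-started working set.
Aw, bw = A[active], b[active]
K = np.block([[G, Aw.T], [Aw, np.zeros((len(active), len(active)))]])
sol = np.linalg.solve(K, np.concatenate([g, bw]))
x, lam = sol[:2], sol[2:]
# lam >= 0 confirms the carried-over working set is optimal; x is [1, 1].
```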