3.7. Constraints¶
Constraints are added with Model.addConstr(). In practice, models use a mix
of affine equalities and inequalities, norm-ball constraints, and cone constraints.
Use explicit constraints for relationships between expressions.
For intrinsic structure such as nonnegativity, symmetry, or PSD / NSD form, it is often cleaner to declare the property directly on the variable as described in Variables.
3.7.1. How to Choose the Direct Form¶
When you translate math into code, this teaching rule keeps models readable:
If a property belongs to one variable by design, prefer a variable attribute.
If it relates multiple expressions, add it with
Model.addConstr(). Write the constraint in the same form you would say it out loud:
for example, prefer admm.norm(x, ord=2) <= r over a more opaque algebraic rewrite.
The library can lower several equivalent forms, but the direct expression is usually
easier to read, review, and debug.
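The "several equivalent forms" point is easy to see for the L2 ball: the direct form and its squared rewrite describe exactly the same feasible set. A quick standalone NumPy check (this uses NumPy only, not the library API):

```python
import numpy as np

# Membership in {x : ||x||_2 <= r} is unchanged by squaring both sides,
# since t -> t**2 is monotone for t >= 0.
rng = np.random.default_rng(0)
r = 1.0
for _ in range(1000):
    x = rng.normal(size=3)
    direct = np.linalg.norm(x) <= r       # norm(x, ord=2) <= r
    rewritten = np.sum(x**2) <= r**2      # sum(square(x)) <= r**2
    assert direct == rewritten
```

Both forms define the same set, so the choice is purely about readability; the direct form wins.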
The table below lists common patterns; the examples are illustrative variants of the forms used in the walkthrough below.

| Pattern | Example |
|---|---|
| affine equality or inequality | `model.addConstr(x == x_target)` |
| aggregate linear constraint | `model.addConstr(admm.sum(x) == 1)` |
| elementwise bounds | `model.addConstr(x >= 0)` |
| L2 / SOC norm ball | `model.addConstr(admm.norm(x, ord=2) <= 1.0)` |
| squared L2 norm ball | `model.addConstr(admm.sum(admm.square(x)) <= 1.0)` |
| other norm balls | `model.addConstr(admm.norm(x, ord=1) <= 1.2)` (likewise `ord='fro'`, `ord='nuc'`) |
| semidefinite cone | `model.addConstr(X >> 0)` |
| negative semidefinite cone | `model.addConstr(X << 0)` |
3.7.2. Walkthrough: Projecting Onto a Feasible Set¶
The example below shows several common constraint families working together in one model.
The idea is simple: choose a vector x and a symmetric matrix X that stay as close
as possible to chosen targets, while also satisfying a collection of feasibility rules.
Read the model in three parts:
- the objective measures distance to x_target and X_target
- the vector constraints keep x nonnegative and inside L1 and L2 norm balls
- the matrix constraints keep X inside nuclear-norm, Frobenius-norm, and PSD cones
Because the targets are not automatically feasible, solving the model effectively projects them back onto the feasible set described by the constraints.
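For intuition, projection onto a single L2 ball has a closed form: keep the point if it is inside, otherwise rescale it onto the boundary. A minimal NumPy sketch (illustrative only; the solver must handle all constraints jointly, which has no such simple formula):

```python
import numpy as np

def project_l2_ball(v, r):
    """Project v onto {x : ||x||_2 <= r}: rescale when outside, else keep."""
    n = np.linalg.norm(v)
    return v if n <= r else v * (r / n)

x_target = np.array([2.0, 1.0])
p = project_l2_ball(x_target, 1.0)   # lands on the unit sphere, same direction
```

The model below intersects several such sets at once, which is why a solver is needed rather than composing single-set projections.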
The complete model can be written as

\[
\begin{aligned}
\min_{x,\,X}\quad & \|x - x_{\text{target}}\|_2^2 + \|X - X_{\text{target}}\|_F^2 \\
\text{s.t.}\quad & x \ge 0,\quad \|x\|_1 \le 1.2,\quad \|x\|_2 \le 1, \\
& \|X\|_{\text{nuc}} \le 1.1,\quad \|X\|_F \le 1,\quad X \succeq 0,
\end{aligned}
\]

where \(x\) is the vector variable, \(X\) is the symmetric matrix variable, and \(x_{\text{target}}\), \(X_{\text{target}}\) are the target values we would like to match as closely as feasibility allows.

Complete runnable example:
import admm
import numpy as np
x_target = np.array([2.0, 1.0])
X_target = np.array([[2.0, 0.0], [0.0, 1.0]])
model = admm.Model()
x = admm.Var("x", 2)
X = admm.Var("X", 2, 2, symmetric=True)
model.setObjective(
admm.sum(admm.square(x - x_target))
+ admm.sum(admm.square(X - X_target))
)
model.addConstr(x >= 0)
model.addConstr(admm.norm(x, ord=1) <= 1.2)
model.addConstr(admm.norm(x, ord=2) <= 1.0)
model.addConstr(admm.norm(X, ord='nuc') <= 1.1)
model.addConstr(admm.norm(X, ord='fro') <= 1.0)
model.addConstr(X >> 0)
model.optimize()
print(" * model.StatusString: ", model.StatusString) # Expected: SOLVE_OPT_SUCCESS
print(" * model.ObjVal: ", round(model.ObjVal, 6)) # Expected: around 3.462854
print(" * x.X: ", np.round(np.asarray(x.X), 6)) # Expected: [0.974155 0.225845]
print(" * X.X: ", np.round(np.asarray(X.X), 6)) # Expected: [[0.99441 0. ] [0. 0.10559]]
This example is available as a standalone script in the examples/ folder of the ADMM repository:
python examples/constraints_projection.py
This is a projection problem: find the point nearest to \((x_{\text{target}}, X_{\text{target}})\) inside the feasible set defined by all constraints simultaneously. The solution generally differs from the targets because multiple constraints are active at once.
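As a sanity check, the printed solution can be tested against each constraint with plain NumPy, using the rounded values reported above:

```python
import numpy as np

# Rounded solution values printed by the example above
x_sol = np.array([0.974155, 0.225845])
X_sol = np.array([[0.99441, 0.0], [0.0, 0.10559]])
tol = 1e-4  # allow for the 6-decimal rounding

assert (x_sol >= -tol).all()                        # x >= 0
assert np.abs(x_sol).sum() <= 1.2 + tol             # L1 ball (active)
assert np.linalg.norm(x_sol) <= 1.0 + tol           # L2 ball (active)
eigs = np.linalg.eigvalsh(X_sol)
assert np.abs(eigs).sum() <= 1.1 + tol              # nuclear norm (active)
assert np.linalg.norm(X_sol, "fro") <= 1.0 + tol    # Frobenius norm (active)
assert eigs.min() >= -tol                           # X is PSD

obj = np.sum((x_sol - np.array([2.0, 1.0])) ** 2) \
    + np.sum((X_sol - np.array([[2.0, 0.0], [0.0, 1.0]])) ** 2)
print(round(obj, 6))  # 3.462854, matching model.ObjVal
```

Within rounding, all four norm constraints are active at the optimum, which is why the solution differs from both targets at once.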
In practice, write constraints in their natural form (admm.norm(x, ord=2) <= r)
and intrinsic structure as variable attributes (PSD=True), so the code mirrors the math directly.
For the broader formulation boundary, see Supported Problem Structure.