4.5. Least Squares

Least squares is the basic model for fitting a linear predictor to observed data. The key idea is the residual: the gap between what the model predicts and what we actually observed.

The standard optimization model is

\[\begin{array}{ll} \min\limits_x & \|A x - b\|_2^2. \end{array}\]

Here \(A \in \mathbb{R}^{m \times n}\) is the design matrix, \(b \in \mathbb{R}^m\) is the observation vector, and \(x \in \mathbb{R}^n\) is the parameter vector we want to estimate.

If the model predicts \(A x\), then the residual is \(r = A x - b\). Least squares drives every entry of the residual toward zero at once by penalizing them all in a single quadratic objective.
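To make the residual concrete, here is a tiny NumPy sketch with made-up numbers (a 3-observation, 2-parameter system invented for illustration, separate from the example built below):

```python
import numpy as np

# A small made-up system: 3 observations, 2 parameters.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])
x = np.array([1.0, 1.0])   # a candidate parameter vector

r = A @ x - b              # residual: prediction minus observation
obj = np.sum(r ** 2)       # the least-squares objective ||Ax - b||_2^2

print(r)    # [ 0. -1.  0.]
print(obj)  # 1.0
```

Only the second entry of the prediction misses its observation, and the objective is the sum of the squared misses.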

Step 1: Generate a synthetic fitting problem

We create a random matrix \(A\), a hidden ground-truth parameter vector \(x_{\text{true}}\), and then build observations \(b = A x_{\text{true}} + \text{noise}\). That gives us data with a clear linear signal plus a small amount of measurement noise.

import numpy as np
import admm

np.random.seed(1)

m = 40
n = 12
A = np.random.randn(m, n)
x_true = np.random.randn(n)
b = A @ x_true + 0.1 * np.random.randn(m)

Step 2: Create the model and decision variable

The unknown quantity is the coefficient vector \(x\). In the symbolic API, admm.Var("x", n) creates a length-\(n\) vector variable.

model = admm.Model()
x = admm.Var("x", n)

Step 3: Write the residual expression and objective

The prediction produced by the current candidate vector is A @ x. Subtracting the observed data b gives the residual vector.

residual = A @ x - b
model.setObjective(admm.sum(admm.square(residual)))

The first line is the code version of \(r = A x - b\). The second line squares those residual entries and sums them, which is exactly the symbolic form of minimizing \(\|A x - b\|_2^2\).
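As a side note on what "minimizing \(\|A x - b\|_2^2\)" means at the optimum: the minimizer satisfies the normal equations \(A^\top A x = A^\top b\), i.e. the residual is orthogonal to the columns of \(A\). A quick NumPy check (independent of the admm API, on freshly generated random data) confirms this with NumPy's own least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 12))
b = rng.standard_normal(40)

# Minimizer of ||Ax - b||_2^2 via NumPy's dense least-squares routine.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# At the minimum, the residual is orthogonal to the columns of A,
# i.e. the normal equations A^T A x = A^T b hold.
r = A @ x_ls - b
print(np.allclose(A.T @ r, 0))  # True
```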

Step 4: Add constraints

This least-squares example has no explicit constraints, so there are no model.addConstr(...) calls. Once the objective is set, the model is fully specified.

Step 5: Solve and inspect the result

After optimizing, we print the standard solver outputs.

model.optimize()

print(" * model.ObjVal: ", model.ObjVal)        # Expected: 0.2947794914868591
print(" * model.StatusString: ", model.StatusString)  # Expected: SOLVE_OPT_SUCCESS

Complete runnable example:

import numpy as np
import admm

np.random.seed(1)

m = 40
n = 12
A = np.random.randn(m, n)
x_true = np.random.randn(n)
b = A @ x_true + 0.1 * np.random.randn(m)

model = admm.Model()
x = admm.Var("x", n)
model.setObjective(admm.sum(admm.square(A @ x - b)))
model.optimize()

print(" * model.ObjVal: ", model.ObjVal)        # Expected: 0.2947794914868591
print(" * model.StatusString: ", model.StatusString)  # Expected: SOLVE_OPT_SUCCESS

This example is available as a standalone script in the examples/ folder of the ADMM repository:

python examples/least_squares.py

The solution \(x\) minimizes \(\|Ax - b\|_2^2\): one line of ADMM code for the objective, with no manual normal-equation derivation needed.
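For an independent sanity check outside the admm API, the same synthetic data can be solved with NumPy's dense least-squares routine; since both solvers minimize the same objective on the same data, the resulting objective value should agree with the model.ObjVal reported above:

```python
import numpy as np

# Rebuild exactly the same synthetic data as the example above.
np.random.seed(1)
m, n = 40, 12
A = np.random.randn(m, n)
x_true = np.random.randn(n)
b = A @ x_true + 0.1 * np.random.randn(m)

# np.linalg.lstsq returns the minimizer and (for full-rank, m > n)
# the sum of squared residuals at the optimum.
x_ls, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
obj = np.sum((A @ x_ls - b) ** 2)
print(obj)  # should match model.ObjVal from the admm run above
```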