# 4. Examples
Use the table below to find the example that best matches your formulation.
| If your formulation contains | Start with |
|---|---|
| affine objective with linear constraints | |
| convex quadratic objective with affine constraints | |
| PSD matrix variable or semidefinite constraint | |
| norm constraint such as a Euclidean norm bound | |
| pure data fitting with squared residuals | |
| squared residuals plus quadratic shrinkage | |
| robust regression with a smooth outlier-resistant loss | |
| smooth loss plus sparsity regularization | |
| hinge-loss classification with sparse coefficients | |
| robust regression with asymmetric residual treatment | |
| low-rank plus sparse matrix decomposition | |
| PSD matrix model with a log-determinant term | |
| maximum-entropy distribution under affine constraints | |
| sparse event recovery with box-constrained coefficients | |
| logarithmic utility with a single resource budget | |
| blurred observation with convolution and edge-preserving regularization | |
| quadratic formulation with budget and nonnegativity constraints | |
| exact sparsity penalty through a custom proximal operator | |
| cardinality budget enforced as a hard sparse-set projection | |
| stronger-than-L1 nonconvex sparsity regularization | |
| exact group sparsity applied block by block | |
| explicit low-rank promotion with a custom singular-value thresholding step | |
| hard rank cap enforced by truncated SVD projection | |
| fixed-norm vector structure enforced on the unit sphere | |
| orthonormal-column matrix structure enforced by a Stiefel projection | |
| simplex feasibility modeled through a custom projection operator | |
| binary-valued decision vector modeled through a custom projection | |
| sparse regression with L0 penalty and linear constraints (UDF + constraints) | |
| robust regression with a smooth bounded-gradient loss (gradient UDF) | |
| heavy-tailed robustness with a redescending influence function (gradient UDF) | |
| conditional quantile estimation with a smooth pinball loss (gradient UDF) | |
| precision-focused regression with steeper small-error gradients (gradient UDF) | |
| edge-preserving signal denoising with a differentiable TV penalty (gradient UDF) | |
| GLM for positive-valued data with a Gamma deviance (gradient UDF) | |
## Core Convex Forms
These examples establish the basic affine, quadratic, and conic formulations used throughout the rest of this section. Use this group when you want the cleanest entry points to standard convex templates before moving on to more application-shaped formulations.
| Example | Main structure |
|---|---|
| | affine objective, linear inequalities, and nonnegativity constraints |
| | convex quadratic objective with affine equalities and inequalities |
| | PSD matrix variable with affine trace constraints |
| | Euclidean norm constraints coupled with affine structure |
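As a concrete instance of the quadratic template above, an equality-constrained convex QP reduces to a single KKT linear system. The sketch below is illustrative NumPy with hypothetical data, not this library's API:

```python
import numpy as np

# Equality-constrained QP: minimize 0.5 * x^T P x + q^T x  subject to  A x = b.
# The KKT optimality conditions reduce to one symmetric linear system.
P = np.array([[2.0, 0.0],
              [0.0, 2.0]])        # positive definite Hessian
q = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])        # single equality: x1 + x2 = 1
b = np.array([1.0])

n, m = P.shape[0], A.shape[0]
KKT = np.block([[P, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(KKT, rhs)
x, nu = sol[:n], sol[n:]          # primal solution and equality multiplier
```

For this data the optimum is x = (0, 1) with multiplier 2. Once inequality constraints enter, a direct solve no longer suffices and an iterative method is needed, which is where the templates in this section come in.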
## Data Fitting
These examples focus on regression and classification formulations that arise in statistical modeling and machine learning. Use this group when your objective is a sum of residuals or losses, possibly with regularization.
| Example | Main structure |
|---|---|
| | unconstrained minimization of squared residuals |
| | least squares with L2 shrinkage |
| | robust regression with Huber loss |
| | logistic loss with L1 regularization |
| | hinge loss with sparse coefficients |
| | asymmetric absolute loss for quantile estimation |
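As a sanity check for the L2-shrinkage entry, ridge regression has a closed form via the regularized normal equations. A minimal NumPy sketch with synthetic data:

```python
import numpy as np

# Ridge regression: minimize ||X w - y||^2 + lam * ||w||^2.
# The regularized normal equations give the solution in closed form.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))      # synthetic design matrix
w_true = np.array([1.0, -2.0, 0.5])   # generating coefficients
y = X @ w_true

lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
```

With a small `lam` the solution shrinks only slightly toward zero, so it stays close to the generating coefficients here; the other losses in this group have no such closed form and are solved iteratively.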
## Structured Matrix Problems
These examples involve matrix-valued decision variables with structural constraints such as low rank, sparsity, or positivity. Use this group when the decision variable is naturally a matrix and the formulation involves spectral or entrywise structure.
| Example | Main structure |
|---|---|
| | low-rank plus sparse matrix decomposition |
| | sparse inverse covariance estimation with log-determinant |
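The low-rank block in a decomposition like the first entry is typically handled by singular-value thresholding, the proximal operator of the nuclear norm. An illustrative NumPy sketch with synthetic data (not this library's API):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the prox of tau * ||.||_* at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Synthetic rank-1 matrix plus small noise; thresholding removes the
# noise-driven singular values and keeps only the dominant component.
rng = np.random.default_rng(1)
u = rng.standard_normal(8)
L = np.outer(u, u)
M = L + 0.01 * rng.standard_normal((8, 8))
L_hat = svt(M, 0.5)
```

Thresholding at `tau` zeroes every singular value below `tau`, so the small noise modes vanish and the recovered matrix is exactly rank one.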
## Applications
These examples show how convex templates appear in domain-specific contexts such as information theory, signal processing, and finance. Use this group when you want to see how the same mathematical patterns translate into practical models.
| Example | Main structure |
|---|---|
| | maximum-entropy distribution under affine constraints |
| | sparse event recovery with box constraints |
| | logarithmic utility with a single resource budget |
| | blurred observation with convolution and edge-preserving regularization |
| | mean-variance allocation with budget and nonnegativity constraints |
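For the logarithmic-utility entry, the single-budget case has a proportional-allocation closed form that drops out of the KKT conditions: maximizing \(\sum_i a_i \log x_i\) subject to \(\sum_i x_i = B\), \(x \ge 0\) gives \(x_i = a_i B / \sum_j a_j\). A small NumPy sketch with hypothetical weights:

```python
import numpy as np

# Logarithmic utility with a single budget:
#   maximize sum_i a_i * log(x_i)  subject to  sum_i x_i = B, x >= 0.
# The KKT conditions give the proportional allocation x_i = a_i * B / sum(a).
a = np.array([1.0, 2.0, 3.0])   # hypothetical utility weights
B = 6.0                         # hypothetical budget
x = a * B / a.sum()
print(x)  # -> [1. 2. 3.]
```

With additional constraints (boxes, multiple resources) the closed form disappears and the problem is solved with the same iterative templates as the rest of this group.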
## Examples with User-Defined Proximal Functions
These examples show how to extend ADMM when the modeling pattern is a strong fit but one proximal term is not available as a built-in atom. They cover custom sparsity penalties, low-rank and manifold-style constraints, and projection-based indicators. Most of the nonconvex UDFs below fall outside the disciplined convex programming rules enforced by tools such as CVXPY. In those cases the solver acts as a practical local method, and the result should be interpreted as a locally optimal solution or stationary point rather than a globally optimal one. Use this group when you need a custom proximal block but still want to stay inside the same symbolic modeling workflow.
| Example | Main structure |
|---|---|
| | exact sparsity penalty via hard thresholding |
| | cardinality budget enforced by sparse-set projection |
| | nonconvex sparsity promotion stronger than L1 |
| | exact block sparsity through groupwise proximal updates |
| | explicit low-rank promotion by singular-value thresholding |
| | hard rank cap enforced by truncated SVD projection |
| | fixed-norm vector feasibility on the unit sphere |
| | orthonormal-column matrix feasibility on the Stiefel manifold |
| | simplex projection for probability-style vectors |
| | binary-valued vector feasibility by coordinatewise projection |
| | combining a UDF with a sensing matrix and linear constraints |
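Two of the projection-style blocks above are short enough to sketch directly: hard thresholding is the exact proximal operator of the scaled L0 penalty, and the sparse-set projection keeps the k largest-magnitude entries. Illustrative NumPy, not the library's UDF interface:

```python
import numpy as np

def prox_l0(v, t, lam):
    """Prox of g(x) = lam * ||x||_0 with step t: hard thresholding."""
    thr = np.sqrt(2.0 * t * lam)
    return np.where(np.abs(v) > thr, v, 0.0)

def proj_ksparse(v, k):
    """Euclidean projection onto {x : ||x||_0 <= k}: keep k largest entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]   # indices of the k largest magnitudes
    out[idx] = v[idx]
    return out

v = np.array([3.0, 0.1, -2.0, 0.05])
print(prox_l0(v, t=1.0, lam=0.5))   # -> [ 3.  0. -2.  0.]
print(proj_ksparse(v, 2))           # -> [ 3.  0. -2.  0.]
```

Both operators are nonconvex pieces, which is why the introduction to this group treats the resulting ADMM iteration as a local method rather than a globally convergent one.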
## Examples with User-Defined Smooth Functions
These examples demonstrate the grad UDF path: instead of deriving a proximal operator, you supply only eval (the function value) and grad (the gradient), and the C++ backend solves the proximal subproblem automatically via gradient descent with an Armijo backtracking line search. This path is ideal for smooth custom losses whose proximal operator has no convenient closed form: robust losses from statistics, domain-specific objectives from machine learning, GLM deviance functions, and structural penalties such as total variation. Each example below is a complete, self-contained formulation that composes the custom loss with standard ADMM atoms and constraints.
| Example | Main structure |
|---|---|
| | smooth L1 approximation with bounded gradient \(\tanh(r)\) |
| | heavy-tailed loss with redescending influence function |
| | differentiable pinball loss for conditional quantile estimation |
| | precision-focused loss from face landmark localization |
| | differentiable TV penalty with structural (non-elementwise) gradient |
| | GLM with Gamma deviance and log link for positive responses |
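The subproblem solved on this path can be sketched in a few lines: gradient descent with Armijo backtracking applied to \(f(x) + \tfrac{1}{2t}\|x - v\|^2\), here using the log-cosh smooth-L1 loss whose gradient is the bounded \(\tanh\) from the first row. An illustrative Python sketch with synthetic data; the library's actual backend does this in C++:

```python
import numpy as np

# Solve the proximal subproblem  min_x f(x) + (1/(2*t)) * ||x - v||^2
# given only f and grad f, via gradient descent with Armijo backtracking.
def f(x, y):
    return np.sum(np.log(np.cosh(x - y)))     # smooth-L1 (log-cosh) loss

def grad_f(x, y):
    return np.tanh(x - y)                     # bounded gradient

def prox_via_gd(v, t, y, step0=1.0, beta=0.5, c=1e-4, iters=100):
    obj = lambda z: f(z, y) + np.sum((z - v) ** 2) / (2.0 * t)
    x = v.copy()
    for _ in range(iters):
        g = grad_f(x, y) + (x - v) / t        # gradient of the subproblem
        s = step0
        # Armijo test: backtrack until sufficient decrease along -g
        while obj(x - s * g) > obj(x) - c * s * np.dot(g, g):
            s *= beta
        x = x - s * g
    return x

y = np.array([1.0, -1.0, 0.0])
v = np.array([0.0, 0.0, 5.0])
x = prox_via_gd(v, t=0.5, y=y)
```

The quadratic term makes the subproblem strongly convex, so plain backtracking gradient descent converges reliably even where f itself is flat; at the solution the first-order condition \(\tanh(x - y) + (x - v)/t \approx 0\) holds.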
For detailed symbol documentation, see the Python API.