4. Examples

Use the table below to find the example that best matches your formulation.

Model pattern lookup

| If your formulation contains | Start with |
| --- | --- |
| affine objective with linear constraints | Linear Program |
| convex quadratic objective with affine constraints | Quadratic Program |
| PSD matrix variable or semidefinite constraint | Semidefinite Program |
| norm constraint such as `admm.norm(A @ x + b, ord=2) <= c.T @ x + d` | Second-Order Cone Program |
| pure data fitting with squared residuals | Least Squares |
| squared residuals plus quadratic shrinkage | Ridge Regression |
| robust regression with a smooth outlier-resistant loss | Huber Regression |
| smooth loss plus sparsity regularization | Sparse Logistic Regression |
| hinge-loss classification with sparse coefficients | SVM (L1) |
| robust regression with asymmetric residual treatment | Quantile Regression |
| low-rank plus sparse matrix decomposition | Robust PCA |
| PSD matrix model with `log_det()` and sparsity | Sparse Inverse Covariance |
| maximum-entropy distribution under affine constraints | Entropy Maximization |
| sparse event recovery with box-constrained coefficients | Fault Detection |
| logarithmic utility with a single resource budget | Water Filling |
| blurred observation with convolution and edge-preserving regularization | Image Deblurring |
| quadratic formulation with budget and nonnegativity constraints | Portfolio Optimization |
| exact sparsity penalty through a custom proximal operator | L0 Norm |
| cardinality budget enforced as a hard sparse-set projection | L0 Ball Indicator |
| stronger-than-L1 nonconvex sparsity regularization | L1/2 Quasi-Norm |
| exact group sparsity applied block by block | Group Sparsity |
| explicit low-rank promotion with a custom singular-value thresholding step | Matrix Rank Function |
| hard rank cap enforced by truncated SVD projection | Rank-r Indicator |
| fixed-norm vector structure enforced on the unit sphere | The Unit-Sphere Indicator |
| orthonormal-column matrix structure enforced by a Stiefel projection | The Stiefel-Manifold Indicator |
| simplex feasibility modeled through a custom projection operator | The Simplex Indicator |
| binary-valued decision vector modeled through a custom projection | The Binary Indicator |
| sparse regression with L0 penalty and linear constraints (UDF + constraints) | L0-Regularized Regression |
| robust regression with a smooth bounded-gradient loss (gradient UDF) | Log-Cosh Robust Regression |
| heavy-tailed robustness with a redescending influence function (gradient UDF) | Cauchy Loss Robust Regression |
| conditional quantile estimation with a smooth pinball loss (gradient UDF) | Smooth Quantile Regression |
| precision-focused regression with steeper small-error gradients (gradient UDF) | Wing Loss Regression |
| edge-preserving signal denoising with a differentiable TV penalty (gradient UDF) | Smooth Total Variation Denoising |
| GLM for positive-valued data with a Gamma deviance (gradient UDF) | Gamma Regression |

Core Convex Forms

These examples establish the basic affine, quadratic, and conic formulations used throughout the rest of this section. Use this group when you want the cleanest entry points for standard convex templates before moving to more application-shaped formulations.

Examples in this group

| Example | Main structure |
| --- | --- |
| Linear Program | affine objective, linear inequalities, and nonnegativity constraints |
| Quadratic Program | convex quadratic objective with affine equalities and inequalities |
| Semidefinite Program | PSD matrix variable with affine trace constraints |
| Second-Order Cone Program | Euclidean norm constraints coupled with affine structure |
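To make the quadratic template concrete, here is a minimal sketch of an equality-constrained quadratic program solved through its KKT system in plain NumPy. This illustrates only the underlying math, not the library's modeling API: the problem minimize (1/2) x'Px + q'x subject to Ax = b is equivalent to one linear solve.

```python
import numpy as np

# Equality-constrained QP via its KKT system (NumPy sketch of the math,
# independent of the ADMM modeling API):
#   minimize (1/2) x'Px + q'x   subject to   Ax = b
P = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite
q = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = P.shape[0], A.shape[0]
# Stationarity and feasibility stack into one symmetric linear system:
#   [P  A'] [x ]   [-q]
#   [A  0 ] [nu] = [ b]
KKT = np.block([[P, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(KKT, rhs)
x, nu = sol[:n], sol[n:]
print(x)   # primal minimizer; nu holds the equality multiplier
```

Inequality constraints and conic structure remove this closed form, which is where the solver-backed templates above come in.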

Data Fitting

These examples focus on regression and classification formulations that arise in statistical modeling and machine learning. Use this group when your objective is a sum of residuals or losses, possibly with regularization.

Examples in this group

| Example | Main structure |
| --- | --- |
| Least Squares | unconstrained minimization of squared residuals |
| Ridge Regression | least squares with L2 shrinkage |
| Huber Regression | robust regression with Huber loss |
| Sparse Logistic Regression | logistic loss with L1 regularization |
| SVM (L1) | hinge loss with sparse coefficients |
| Quantile Regression | asymmetric absolute loss for quantile estimation |
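The first two rows have closed forms worth keeping in mind as a sanity check. A NumPy sketch of the ridge solution (again, the math rather than the modeling API): minimizing ||Ax - b||² + λ||x||² gives x = (A'A + λI)⁻¹ A'b.

```python
import numpy as np

# Ridge regression in closed form (NumPy sketch of the underlying math):
#   minimize ||A x - b||^2 + lam * ||x||^2
# has the unique solution x = (A'A + lam I)^{-1} A'b.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                      # noiseless synthetic data

lam = 1e-3
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ b)
print(x_ridge)  # close to x_true since lam is small and the data is clean
```

Setting `lam = 0` recovers ordinary least squares; the nonsmooth losses in the rest of this group (Huber, hinge, pinball) have no such closed form and are where the solver earns its keep.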

Structured Matrix Problems

These examples involve matrix-valued decision variables with structural constraints such as low rank, sparsity, or positivity. Use this group when the decision variable is naturally a matrix and the formulation involves spectral or entrywise structure.

Examples in this group

| Example | Main structure |
| --- | --- |
| Robust PCA | low-rank plus sparse matrix decomposition |
| Sparse Inverse Covariance | sparse inverse covariance estimation with log-determinant |
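The workhorse operation behind the low-rank block of Robust PCA is singular-value thresholding, the proximal operator of the nuclear norm. A NumPy sketch of the step itself (not the example's full formulation):

```python
import numpy as np

# Singular-value thresholding: prox of tau * ||.||_* (NumPy sketch).
# Soft-threshold the singular values, keep the singular vectors.
def svt(M, tau):
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(S - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
L = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # rank 2
noisy = L + 0.01 * rng.standard_normal((8, 8))

out = svt(noisy, tau=0.5)
print(np.linalg.matrix_rank(out))  # noise-level singular values are zeroed
```

Because the threshold acts on singular values rather than entries, this is the spectral analogue of the elementwise soft thresholding used for the sparse block.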

Applications

These examples show how convex templates appear in domain-specific contexts such as information theory, signal processing, and finance. Use this group when you want to see how the same mathematical patterns translate into practical models.

Examples in this group

| Example | Main structure |
| --- | --- |
| Entropy Maximization | maximum-entropy distribution under affine constraints |
| Fault Detection | sparse event recovery with box constraints |
| Water Filling | logarithmic utility with a single resource budget |
| Image Deblurring | blurred observation with convolution and edge-preserving regularization |
| Portfolio Optimization | mean-variance allocation with budget and nonnegativity constraints |
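Water filling is a good example of how these applications reduce to simple structure: maximizing sum(log(alpha_i + x_i)) under a single budget has a closed-form solution x_i = max(level - alpha_i, 0), where the common "water level" is found by bisection. A NumPy sketch of that math (not the modeled formulation):

```python
import numpy as np

# Water-filling solution by bisection on the water level (NumPy sketch):
#   maximize sum(log(alpha + x))  subject to  sum(x) = P, x >= 0
def water_fill(alpha, P, iters=100):
    lo, hi = 1e-9, 1.0 / np.min(alpha) + P   # bracket for the water level
    for _ in range(iters):
        level = 0.5 * (lo + hi)
        # Allocation at this level; total spent grows with the level.
        if np.maximum(level - alpha, 0.0).sum() > P:
            hi = level
        else:
            lo = level
    return np.maximum(0.5 * (lo + hi) - alpha, 0.0)

alpha = np.array([0.5, 1.0, 2.0])   # channel "floor" heights
x = water_fill(alpha, P=1.0)
print(x, x.sum())  # budget goes to the lowest floors first
```

The modeled version is still worth writing: it generalizes immediately once extra constraints break the closed form.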

Examples with User-Defined Proximal Functions

These examples show how to extend ADMM when the modeling pattern is a strong fit but one proximal term is not available as a built-in atom. They cover custom sparsity penalties, low-rank and manifold-style constraints, and projection-based indicators. Most of the nonconvex UDFs below fall outside the disciplined convex programming rules enforced by tools such as CVXPY. In those cases the solver acts as a practical local method, and the result should be interpreted as a locally optimal solution or stationary point rather than a globally optimal one. Use this group when you need a custom proximal block but still want to stay inside the same symbolic modeling workflow.

Examples in this group

| Example | Main structure |
| --- | --- |
| L0 Norm | exact sparsity penalty via hard thresholding |
| L0 Ball Indicator | cardinality budget enforced by sparse-set projection |
| L1/2 Quasi-Norm | nonconvex sparsity promotion stronger than L1 |
| Group Sparsity | exact block sparsity through groupwise proximal updates |
| Matrix Rank Function | explicit low-rank promotion by singular-value thresholding |
| Rank-r Indicator | hard rank cap enforced by truncated SVD projection |
| The Unit-Sphere Indicator | fixed-norm vector feasibility on the unit sphere |
| The Stiefel-Manifold Indicator | orthonormal-column matrix feasibility on the Stiefel manifold |
| The Simplex Indicator | simplex projection for probability-style vectors |
| The Binary Indicator | binary-valued vector feasibility by coordinatewise projection |
| L0-Regularized Regression | combining a UDF with a sensing matrix and linear constraints |
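To give a feel for what a user-defined proximal block computes, here is the prox from the first row, hard thresholding for the L0 penalty, as a plain NumPy function. The exact UDF signature is library-specific; this sketch shows only the math: minimizing tau*||x||_0 + (1/2)||x - v||² keeps an entry when v_i² > 2*tau and zeroes it otherwise.

```python
import numpy as np

# Hard thresholding: the proximal operator of tau * ||.||_0, applied
# elementwise (NumPy sketch; not the library's UDF interface).
# Keeping v_i costs tau; zeroing it costs v_i^2 / 2. Keep the cheaper option.
def prox_l0(v, tau):
    out = v.copy()
    out[v * v <= 2.0 * tau] = 0.0
    return out

v = np.array([3.0, -0.5, 0.05, -2.0])
print(prox_l0(v, tau=0.5))  # entries with |v_i| <= 1 are snapped to zero
```

The indicator-style rows work the same way, except the prox is a projection (onto the sparse set, the simplex, the sphere, and so on) rather than a shrinkage.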

Examples with User-Defined Smooth Functions

These examples demonstrate the `grad` UDF path: instead of deriving a proximal operator, you supply only `eval` (the function value) and `grad` (the gradient), and the C++ backend solves the proximal subproblem automatically via gradient descent with Armijo backtracking line search.

This path is ideal for smooth custom losses where the proximal operator has no convenient closed form — robust losses from statistics, domain-specific objectives from machine learning, GLM deviance functions, and structural penalties like total variation. Each example below is a complete, self-contained formulation that composes the custom loss with standard ADMM atoms and constraints.
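The mechanism is easy to sketch in NumPy for the log-cosh loss. This is an illustration of the idea, not the backend's actual implementation: given only `eval` and `grad`, the proximal subproblem min_x f(x) + (rho/2)||x - v||² is solved by gradient descent, with Armijo backtracking choosing the step size.

```python
import numpy as np

# Grad-UDF mechanism, sketched in NumPy: solve the proximal subproblem
#   min_x f(x) + (rho/2) * ||x - v||^2
# by gradient descent with Armijo backtracking, using only eval and grad.
def f_eval(x):            # log-cosh loss
    return np.sum(np.log(np.cosh(x)))

def f_grad(x):            # its gradient tanh(x), bounded in (-1, 1)
    return np.tanh(x)

def prox_by_gradient_descent(v, rho, steps=200):
    obj = lambda x: f_eval(x) + 0.5 * rho * np.sum((x - v) ** 2)
    x = v.copy()
    for _ in range(steps):
        g = f_grad(x) + rho * (x - v)
        t = 1.0
        # Armijo: halve the step until it yields sufficient decrease.
        while obj(x - t * g) > obj(x) - 0.5 * t * np.sum(g * g):
            t *= 0.5
        x = x - t * g
    return x

v = np.array([2.0, -0.1])
x = prox_by_gradient_descent(v, rho=1.0)
print(x)  # satisfies tanh(x) + (x - v) = 0 to high accuracy
```

Because the subproblem adds a strongly convex quadratic to the loss, this inner solve is well conditioned even when the loss itself is flat far from the data.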

Examples in this group

| Example | Main structure |
| --- | --- |
| Log-Cosh Robust Regression | smooth L1 approximation with bounded gradient \(\tanh(r)\) |
| Cauchy Loss Robust Regression | heavy-tailed loss with redescending influence function |
| Smooth Quantile Regression | differentiable pinball loss for conditional quantile estimation |
| Wing Loss Regression | precision-focused loss from face landmark localization |
| Smooth Total Variation Denoising | differentiable TV penalty with structural (non-elementwise) gradient |
| Gamma Regression | GLM with Gamma deviance and log link for positive responses |

For detailed symbol documentation, see the Python API.