5.1. Global functions

ADMM Python SDK global functions.

Global functions can be applied to expressions or constants.

For example:

>>> print(inrange(numpy.arange(9).reshape(3, 3), 3, 5))
[[inf inf inf]
[ 0.  0.  0.]
[inf inf inf]]
>>> print(inrange(Constant(range(9)).reshape(3, 3), 3, 5))
[[inf inf inf]
[ 0.  0.  0.]
[inf inf inf]]
>>> print(inrange(Var(3, 3), 3, 5))
Expr((3, 3), inrange('Var', 'ndarray', 'ndarray'))

For constant input arguments, a global function returns a Constant; otherwise it returns an Expr.

Besides modeling, these functions can also be used for plain matrix operations. In that case they serve as an alternative to numpy (with less functionality).

The following example demonstrates how to do edge detection for an input image with global functions.

# Load/save image with Pillow (Constant, corr2d, square and sqrt come
# from the SDK's global functions)
import sys
from PIL import Image

image = Constant(Image.open(sys.argv[1]).convert('L'))
sobel = Constant([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

x = corr2d(image, sobel, 'same')
y = corr2d(image, sobel.T, 'same')

result = sqrt(square(x) + square(y))

# Convert Constant to numpy uint8 array
resimg = Image.fromarray(result.asDense().data.astype('B'))
resimg.save("_{}".format(sys.argv[1]))

5.1.1. Functions

The following table lists all available global functions:

  • abs() - Get the absolute value for an expression.

  • exp() - Return e ^ x.

  • log() - Return log(x).

  • entropy() - Return a tensor whose elements are x * log(x).

  • norm() - Compute the norm for an expression or constant.

  • diag() - For a matrix, retrieve the main diagonal as a vector.

  • square() - Compute x ^ 2 elementwise.

  • sqrt() - Compute x ^ 1/2 elementwise.

  • log_det() - Return the log-determinant of a matrix as a scalar.

  • trace() - Return the sum of the diagonal entries of a matrix as a scalar.

  • max() - Return the maximum element.

  • min() - Return the minimum element.

  • sum() - Return the sum of all elements.

  • tv1d() - 1D total variation.

  • tv2d() - 2D total variation.

  • maximum() - Element-wise maximum.

  • minimum() - Element-wise minimum.

  • power() - Element-wise power.

  • logistic() - Logistic function.

  • huber() - Huber function.

  • bathtub() - Bathtub function.

  • squared_bathtub() - Squared bathtub function.

  • kl_div() - KL divergence.

  • conv2d() - 2D convolution.

  • corr2d() - 2D correlation.

  • inrange() - Check if elements are in range.

  • hstack() - Stack expressions horizontally (column wise).

  • vstack() - Stack expressions vertically (row wise).

  • scalene() - Asymmetric linear function (different slopes for positive and negative entries).

  • vapnik() - Vapnik loss: max(norm(x, 2) - epsilon, 0).

  • squared_hinge() - Squared hinge loss: max(1 - x, 0)^2.

abs(x: TensorLike | ArrayLike)

Get the absolute value for an expression.

param x:

Type: Union[TensorLike, ArrayLike]

The expression.

return:

Type: Union[Expr, Constant]

The absolute value.

bathtub(x: TensorLike | ArrayLike, delta: ArrayLike = 1)

Return a tensor; each element is the result of the w-insensitive loss function.

param x:

Type: Union[TensorLike, ArrayLike]

The expression to which the w-insensitive loss function will be applied.

param delta:

Type: ArrayLike = 1

The threshold of w-insensitive loss function.

return:

Type: Union[Expr, Constant]

The result tensor.

conv2d(x: TensorLike | ArrayLike, k: ArrayLike, mode: str = 'same')

Perform 2d convolution.

param x:

Type: Union[TensorLike, ArrayLike]

The expression, should be a matrix.

param k:

Type: ArrayLike

Another matrix.

param mode:

Type: str = ‘same’

Padding strategy:

Assume x has shape (m, n) and k has shape (p, q).

  • ‘full’: Pad with 0s and return all elements. Result shape is (m + p - 1, n + q - 1).

  • ‘same’: Result has the same shape as x ; it is the submatrix centered within the ‘full’ output.

  • ‘valid’: Return only elements that do not rely on padding. Result shape is (max(m, p) - min(m, p) + 1, max(n, q) - min(n, q) + 1).

return:

Type: Union[Expr, Constant]

The result matrix.

Note

If mode is ‘valid’, shapes for x(m, n) and k(p, q) should satisfy:

(m - p) * (n - q) >= 0

That is, one matrix can be placed into another.
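
The shape rules above can be checked with a small helper (a sketch; conv2d_shape is a hypothetical name, not part of the SDK):

```python
def conv2d_shape(m, n, p, q, mode='same'):
    """Output shape of conv2d/corr2d for an (m, n) input and (p, q) kernel."""
    if mode == 'full':
        return (m + p - 1, n + q - 1)
    if mode == 'same':
        return (m, n)
    if mode == 'valid':
        # requires (m - p) * (n - q) >= 0, i.e. one matrix fits inside the other
        if (m - p) * (n - q) < 0:
            raise ValueError("'valid' needs one matrix to fit inside the other")
        return (max(m, p) - min(m, p) + 1, max(n, q) - min(n, q) + 1)
    raise ValueError("unknown mode: " + mode)

print(conv2d_shape(5, 5, 3, 3, 'full'))   # (7, 7)
print(conv2d_shape(5, 5, 3, 3, 'same'))   # (5, 5)
print(conv2d_shape(5, 5, 3, 3, 'valid'))  # (3, 3)
```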

corr2d(x: TensorLike | ArrayLike, k: ArrayLike, mode: str = 'same')

Compute cross-correlation. In some machine learning frameworks, the convolution operator is actually cross-correlation (e.g. tensorflow.nn.conv2d).

param x:

Type: Union[TensorLike, ArrayLike]

The expression, should be a matrix.

param k:

Type: ArrayLike

Another matrix.

param mode:

Type: str = ‘same’

Padding strategy:

Assume x has shape (m, n) and k has shape (p, q).

  • ‘full’: Pad with 0s and return all elements. Result shape is (m + p - 1, n + q - 1).

  • ‘same’: Result has the same shape as x ; it is the submatrix centered within the ‘full’ output.

  • ‘valid’: Return only elements that do not rely on padding. Result shape is (max(m, p) - min(m, p) + 1, max(n, q) - min(n, q) + 1).

return:

Type: Union[Expr, Constant]

The result matrix.

Note

If mode is ‘valid’, shapes for x(m, n) and k(p, q) should satisfy:

(m - p) * (n - q) >= 0

That is, one matrix can be placed into another.

diag(x: TensorLike | ArrayLike)

For a matrix, retrieve the main-diagonal elements as a vector; in this case the matrix should be square.

For a vector, construct a diagonal matrix from it.

param x:

Type: Union[TensorLike, ArrayLike]

The vector or matrix.

return:

Type: Union[Expr, Constant]

The result matrix or vector.

entropy(x: TensorLike | ArrayLike)

Return a tensor, each element indicates the result of x * log(x).

>>> c = numpy.square(numpy.random.randn(10))
>>> scipy.stats.entropy(c) + numpy.sum(entropy(c / numpy.sum(c)))
0.0

param x:

Type: Union[TensorLike, ArrayLike]

The expression which requires each element to be greater than 0.

return:

Type: Union[Expr, Constant]

The result tensor.

Note

Standard Shannon entropy: -sum(x * log(x)), that is: -sum(entropy(x))
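
The relation to Shannon entropy can be checked with plain numpy; here the elementwise x * log(x) behavior is re-implemented for illustration (entropy_elems is a hypothetical name):

```python
import numpy as np

def entropy_elems(x):
    # elementwise x * log(x), mirroring the documented behavior
    x = np.asarray(x, dtype=float)
    return x * np.log(x)

p = np.array([0.5, 0.25, 0.25])      # a probability distribution
shannon = -np.sum(entropy_elems(p))  # -sum(x * log(x)), in nats (= 1.5 * ln 2 here)
```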

exp(x: TensorLike | ArrayLike)

Return e ^ x.

param x:

Type: Union[TensorLike, ArrayLike]

The expression for exponent.

return:

Type: Union[Expr, Constant]

The result expression.

hstack(*tensors: TensorLike | ArrayLike)

Stack expressions in sequence horizontally (column wise).

param *tensors:

Type: Union[TensorLike, ArrayLike]

The expressions to stack.

return:

Type: Union[Expr, Constant]

The result expression.

huber(x: TensorLike | ArrayLike, delta: ArrayLike = 1)

Return a tensor; each element is the result of the Huber function.

param x:

Type: Union[TensorLike, ArrayLike]

The expression to which the Huber function will be applied.

param delta:

Type: ArrayLike = 1

The threshold of the Huber function.

return:

Type: Union[Expr, Constant]

The result tensor.
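
The SDK does not spell out its exact Huber scaling here. As an assumption, a numpy sketch of the classic definition (quadratic inside delta, linear outside), under a hypothetical name huber_np:

```python
import numpy as np

def huber_np(x, delta=1):
    # classic Huber: 0.5 * x^2 for |x| <= delta, delta * (|x| - 0.5 * delta) beyond
    # (an assumed scaling; check the SDK for its exact convention)
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    return np.where(ax <= delta, 0.5 * x ** 2, delta * (ax - 0.5 * delta))
```

Note that both branches agree at |x| = delta (value 0.5 * delta^2), so the function is continuous.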

inrange(x: TensorLike | ArrayLike, lb: ArrayLike, ub: ArrayLike)

Return a tensor containing only inf and 0 ; each element indicates whether the corresponding element of x is in the range [lb, ub] : 0 if it is, inf otherwise.

param x:

Type: Union[TensorLike, ArrayLike]

The expression to be tested.

param lb:

Type: ArrayLike

The lower bound constant. It can be broadcasted.

param ub:

Type: ArrayLike

The upper bound constant. It can be broadcasted.

return:

Type: Union[Expr, Constant]

The indicator tensor.
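
The indicator semantics (0 when in range, inf otherwise) can be mimicked in numpy (inrange_np is a hypothetical name):

```python
import numpy as np

def inrange_np(x, lb, ub):
    # 0 where lb <= x <= ub, inf elsewhere; lb and ub broadcast like the SDK version
    x = np.asarray(x, dtype=float)
    return np.where((x >= lb) & (x <= ub), 0.0, np.inf)

print(inrange_np(np.arange(9).reshape(3, 3), 3, 5))
# [[inf inf inf]
#  [ 0.  0.  0.]
#  [inf inf inf]]
```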

kl_div(p: TensorLike | ArrayLike, q: TensorLike | ArrayLike)

Return a tensor, each element indicates the result of p * log(p / q). The sum of the result represents the Kullback-Leibler (KL) divergence between p and q.

>>> p = numpy.square(numpy.random.randn(10))
>>> q = numpy.square(numpy.random.randn(10))
>>> pnorm = p / numpy.sum(p)
>>> qnorm = q / numpy.sum(q)
>>> scipy.stats.entropy(p, q) - numpy.sum(kl_div(pnorm, qnorm))
0.0

param p:

Type: Union[TensorLike, ArrayLike]

The expression p which requires each element to be greater than 0.

param q:

Type: Union[TensorLike, ArrayLike]

The expression q which requires each element to be greater than 0.

return:

Type: Union[Expr, Constant]

The result tensor.

Note

Standard Kullback-Leibler divergence (relative entropy) is: sum(p * log(p / q)), that is: sum(kl_div(p, q))
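
The elementwise definition can be reproduced with numpy (kl_div_elems is a hypothetical name); the summed divergence is positive for distinct distributions and zero when p equals q:

```python
import numpy as np

def kl_div_elems(p, q):
    # elementwise p * log(p / q); summing gives the KL divergence
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return p * np.log(p / q)

p = np.array([0.5, 0.25, 0.25])
q = np.array([0.25, 0.5, 0.25])
kl = np.sum(kl_div_elems(p, q))  # 0.25 * ln 2 for these distributions
```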

log(x: TensorLike | ArrayLike)

Return log(x).

param x:

Type: Union[TensorLike, ArrayLike]

The argument for log.

return:

Type: Union[Expr, Constant]

The result.

log_det(x: TensorLike | ArrayLike)

Return a scalar which indicates the log-determinant result of a matrix.

param x:

Type: Union[TensorLike, ArrayLike]

The expression which should be a square matrix.

return:

Type: Union[Expr, Constant]

The result scalar. Returns inf if x is not a strictly positive definite matrix.
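
A numpy sketch of this behavior, using a Cholesky factorization so that non-positive-definite input maps to inf (log_det_np is a hypothetical name; it assumes a symmetric input matrix):

```python
import numpy as np

def log_det_np(x):
    x = np.asarray(x, dtype=float)
    try:
        L = np.linalg.cholesky(x)  # fails unless x is strictly positive definite
    except np.linalg.LinAlgError:
        return np.inf
    # log det(x) = 2 * sum(log(diag(L))) for the Cholesky factor L
    return 2.0 * np.sum(np.log(np.diag(L)))
```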

logistic(x: TensorLike | ArrayLike, b: ArrayLike = 1)

Return a tensor, each element indicates the result of log(e ^ x + b).

param x:

Type: Union[TensorLike, ArrayLike]

The input expression x .

param b:

Type: ArrayLike = 1

The tensor parameter b in logistic function.

return:

Type: Union[Expr, Constant]

The result tensor.
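
With the documented formula log(e ^ x + b), the default b = 1 gives the familiar softplus function. A numpy sketch (logistic_np is a hypothetical name):

```python
import numpy as np

def logistic_np(x, b=1):
    # elementwise log(e^x + b); b = 1 is the softplus function log(1 + e^x)
    return np.log(np.exp(np.asarray(x, dtype=float)) + b)

logistic_np(0.0)  # log(2), about 0.6931
```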

max(x: TensorLike | ArrayLike)

Return a scalar which indicates the largest item in the tensor.

param x:

Type: Union[TensorLike, ArrayLike]

The expression to which the maximum function will be applied.

return:

Type: Union[Expr, Constant]

The result scalar.

maximum(x: TensorLike | ArrayLike, b: ArrayLike)

Return a new tensor; each element is the larger of the corresponding elements of x and b .

param x:

Type: Union[TensorLike, ArrayLike]

The expression.

param b:

Type: ArrayLike

The lower bound tensor constant.

return:

Type: Union[Expr, Constant]

The result expression.

min(x: TensorLike | ArrayLike)

Return a scalar which indicates the smallest element in the tensor.

param x:

Type: Union[TensorLike, ArrayLike]

The expression to which the minimum function will be applied.

return:

Type: Union[Expr, Constant]

The result scalar.

minimum(x: TensorLike | ArrayLike, b: ArrayLike)

Return a new tensor; each element is the smaller of the corresponding elements of x and b .

param x:

Type: Union[TensorLike, ArrayLike]

The expression.

param b:

Type: ArrayLike

The upper bound tensor constant.

return:

Type: Union[Expr, Constant]

The result expression.

norm(x: TensorLike | ArrayLike, ord: int | str = None)

Compute the norm for an expression or constant.

param x:

Type: Union[TensorLike, ArrayLike]

The expression or constant.

param ord:

Type: Union[int, str] = None

The norm level:

  • None - level-2 norm for a vector, Frobenius norm for a matrix.

  • 1 - level-1 norm: sum(abs(x)) for a vector; for a matrix, the maximum absolute column sum.

  • 2 - level-2 norm: sqrt(sum(x^2)) for a vector; for a matrix, the largest singular value.

  • inf - inf norm: max(abs(x)) for a vector; for a matrix, the maximum absolute row sum.

  • ‘fro’ - Frobenius norm: sqrt(sum(x^2)) for a matrix.

  • ‘nuc’ - nuclear norm: the sum of all singular values.

return:

Type: Union[Expr, Constant]

The scalar norm value.
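
These levels mirror numpy.linalg.norm, which can be used to sanity-check expected values:

```python
import numpy as np

v = np.array([3.0, -4.0])
print(np.linalg.norm(v))          # 5.0  (default: level-2 norm)
print(np.linalg.norm(v, 1))       # 7.0  (sum of absolute values)
print(np.linalg.norm(v, np.inf))  # 4.0  (max absolute value)

m = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.norm(m, 'fro'))   # sqrt(1 + 4 + 9 + 16) = sqrt(30)
print(np.linalg.norm(m, 1))       # 6.0  (max column sum of absolute values)
```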

power(x: TensorLike | ArrayLike, p: ArrayLike)

Return a tensor; each element is the corresponding element of x raised to the power p .

param x:

Type: Union[TensorLike, ArrayLike]

The expression x . When 0 < p < 1 , every element of x is required to be greater than or equal to 0.

param p:

Type: ArrayLike

The scalar parameter p , which must be greater than 0.

return:

Type: Union[Expr, Constant]

The result tensor.

scalene(x: TensorLike | ArrayLike, a: ArrayLike = -1, b: ArrayLike = 1)

Return a new tensor, each element of which depends on the corresponding element of x :

  1. when x < 0, element = a * x

  2. when x = 0, element = 0

  3. when x > 0, element = b * x

param x:

Type: Union[TensorLike, ArrayLike]

The expression.

param a:

Type: ArrayLike = -1

The coefficient for negative entries.

param b:

Type: ArrayLike = 1

The coefficient for positive entries.

return:

Type: Union[Expr, Constant]

The result expression.
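
A numpy sketch of the piecewise definition (scalene_np is a hypothetical name); note that with the defaults a = -1, b = 1 the function reduces to abs(x):

```python
import numpy as np

def scalene_np(x, a=-1, b=1):
    # a * x for negative entries, b * x for positive entries, 0 at 0
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, a * x, b * x)

scalene_np([-2.0, 0.0, 3.0])  # defaults give abs(x): [2., 0., 3.]
```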

sqrt(x: TensorLike | ArrayLike)

Compute x ^ 1/2 elementwise.

param x:

Type: Union[TensorLike, ArrayLike]

The source tensor.

return:

Type: Union[Expr, Constant]

The result tensor.

square(x: TensorLike | ArrayLike)

Compute x ^ 2 elementwise.

param x:

Type: Union[TensorLike, ArrayLike]

The source tensor.

return:

Type: Union[Expr, Constant]

The result tensor.

squared_bathtub(x: TensorLike | ArrayLike, delta: ArrayLike = 1)

Return a tensor; each element is the result of the squared w-insensitive loss function.

param x:

Type: Union[TensorLike, ArrayLike]

The expression to which the squared w-insensitive loss will be applied.

param delta:

Type: ArrayLike = 1

The threshold of squared w-insensitive loss function.

return:

Type: Union[Expr, Constant]

The result tensor.

squared_hinge(x: TensorLike | ArrayLike)

Compute max(1 - x, 0) ^ 2 elementwise.

param x:

Type: Union[TensorLike, ArrayLike]

The source tensor.

return:

Type: Union[Expr, Constant]

The result tensor.
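
The documented formula max(1 - x, 0) ^ 2 can be sketched directly in numpy (squared_hinge_np is a hypothetical name):

```python
import numpy as np

def squared_hinge_np(x):
    # elementwise max(1 - x, 0) ** 2
    return np.maximum(1 - np.asarray(x, dtype=float), 0) ** 2

squared_hinge_np([-1.0, 0.0, 1.0, 2.0])  # [4., 1., 0., 0.]
```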

sum(x: TensorLike | ArrayLike)

Compute the sum of all elements in the tensor.

param x:

Type: Union[TensorLike, ArrayLike]

The source tensor.

return:

Type: Union[Expr, Constant]

The result scalar.

trace(x: TensorLike | ArrayLike)

Return a scalar which indicates the sum of the diagonal entries of a matrix.

param x:

Type: Union[TensorLike, ArrayLike]

The expression.

return:

Type: Union[Expr, Constant]

The result scalar.

tv1d(x: TensorLike | ArrayLike, w: ArrayLike = 1, p: int = 1)

One dimensional total variation.

param x:

Type: Union[TensorLike, ArrayLike]

The expression, should be a vector.

param w:

Type: ArrayLike = 1

The weights, can be broadcasted.

param p:

Type: int = 1

The norm level.

return:

Type: Union[Expr, Constant]

The result scalar.

tv2d(x: TensorLike | ArrayLike, p: int = 1)

Return the total variation of a matrix.

param x:

Type: Union[TensorLike, ArrayLike]

The expression, should be a matrix.

param p:

Type: int = 1

The norm level, supports 1 or 2.

return:

Type: Union[Expr, Constant]

The result scalar.

vapnik(x: TensorLike | ArrayLike, epsilon: ArrayLike = 0)

Return Vapnik loss: max(norm(x, 2) - epsilon, 0).

param x:

Type: Union[TensorLike, ArrayLike]

The expression.

param epsilon:

Type: ArrayLike = 0

The threshold.

return:

Type: Union[Expr, Constant]

The result scalar.
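
The documented formula max(norm(x, 2) - epsilon, 0) can be sketched in numpy (vapnik_np is a hypothetical name): the loss is zero inside the epsilon-ball and grows linearly outside it.

```python
import numpy as np

def vapnik_np(x, epsilon=0):
    # max(||x||_2 - epsilon, 0)
    return max(np.linalg.norm(np.asarray(x, dtype=float)) - epsilon, 0.0)

vapnik_np([3.0, 4.0], epsilon=2)  # ||x|| = 5, so the loss is 3.0
```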

vstack(*tensors: TensorLike | ArrayLike)

Stack expressions in sequence vertically (row wise).

param *tensors:

Type: Union[TensorLike, ArrayLike]

The expressions to stack.

return:

Type: Union[Expr, Constant]

The result expression.
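
The stacking semantics match their numpy counterparts, which can be used to preview result shapes:

```python
import numpy as np

a = np.zeros((2, 2))
b = np.ones((2, 3))
print(np.hstack([a, b]).shape)  # (2, 5): column counts add, row counts must match

c = np.ones((4, 2))
print(np.vstack([a, c]).shape)  # (6, 2): row counts add, column counts must match
```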