ard.utils.mathematics#
Functions

| smooth_max | Non-overflowing version of the smooth max function (see refs 3 and 4 below). |
| smooth_min | Finds the smooth min using the smooth_max function. |
| smooth_norm | Smooth version of the Frobenius (2-) norm. |
| smooth_norm_vec | Vectorized version of smooth_norm. |
- ard.utils.mathematics.smooth_max(x, s=1000.0)[source]#
Non-overflowing version of the smooth max function (see refs 3 and 4 below). Calculates the smooth max (a.k.a. softmax or LogSumExp) of the elements in x.
Based on the implementation in the BYU FLOW Lab's FLOWFarm software at (1) byuflowlab/FLOWFarm.jl, which in turn builds on John D. Cook's writings at (2) https://www.johndcook.com/blog/2010/01/13/soft-maximum/ and (3) https://www.johndcook.com/blog/2010/01/20/how-to-compute-the-soft-maximum/, and on an article in the Feedly blog, (4) https://blog.feedly.com/tricks-of-the-trade-logsumexp/.
- Parameters:
x (list) -- list of values to be compared
s (float, optional) -- alpha for smooth max function. Defaults to 1000.0. Larger values of s lead to more accurate results, but reduce the smoothness of the function.
- Returns:
the smooth max of the provided x list
- Return type:
float
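The overflow-safe LogSumExp trick described in refs 3 and 4 can be sketched as follows. This is an illustrative reimplementation based on the docstring, not the package's actual source; the placement of the scaling factor s follows the standard soft-maximum formulation:

```python
import numpy as np

def smooth_max(x, s=1000.0):
    """Overflow-safe smooth max via the LogSumExp trick (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    # Subtracting the maximum before exponentiating keeps exp() from overflowing;
    # it cancels exactly when the log is taken.
    m = np.max(s * x)
    return (m + np.log(np.sum(np.exp(s * x - m)))) / s

# As s grows, the result approaches the true maximum from above:
print(smooth_max([1.0, 2.0, 3.0], s=1000.0))  # ~3.0
```

Note that without the max-subtraction, `np.exp(s * x)` would overflow for even moderate inputs (e.g. s=1000 and x=1 gives exp(1000)), which is why the naive formula fails.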
- ard.utils.mathematics.smooth_min(x, s=1000.0)[source]#
Finds the smooth min using the smooth_max function.
- Parameters:
x (list) -- list of values to be compared
s (float, optional) -- alpha for smooth min function. Defaults to 1000.0. Larger values of s lead to more accurate results, but reduce the smoothness of the function.
- Returns:
the smooth min of the provided x list
- Return type:
float
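One way to obtain a smooth min from smooth_max is the identity min(x) = -max(-x). This is a sketch consistent with the docstring ("using the smooth_max function"); the package may implement it differently:

```python
import numpy as np

def smooth_max(x, s=1000.0):
    # Overflow-safe LogSumExp, as in smooth_max above.
    x = np.asarray(x, dtype=float)
    m = np.max(s * x)
    return (m + np.log(np.sum(np.exp(s * x - m)))) / s

def smooth_min(x, s=1000.0):
    """Smooth min via negation: min(x) == -max(-x)."""
    return -smooth_max(-np.asarray(x, dtype=float), s)

# The result approaches the true minimum from below as s grows:
print(smooth_min([1.0, 2.0, 3.0]))  # ~1.0
```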
- ard.utils.mathematics.smooth_norm(vec, buf=1e-12)[source]#
Smooth version of the Frobenius (2-) norm. This version is nearly equivalent to the 2-norm, with a maximum absolute error on the order of the buffer value. The maximum error in the gradient is near unity, though the gradient error is generally about twice the error in the value itself. The key benefit of the smooth norm is that it is differentiable at 0.0, where the gradient of the standard norm is undefined.
- Parameters:
vec (np.ndarray) -- input vector to be normed
buf (float, optional) -- buffer value included in the sum of squares part of the norm. Defaults to 1E-12.
- Returns:
normed result
- Return type:
float
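The buffered norm can be sketched as below. The exact placement of the buffer is an assumption: adding buf squared inside the sum of squares makes the value at the origin equal to buf, which matches the docstring's claim that the maximum absolute error is on the order of the buffer value:

```python
import numpy as np

def smooth_norm(vec, buf=1e-12):
    """2-norm with a small buffer so the gradient is defined at 0 (sketch).

    The placement of buf is an assumption; here buf**2 is added to the
    sum of squares, so smooth_norm(0) == buf rather than 0.
    """
    vec = np.asarray(vec, dtype=float)
    return np.sqrt(np.sum(vec**2) + buf**2)

print(smooth_norm([3.0, 4.0]))  # ~5.0
print(smooth_norm([0.0, 0.0]))  # ~1e-12; the gradient here is 0, not undefined
```

The gradient of this expression is `vec / smooth_norm(vec)`, whose denominator is never zero, which is exactly the differentiability-at-zero property the docstring highlights.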
- ard.utils.mathematics.smooth_norm_vec(vec, buf=1e-12)#
Vectorized version of smooth_norm. Takes similar arguments as smooth_norm but with additional array axes over which smooth_norm is mapped.
- Parameters:
vec (ndarray)
buf (float)
- Return type:
float
Original documentation:
Smooth version of the Frobenius (2-) norm. This version is nearly equivalent to the 2-norm, with a maximum absolute error on the order of the buffer value. The maximum error in the gradient is near unity, though the gradient error is generally about twice the error in the value itself. The key benefit of the smooth norm is that it is differentiable at 0.0, where the gradient of the standard norm is undefined.
- Args:
vec (np.ndarray): input vector to be normed
buf (float, optional): buffer value included in the sum of squares part of the norm. Defaults to 1E-12.
- Returns:
(float): normed result
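The vectorized wrapper can be sketched by applying the buffered norm over the last axis of an array. The axis convention is an assumption (the docstring says only that smooth_norm is mapped over the additional axes); the package may instead use numpy.vectorize or a different axis:

```python
import numpy as np

def smooth_norm_vec(vec, buf=1e-12):
    """Apply the buffered 2-norm over the last axis of an array (sketch).

    The buf**2 placement and the last-axis convention are assumptions
    for illustration, not the package's confirmed behavior.
    """
    vec = np.asarray(vec, dtype=float)
    return np.sqrt(np.sum(vec**2, axis=-1) + buf**2)

# A (2, 2) input yields one norm per row, shape (2,):
arr = np.array([[3.0, 4.0], [6.0, 8.0]])
print(smooth_norm_vec(arr))  # ~[5.0, 10.0]
```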