ParaMonte Fortran 2.0.0
Parallel Monte Carlo and Machine Learning Library
pm_optimization Module Reference

This module contains procedures, generic interfaces, and types for numerical optimizations of mathematical functions.

Data Types

interface  getMinBrent
 Generate and return the minimum value and the corresponding abscissa xmin of the input 1-dimensional function isolated to a fractional precision of about tol using the Brent method.
 
interface  isBracketMax
 Generate and return .true. if and only if a concave quadratic curve can fit the specified input triple [xmin, xlow, xupp] and the function value at the middle point xmin is larger than both boundary point function values.
 
interface  isBracketMin
 Generate and return .true. if and only if a convex quadratic curve can fit the specified input triple [xmin, xlow, xupp] and the function value at the middle point xmin is smaller than both boundary point function values.
 
interface  isFailedMinPowell
 Generate and return .true. if and only if the algorithm fails to find the minimum value and the corresponding abscissa xmin(1:ndim) of the input function of arbitrary (ndim)-dimensional support, isolated to a fractional precision of about tol, using the Powell unconstrained derivative-free minimization method.
 
interface  setBracketMax
 Refine an initial input interval such that the final returned interval is guaranteed to contain the maximum of the user-specified input function.
 
interface  setBracketMin
 Refine an initial input interval such that the final returned interval is guaranteed to contain the minimum of the user-specified input function.
 
interface  setMinBrent
 Compute and return the minimum value and the corresponding abscissa xmin of the input 1-dimensional function isolated to a fractional precision of about tol using the Brent method.
 
interface  setMinPowell
 Compute and return the minimum value and the corresponding abscissa xmin(1:ndim) of the input function of arbitrary (ndim)-dimensional support, isolated to a fractional precision of about tol, using the Powell unconstrained derivative-free minimization method.
 

Variables

character(*, SK), parameter MODULE_NAME = "@pm_optimization"
 

Detailed Description

This module contains procedures, generic interfaces, and types for numerical optimizations of mathematical functions.

The following methods are currently included in this module:

  1. setBracketMin, which creates an interval triplet guaranteed to contain the minimum of a convex function.
  2. setBracketMax, which creates an interval triplet guaranteed to contain the maximum of a concave function.
  3. isBracketMin, which tests whether an interval triplet brackets the minimum of a convex function.
  4. isBracketMax, which tests whether an interval triplet brackets the maximum of a concave function (a minimal sketch of the bracketing test appears after this list).
  5. getMinBrent and setMinBrent perform unconstrained derivative-free minimization of scalar univariate functions.
    The Brent method is a hybrid iterative minimization algorithm combining the golden-section search with successive parabolic interpolation.
    It has the reliability of the golden-section search, yet it can be as quick as the less reliable purely interpolation-based methods.
    The algorithm attempts the potentially fast-converging parabolic interpolation step whenever possible, but falls back to the more robust golden-section step when necessary.
    The method is due to Richard Brent, whose closely related root-finding algorithm builds on an earlier algorithm by Theodorus Dekker and is consequently also known as the Brent–Dekker method.
    A minimal sketch of the golden-section backbone of the method appears after this list.
  6. isFailedMinPowell and setMinPowell perform unconstrained derivative-free minimization of multivariate functions.
    The Powell method, strictly the Powell conjugate direction method, is an algorithm proposed by Michael J. D. Powell for finding a local minimum of a function.
    The function need not be differentiable, and no derivatives are taken.
    The function must be a real-valued function of a fixed number of real-valued inputs.
    The caller passes in the initial point together with a set of initial search vectors.
    Typically, \(N\) search vectors \(\{s_{1}, \dots, s_{N}\}\) are passed in, which are simply the unit vectors aligned with each coordinate axis.
    The method minimizes the function by a bi-directional search along each search vector, in turn.
    The bi-directional line search along each search vector can be done by the golden-section search or the Brent method.
    Let the minima found during each bi-directional line search be \(\{x_{0} + \alpha_{1}s_{1}, x_{0} + \sum_{i=1}^{2}\alpha_{i}s_{i}, \dots, x_{0} + \sum_{i=1}^{N}\alpha_{i}s_{i}\}\), where \(x_{0}\) is the initial starting point and \(\alpha_{i}\) is the scalar determined during the bi-directional search along \(s_{i}\).
    The new position \(x_{1}\) can then be expressed as a linear combination of the search vectors, i.e., \(x_{1} = x_{0} + \sum_{i=1}^{N} \alpha_{i}s_{i}\).
    The new displacement vector \(\sum_{i=1}^{N} \alpha_{i}s_{i}\) becomes a new search vector and is added to the end of the search vector list.
    Meanwhile, the search vector that contributed most to the new direction, i.e., the most successful one, \(i_{d} = \arg\max_{i=1}^{N} |\alpha_{i}| \|s_{i}\|\), is deleted from the search vector list.
    The new set of \(N\) search vectors is \(\{s_{1}, \dots, s_{i_{d}-1}, s_{i_{d}+1}, \dots, s_{N}, \sum_{i=1}^{N} \alpha_{i}s_{i}\}\).
    The algorithm iterates in this manner until no significant improvement is made.
    The method is useful for calculating the local minimum of a continuous but complex function, especially one without an underlying mathematical definition, because no derivatives are needed.
    The basic algorithm is simple; the complexity lies in the linear searches along the search vectors, which can be achieved via the Brent method.
    A minimal self-contained sketch of one Powell cycle appears after this list.
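
The bracketing test of items 3 and 4 reduces to a simple ordinate comparison. The following self-contained Fortran sketch illustrates the condition for a minimum bracket with a hypothetical convex objective; it illustrates the idea only and is not the library's isBracketMin implementation:

    ! Illustrative sketch: a triple [xlow, xmid, xupp] brackets a minimum
    ! of f if the middle ordinate lies strictly below both boundary
    ! ordinates, so that a convex parabola can interpolate the three points.
    program bracket_demo
        implicit none
        print *, "brackets a minimum: ", isBracketedMin(-1., 0.5, 3.)
    contains
        pure function isBracketedMin(xlow, xmid, xupp) result(bracketed)
            real, intent(in) :: xlow, xmid, xupp
            logical :: bracketed
            bracketed = xlow < xmid .and. xmid < xupp .and. &
                        f(xmid) < f(xlow) .and. f(xmid) < f(xupp)
        end function
        pure function f(x) result(y) ! hypothetical convex objective
            real, intent(in) :: x
            real :: y
            y = (x - 1.)**2
        end function
    end program bracket_demo

The maximum bracket test of item 4 is the mirror image, with both ordinate inequalities reversed.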
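
The next sketch shows only the golden-section backbone that the Brent method of item 5 falls back on; the full method additionally attempts a parabolic interpolation step per iteration, and this standalone example is not the library's getMinBrent or setMinBrent implementation:

    ! Golden-section search: each iteration shrinks the bracket [a, b]
    ! by the constant factor gr, giving guaranteed linear convergence
    ! for a unimodal objective.
    program golden_demo
        implicit none
        real, parameter :: gr = 0.6180339887 ! inverse golden ratio
        real :: a, b, c, d
        a = -1.; b = 3. ! initial bracket, e.g., from a bracketing routine
        do while (b - a > 1.e-6)
            c = b - gr * (b - a) ! interior probe points, c < d
            d = a + gr * (b - a)
            if (f(c) < f(d)) then
                b = d ! the minimum lies in [a, d]
            else
                a = c ! the minimum lies in [c, b]
            end if
        end do
        print *, "xmin ~", 0.5 * (a + b), " fmin ~", f(0.5 * (a + b))
    contains
        pure function f(x) result(y) ! hypothetical unimodal objective
            real, intent(in) :: x
            real :: y
            y = (x - 1.)**2
        end function
    end program golden_demo

A production implementation reuses one of the two interior function evaluations per iteration; the Brent refinement further replaces golden-section steps with parabolic interpolation steps whenever the fitted parabola behaves well.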
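
Finally, the following standalone sketch traces the Powell cycles described in item 6 for a hypothetical two-dimensional quadratic objective, using a crude fixed-window golden-section line search; it illustrates the direction bookkeeping only and is not the library's setMinPowell implementation:

    program powell_demo
        implicit none
        integer, parameter :: ndim = 2
        real :: x(ndim), dirs(ndim, ndim), alpha(ndim), disp(ndim)
        integer :: i, id, iter
        dirs = 0. ! start from the unit vectors along each coordinate axis
        do i = 1, ndim
            dirs(i, i) = 1.
        end do
        x = [3., -2.] ! hypothetical starting point
        do iter = 1, 50
            disp = 0.
            do i = 1, ndim ! line-minimize along each direction in turn
                alpha(i) = lineMin(x, dirs(:, i))
                x = x + alpha(i) * dirs(:, i)
                disp = disp + alpha(i) * dirs(:, i)
            end do
            if (norm2(disp) < 1.e-6) exit ! no significant improvement
            ! Delete the direction that contributed most to the move and
            ! append the aggregate displacement as the new search vector.
            id = maxloc(abs(alpha) * norm2(dirs, dim=1), dim=1)
            dirs(:, id : ndim - 1) = dirs(:, id + 1 : ndim)
            dirs(:, ndim) = disp / norm2(disp)
        end do
        print *, "xmin ~", x
    contains
        function lineMin(x0, s) result(amin) ! crude bi-directional line search
            real, intent(in) :: x0(ndim), s(ndim)
            real :: amin, a, b, c, d
            real, parameter :: gr = 0.6180339887
            a = -10.; b = 10. ! assumed fixed search window along s
            do while (b - a > 1.e-7)
                c = b - gr * (b - a)
                d = a + gr * (b - a)
                if (f(x0 + c * s) < f(x0 + d * s)) then
                    b = d
                else
                    a = c
                end if
            end do
            amin = 0.5 * (a + b)
        end function
        pure function f(p) result(y) ! hypothetical quadratic, minimum at (1, 2)
            real, intent(in) :: p(ndim)
            real :: y
            y = (p(1) - 1.)**2 + 2. * (p(2) - 2.)**2 + 0.5 * (p(1) - 1.) * (p(2) - 2.)
        end function
    end program powell_demo

For a quadratic objective, exact line minimizations make the retained directions mutually conjugate after a few cycles, which is the property that gives the conjugate direction method its name.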
Test:
test_pm_optimization
Todo:
High Priority: The Nelder–Mead algorithm must be implemented.


Final Remarks


If you believe this algorithm or its documentation can be improved, we appreciate your contribution and help in editing this page's documentation and source file on GitHub.
For details on the naming abbreviations, see this page.
For details on the naming conventions, see this page.
This software is distributed under the MIT license with additional terms outlined below.

  1. If you use any parts or concepts from this library to any extent, please acknowledge the usage by citing the relevant publications of the ParaMonte library.
  2. If you regenerate any parts/ideas from this library in a programming environment other than those currently supported by this ParaMonte library (i.e., other than C, C++, Fortran, MATLAB, Python, R), please also ask the end users to cite this original ParaMonte library.

This software is available to the public under a highly permissive license.
Help us justify its continued development and maintenance by acknowledging its benefit to society, distributing it, and contributing to it.

Author:
Amir Shahmoradi, Tuesday March 7, 2017, 3:50 AM, Institute for Computational Engineering and Sciences (ICES), The University of Texas at Austin

Variable Documentation

◆ MODULE_NAME

character(*, SK), parameter pm_optimization::MODULE_NAME = "@pm_optimization"

Definition at line 87 of file pm_optimization.F90.