Convex minimization, a subfield of optimization, studies the problem of minimizing convex functions (functions whose graph lies below the line segment joining any two points of the graph) over convex sets. The convexity property can make optimization in some sense "easier" than the general case: for example, any local minimum must be a global minimum.
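The defining chord inequality of convexity can be checked numerically on sampled points; a minimal sketch in Python, where the test functions and sample grids are illustrative choices rather than anything from the text:

```python
# Numerically check the defining inequality of convexity:
#   f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y)  for all t in [0, 1].
# The functions and sample points below are illustrative choices.

def is_convex_on_samples(f, xs, ts):
    """Return True if f satisfies the chord inequality on all sampled pairs."""
    for x in xs:
        for y in xs:
            for t in ts:
                z = t * x + (1 - t) * y
                # small tolerance guards against floating-point rounding
                if f(z) > t * f(x) + (1 - t) * f(y) + 1e-12:
                    return False
    return True

xs = [i / 10 for i in range(-20, 21)]   # sample points in [-2, 2]
ts = [j / 10 for j in range(11)]        # interpolation weights in [0, 1]

assert is_convex_on_samples(lambda v: v * v, xs, ts)        # x^2 is convex
assert not is_convex_on_samples(lambda v: -abs(v), xs, ts)  # -|x| is not
```

Sampling can only refute convexity, never prove it, but it is a quick sanity check on a candidate objective.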
Given a vector space X together with a convex, real-valued function f defined on a convex subset C of X (i.e., a set such that for any two points in it, every point of the straight line segment joining them also lies in the set), the problem is to find any point x* in C for which the number f(x*) is smallest, i.e., a point x* such that f(x*) ≤ f(x) for all x in C.
An optimization problem (also referred to as a mathematical programming problem or minimization problem) of finding some x* in X such that

f(x*) = min {f(x) : x ∈ X},

where X ⊆ R^n is the feasible set and f : R^n → R is the objective, is called convex if X is a closed convex set and f is convex on X.
Alternatively, an optimization problem of the form

minimize f(x)
subject to gi(x) ≤ 0, i = 1, ..., m

is called convex if the functions f, g1, ..., gm : R^n → R are all convex.
The following statements are true about the convex minimization problem:
- if a local minimum (a point minimizing the function over some neighbourhood, which need not be a global one) exists, then it is a global minimum (a point where the function attains its smallest value over its entire domain).
- the set of all (global) minima is convex.
- for each strictly convex function, if the function has a minimum, then the minimum is unique.
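Because every local minimum is global, a plain descent method cannot get trapped; a sketch, where the test function f(x) = (x − 3)², the step size, and the iteration count are all illustrative choices:

```python
# For a convex function, plain gradient descent reaches the global minimum
# from any starting point, since no spurious local minima exist.
# Test function, step size, and iteration count are illustrative choices.

def gradient_descent(grad, x0, step=0.1, iters=200):
    x = x0
    for _ in range(iters):
        x -= step * grad(x)
    return x

# f(x) = (x - 3)^2 is convex with unique global minimum at x = 3.
grad_f = lambda x: 2 * (x - 3)

# Widely separated starting points all converge to the same minimizer.
for start in (-100.0, 0.0, 50.0):
    assert abs(gradient_descent(grad_f, start) - 3.0) < 1e-6
```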
Standard form consists of the following three parts:
- A convex function f(x) : R^n → R to be minimized over the variable x;
- Inequality constraints of the form gi(x) ≤ 0, where the functions gi are convex;
- Equality constraints of the form hi(x) = 0, where the functions hi are affine, i.e., linear up to a constant shift. In practice, the terms "linear"
and "affine" are often used interchangeably. Such constraints can be expressed
in the form hi(x) = ai·x + bi, where ai is a column-vector and bi
a real number.
A convex minimization problem is thus written as

minimize f(x)
subject to gi(x) ≤ 0, i = 1, ..., m
h_i(x) = 0, i = 1, ..., p.
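A small standard-form instance can be solved with projected gradient descent when projection onto the feasible set is cheap; the problem data, step size, and iteration count below are illustrative choices, not part of the text:

```python
# Solve a small standard-form instance by projected gradient descent:
#   minimize   f(x, y) = (x - 2)^2 + (y - 3)^2
#   subject to g(x, y) = x + y - 3 <= 0   (a halfspace, so projection is easy).
# Problem data, step size, and iteration count are illustrative choices.

def project_halfspace(v, a, b):
    """Project v onto {u : a.u <= b}, using the closed form for a halfspace."""
    viol = sum(ai * vi for ai, vi in zip(a, v)) - b
    if viol <= 0:
        return v                      # already feasible
    norm_sq = sum(ai * ai for ai in a)
    return [vi - viol * ai / norm_sq for ai, vi in zip(a, v)]

def solve(step=0.1, iters=500):
    x = [0.0, 0.0]
    for _ in range(iters):
        grad = [2 * (x[0] - 2), 2 * (x[1] - 3)]          # gradient of f
        x = [xi - step * gi for xi, gi in zip(x, grad)]  # descent step
        x = project_halfspace(x, a=[1.0, 1.0], b=3.0)    # restore feasibility
    return x

x_star = solve()
# The optimum is (1, 2): the unconstrained minimizer (2, 3) violates the
# constraint, so the solution lies on the boundary x + y = 3.
assert abs(x_star[0] - 1.0) < 1e-4 and abs(x_star[1] - 2.0) < 1e-4
```

For general convex constraints one would use a modeling tool or an interior-point solver; the point here is only that convexity makes a simple first-order scheme globally convergent.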
Note that every equality constraint h(x) = 0 can be equivalently replaced by the pair of inequality constraints h(x) ≤ 0 and −h(x) ≤ 0. Following from this fact, it is easy to understand why h(x) = 0 has to be affine as opposed to merely convex. If h is convex, then the constraint h(x) ≤ 0 is convex, but the constraint −h(x) ≤ 0 is not, because −h is concave. Therefore, the only way for h(x) = 0 to describe a convex feasible set is for h to be affine.
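The obstruction can be seen concretely with the convex but non-affine choice h(x) = x² − 1 (an illustrative example): the set h(x) ≤ 0 is convex, while −h(x) ≤ 0 is not:

```python
# With the convex, non-affine h(x) = x^2 - 1 (illustrative choice):
#   {x : h(x) <= 0}   is the interval [-1, 1]        -> convex
#   {x : -h(x) <= 0}  is (-inf, -1] U [1, +inf)      -> not convex
# A convex set must contain the midpoint of any two of its points.

h = lambda x: x * x - 1

in_set_h = lambda x: h(x) <= 0        # membership in {h <= 0}
in_set_neg_h = lambda x: -h(x) <= 0   # membership in {-h <= 0}

# -1 and 1 both satisfy -h(x) <= 0, but their midpoint 0 does not:
assert in_set_neg_h(-1) and in_set_neg_h(1) and not in_set_neg_h(0)

# whereas {h <= 0} contains the midpoint of any two sampled members:
pts = [i / 10 for i in range(-10, 11)]   # sample of [-1, 1]
assert all(in_set_h((p + q) / 2) for p in pts for q in pts)
```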
Consider a convex minimization problem given in standard form by a cost function f(x) and inequality constraints gi(x) ≤ 0 for 1 ≤ i ≤ m. Then the domain X is:

X = {x : g1(x) ≤ 0, ..., gm(x) ≤ 0}.

The Lagrangian function for the problem is

L(x, λ0, λ1, ..., λm) = λ0 f(x) + λ1 g1(x) + ... + λm gm(x).
For each point x in X that minimizes f over X, there exist real numbers λ0, ..., λm, called Lagrange multipliers, that satisfy these conditions simultaneously:
1. x minimizes L(y, λ0, λ1, ..., λm) over all y in X,
2. λ0 ≥ 0, λ1 ≥ 0, ..., λm ≥ 0, with at least one λk > 0,
3. λ1 g1(x) = 0, ..., λm gm(x) = 0 (complementary slackness).
If there exists a "strictly feasible point", i.e., a point z satisfying

g1(z) < 0, ..., gm(z) < 0,

then the statement above can be strengthened to assert that λ0 = 1.
Conversely, if some x in X satisfies 1-3 for
scalars λ0, ..., λm
with λ0 = 1, then x is certain to minimize f over X.
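Conditions 1-3 can be verified directly on a one-variable instance; the problem below (minimize f(x) = x² subject to g1(x) = 1 − x ≤ 0, minimizer x = 1 with λ0 = 1, λ1 = 2) is an illustrative example, not from the text:

```python
# Verify conditions 1-3 for the illustrative problem
#   minimize f(x) = x^2   subject to   g1(x) = 1 - x <= 0   (i.e. x >= 1),
# whose minimizer is x = 1 with multipliers lam0 = 1, lam1 = 2.

f = lambda x: x * x
g1 = lambda x: 1 - x

lam0, lam1 = 1.0, 2.0
x_star = 1.0

# Lagrangian with the multipliers fixed; here L(y) = (y - 1)^2 + 1.
L = lambda y: lam0 * f(y) + lam1 * g1(y)

# 1. x_star minimizes L over (a sampled grid standing in for) all y:
ys = [i / 100 for i in range(-500, 501)]
assert all(L(x_star) <= L(y) + 1e-12 for y in ys)

# 2. nonnegativity, with at least one multiplier positive:
assert lam0 >= 0 and lam1 >= 0 and max(lam0, lam1) > 0

# 3. complementary slackness: the constraint is active, so g1(x_star) = 0.
assert lam1 * g1(x_star) == 0
```

Note that the constraint has a strictly feasible point (e.g. z = 2, with g1(z) = −1 < 0), which is why λ0 = 1 can be taken here.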
Hiriart-Urruty, Jean-Baptiste; Lemaréchal, Claude (1996). Convex analysis and minimization algorithms: Fundamentals.
Ben-Tal, Aharon; Nemirovskiĭ, Arkadiĭ Semenovich (2001). Lectures on modern convex optimization: analysis, algorithms, and engineering applications. pp. 335–336.
Boyd & Vandenberghe, p. 7.
For methods for convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle methods) and the textbooks by Ruszczyński and by Boyd and Vandenberghe (interior-point methods). Hiriart-Urruty & Lemaréchal (1993, Example XV.1.1.2, p. 277) discuss a "bad guy" constructed by Arkadi Nemirovskii.
In theory, quasiconvex programming and convex programming problems can be solved in a reasonable amount of time, where the number of iterations grows like a polynomial in the dimension of the problem (and in the reciprocal of the approximation error tolerated):
Kiwiel, Krzysztof C. (2001). "Convergence and efficiency of subgradient methods for quasiconvex minimization". Mathematical Programming (Series A) (Berlin, Heidelberg: Springer) 90 (1): 1–25. MR 1819784. Kiwiel acknowledges that Nemirovskii and Judin first established that quasiconvex minimization problems can be solved efficiently.
Convex maximization is mentioned in the subsection on convex optimization in this textbook: Ulrich Faigle, Walter Kern, and Georg Still. Algorithmic principles of mathematical programming. Springer-Verlag. Texts in Mathematics. Chapter 10.2, Subsection "Convex optimization", pages 205–206.
Theorem 32.1 in Rockafellar's Convex Analysis states this maximum principle for extended real-valued functions.
Sahni, S. "Computationally related problems". SIAM Journal on Computing, 3: 262–279, 1974.
Pardalos, Panos M.; Vavasis, Stephen A. "Quadratic programming with one negative eigenvalue is NP-hard". Journal of Global Optimization, Volume 1, Number 1, 1991, pp. 15–22.