Optimization (mathematics)

From Wikipedia, the free encyclopedia
In mathematics, the term optimization, or mathematical programming, refers to the study of problems
in which one seeks to minimize or maximize a real function by systematically choosing the values of
real or integer variables from within an allowed set. This problem can be represented in the following
way:
Given: a function f : A → R from some set A to the real numbers
Sought: an element x0 in A such that f(x0) ≤ f(x) for all x in A ("minimization") or such that f(x0) ≥
f(x) for all x in A ("maximization").
Such a formulation is called an optimization problem or a mathematical programming problem (a term
not directly related to computer programming, but still in use, for example, in linear programming; see History below). Many real-world and theoretical problems may be modeled in this general
framework.
Typically, A is some subset of the Euclidean space R^n, often specified by a set of constraints,
equalities or inequalities that the members of A have to satisfy. The elements of A are called feasible
solutions. The function f is called an objective function, or cost function. A feasible solution that
minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution.
The domain A of f is called the search space, while the elements of A are called candidate solutions or
feasible solutions.
Generally, when the feasible region or the objective function of the problem does not present
convexity, there may be several local minima and maxima, where a local minimum x* is defined as a
point for which there exists some δ > 0 so that for all x such that ‖x − x*‖ ≤ δ, the expression
f(x*) ≤ f(x)
holds; that is to say, on some region around x* all of the function values are greater than or equal to
the value at that point. Local maxima are defined similarly.
A large number of algorithms proposed for solving non-convex problems – including the majority of
commercially available solvers – are not capable of making a distinction between locally optimal
solutions and rigorously (globally) optimal solutions, and will treat the former as actual solutions to the original
problem. The branch of applied mathematics and numerical analysis that is concerned with the
development of deterministic algorithms that are capable of guaranteeing convergence in finite time to
the actual optimal solution of a non-convex problem is called global optimization.
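As an illustration of this distinction, the following sketch (which assumes Python with SciPy, not referenced by the article itself) runs a local solver from two different starting points on a function with two local minima; only one of the resulting minimizers is the global one.

    # Local solvers can return different minimizers depending on the starting point.
    from scipy.optimize import minimize

    f = lambda x: x[0]**4 - 3*x[0]**2 + x[0]   # has two local minima, only one global
    print(minimize(f, x0=[2.0]).x)    # converges to the local minimum near x ≈ 1.13
    print(minimize(f, x0=[-2.0]).x)   # converges to the global minimum near x ≈ -1.30

A global optimization method would, by contrast, be expected to return the minimizer near x ≈ −1.30 regardless of the starting point.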
Notation
Optimization problems are often expressed with special notation. Here are some examples:
    min_{x ∈ R} (x² + 1)
This asks for the minimum value for the objective function x² + 1, where x ranges over the real
numbers R. The minimum value in this case is 1, occurring at x = 0.
    max_{x ∈ R} 2x
This asks for the maximum value for the objective function 2x, where x ranges over the reals. In this
case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or
"undefined".
    arg min_{x ∈ (−∞, −1]} (x² + 1)
This asks for the value(s) of x in the interval (−∞, −1] that minimize the objective function x² + 1.
(The actual minimum value of that function does not matter.) In this case, the answer is x = −1.
    arg max_{x ∈ [−5, 5], y ∈ R} x·cos(y)
This asks for the (x, y) pair(s) that maximize the value of the objective function x·cos(y), with the
added constraint that x lies in the interval [−5, 5]. (Again, the actual maximum value of the expression
does not matter.) In this case, the solutions are the pairs of the form (5, 2πk) and (−5, (2k + 1)π),
where k ranges over all integers.
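The first and third examples can also be checked numerically. The following sketch assumes Python with SciPy (not mentioned in the article) and uses a finite lower bound as a stand-in for −∞ in the constrained case.

    from scipy.optimize import minimize_scalar

    # min over all real x of x² + 1  ->  value 1 at x = 0
    res = minimize_scalar(lambda x: x**2 + 1)
    print(res.x, res.fun)     # approximately 0.0 and 1.0

    # arg min of x² + 1 over x ≤ -1 (a finite lower bound replaces -∞ for the solver)
    res = minimize_scalar(lambda x: x**2 + 1, bounds=(-10, -1), method='bounded')
    print(res.x)              # approximately -1.0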
Major subfields
• Linear programming studies the case in which the objective function f is linear and the set A is
specified using only linear equalities and inequalities. Such a set is called a polyhedron, or a
polytope if it is bounded (a small numerical sketch appears after this list).
• Integer programming studies linear programs in which some or all variables are constrained to take
on integer values.
• Quadratic programming allows the objective function to have quadratic terms, while the set A must
be specified with linear equalities and inequalities.
• Nonlinear programming studies the general case in which the objective function or the constraints
or both contain nonlinear parts.
• Convex programming studies the case when the objective function is convex and the constraints, if
any, form a convex set. This can be viewed as a particular case of nonlinear programming or as a
generalization of linear or convex quadratic programming.
• Semidefinite programming (SDP) is a subfield of convex optimization where the underlying
variables are semidefinite matrices. It is a generalization of linear and convex quadratic
programming.
• Second-order cone programming (SOCP).
• Hyperbolic programming.
• Stochastic programming studies the case in which some of the constraints or parameters depend on
random variables.
• Robust programming is, like stochastic programming, an attempt to capture uncertainty in the data
underlying the optimization problem. This is not done through the use of random variables;
instead, the problem is solved taking into account inaccuracies in the input data.
• Dynamic programming studies the case in which the optimization strategy is based on splitting the
problem into smaller subproblems.
• Combinatorial optimization is concerned with problems where the set of feasible solutions is
discrete or can be reduced to a discrete one.
• Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of
an infinite-dimensional space, such as a space of functions.
• Constraint satisfaction studies the case in which the objective function f is constant (this is used in
artificial intelligence, particularly in automated reasoning).
• Disjunctive programming is used where at least one constraint must be satisfied but not all. It is of
particular use in scheduling.
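To make the first subfield above concrete, here is a minimal linear programming sketch. It assumes Python with SciPy, which the article does not itself mention, and solves a tiny two-variable problem.

    from scipy.optimize import linprog

    # Maximize x + 2y subject to x + y <= 4 and x, y >= 0.
    # linprog minimizes, so the objective coefficients are negated.
    res = linprog(c=[-1, -2], A_ub=[[1, 1]], b_ub=[4])
    print(res.x)   # approximately [0, 4], i.e. the optimum is at x = 0, y = 4

By default linprog treats all decision variables as non-negative, which supplies the x, y ≥ 0 constraints here.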
Techniques
For twice-differentiable functions, unconstrained problems can be solved by finding the points where
the gradient of the objective function is zero (that is, the stationary points) and using the Hessian
matrix to classify the type of each point. If the Hessian is positive definite, the point is a local
minimum; if negative definite, a local maximum; and if indefinite, some kind of saddle point.
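The following sketch, which assumes Python with SymPy (not part of the article), carries out this procedure on an arbitrary example function: it finds the stationary points from the gradient and classifies them by the eigenvalues of the Hessian.

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    f = x**2 + y**2 - x*y - 4*x                  # an arbitrary twice-differentiable example

    grad = [sp.diff(f, v) for v in (x, y)]       # stationary points: gradient = 0
    stationary = sp.solve(grad, (x, y), dict=True)
    H = sp.hessian(f, (x, y))

    for pt in stationary:
        eigs = H.subs(pt).eigenvals()
        # all eigenvalues > 0: local minimum; all < 0: local maximum; mixed signs: saddle point
        print(pt, eigs)

Here the Hessian has eigenvalues 1 and 3, so the single stationary point (x, y) = (8/3, 4/3) is a local (in fact global) minimum.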
However, the existence of derivatives is not always assumed, and many methods have been devised for specific
situations. The basic classes of methods, based on the smoothness of the objective function, are:
• Combinatorial methods
• Derivative-free methods
• First-order methods
• Second-order methods
Actual methods falling somewhere among the categories above include:
• Gradient descent, aka steepest descent or steepest ascent (a minimal sketch appears after this list)
• Nelder–Mead method, aka the Amoeba method
• Subgradient method – similar to the gradient method, for cases where there are no gradients
• Simplex method
• Ellipsoid method
• Bundle methods
• Newton's method
• Quasi-Newton methods
• Interior point methods
• Conjugate gradient method
• Line search – a technique for one-dimensional optimization, usually used as a subroutine for other,
more general techniques.
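As a concrete instance of the first entry in the list above, here is a minimal gradient-descent sketch. It assumes a hand-coded gradient and a fixed step size; practical implementations would instead use a line search (also listed above) or an adaptive step.

    import numpy as np

    def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:   # stop at an (approximate) stationary point
                break
            x = x - step * g              # move against the gradient
        return x

    # Minimize f(x, y) = (x - 1)² + 2(y + 3)², whose gradient is coded directly.
    grad_f = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 3)])
    print(gradient_descent(grad_f, [0.0, 0.0]))   # approaches (1, -3)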
Should the objective function be convex over the region of interest, then any local minimum will also
be a global minimum. There exist robust, fast numerical techniques for optimizing twice differentiable
convex functions.
Constrained problems can often be transformed into unconstrained problems with the help of
Lagrange multipliers.
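For example, to minimize x² + y² subject to x + y = 1, one forms the Lagrangian L(x, y, λ) = x² + y² − λ(x + y − 1) and looks for points where all of its partial derivatives vanish. The sketch below, which assumes Python with SymPy (an addition, not part of the article), solves those stationarity conditions.

    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)
    L = x**2 + y**2 - lam * (x + y - 1)            # the Lagrangian

    # Setting all partial derivatives of L to zero recovers the constrained optimum.
    conditions = [sp.diff(L, v) for v in (x, y, lam)]
    print(sp.solve(conditions, (x, y, lam), dict=True))   # x = y = 1/2, lambda = 1

The constrained minimum is therefore attained at x = y = 1/2, with objective value 1/2.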
Here are a few other popular methods:
• Hill climbing
• Simulated annealing
• Quantum annealing
• Tabu search
• Beam search
• Genetic algorithms
• Ant colony optimization
• Evolution strategy
• Stochastic tunneling
• Differential evolution
• Particle swarm optimization
Uses
Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require
mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve
an ordinary differential equation on a constraint manifold; the constraints are various nonlinear
geometric constraints such as "these two points must always coincide", "this surface must not
penetrate any other", or "this point must always lie somewhere on this curve". Also, the problem of
computing contact forces can be posed as a linear complementarity problem, which can also be
viewed as a quadratic programming (QP) problem.
Many design problems can also be expressed as optimization programs. This application is called
design optimization. One recent and growing subset of this field is multidisciplinary design
optimization, which, while useful in many problems, has in particular been applied to aerospace
engineering problems.
Mainstream economics also relies heavily on mathematical programming. Consumers and firms are
assumed to maximize their utility/profit. Also, agents are most frequently assumed to be risk-averse,
thereby wishing to minimize whatever risk they might be exposed to. Asset prices are also explained
using optimization, though the underlying theory is more complicated than simple utility or profit
maximization. Trade theory also uses optimization to explain trade patterns between nations.
Another field that uses optimization techniques extensively is operations research.
History
The first optimization technique, known as steepest descent, goes back to Gauss. Historically,
the first term to be introduced was linear programming, which was invented by George Dantzig in the
1940s. The term programming in this context does not refer to computer programming (although
computers are nowadays used extensively to solve mathematical programs). Instead, the term comes
from the use of program by the United States military to refer to proposed training and logistics
schedules, which were the problems that Dantzig was studying at the time. (Additionally, later on, the
use of the term "programming" was apparently important for receiving government funding, as it was
associated with high-technology research areas that were considered important.)