

4. Optimizer

In this work the term ``optimization'' denotes the search for an optimal value of the objective function within the allowed ranges of the variables.

An optimization problem can be expressed as a maximization or a minimization task, for example, maximizing the profit or minimizing the cost. The objective function $f(\vec{x})$ indicates whether a particular set of input parameters (a vector $\vec{x}_i$) yields a good or a bad result of the analyzed model function.

It is common practice to formulate the optimization problem as a minimization of the objective function $f$, which depends on the variables $\vec{x}$:

\begin{displaymath}
\min_{\vec{x} \in \mathbb{R}^{n}} f(\vec{x})
\end{displaymath} (4.1)

$f(\vec{x})$ is also called the target function or cost function. If the dimension of $\vec{x}$ is one (i.e., $\vec{x}$ is a scalar), the problem is a univariate optimization problem; otherwise it is a multivariate one.
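
As a brief illustration (a minimal sketch, assuming Python with the scipy library; the quadratic objective and the starting point are chosen freely for this example), an unconstrained multivariate minimization in the sense of (4.1) can be carried out as follows:

\begin{verbatim}
# Sketch: unconstrained minimization of a multivariate
# objective f(x) = (x_1 - 1)^2 + (x_2 + 2)^2 (example choice).
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Objective (cost) function; smaller values are better.
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

x0 = np.zeros(2)       # starting point (assumed)
res = minimize(f, x0)  # default quasi-Newton (BFGS) search
print(res.x)           # approx. [1, -2], the minimizer
\end{verbatim}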

When no additional conditions are imposed on the input parameter vector, the problem is an unconstrained optimization problem and any $\vec{x} \in \mathbb{R}^{n}$ is a feasible point. In constrained optimization problems equality or inequality constraints reduce the feasible input parameter space. Such problems can be expressed as


$\displaystyle \min f(\vec{x})$ (4.2)
$\displaystyle h_i(\vec{x}) = 0 \quad i = 1, \ldots , m$ (4.3)
$\displaystyle g_j(\vec{x}) \ge 0 \quad j = 1, \ldots , p$ (4.4)

with $m$ equality constraints $h_i$ and $p$ inequality constraints $g_j$.
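
The constrained form (4.2)-(4.4) can be sketched in the same assumed setting (again Python with scipy; the concrete objective and the constraint functions $h$ and $g$ below are arbitrary examples; scipy's inequality convention $g_j(\vec{x}) \ge 0$ matches (4.4)):

\begin{verbatim}
# Sketch: minimize f subject to one equality constraint
# h(x) = x_1 + x_2 - 1 = 0 and one inequality constraint
# g(x) = x_1 >= 0 (both chosen only for illustration).
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2  # objective to minimize
h = lambda x: x[0] + x[1] - 1.0  # equality: h(x) = 0
g = lambda x: x[0]               # inequality: g(x) >= 0

res = minimize(f, x0=np.array([2.0, 2.0]), method='SLSQP',
               constraints=[{'type': 'eq',   'fun': h},
                            {'type': 'ineq', 'fun': g}])
print(res.x)  # approx. [0.5, 0.5]
\end{verbatim}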

An extremum is a global optimum if it is truly the highest or lowest function value over the whole feasible region, as opposed to a local optimum, which is the highest or lowest function value only within a finite neighborhood.
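
This distinction can be demonstrated with a minimal sketch (same assumed setting; the quartic function below is an arbitrary example with one local and one global minimum): a local search converges to a different extremum depending on its starting point.

\begin{verbatim}
# Sketch: a local search finds different extrema depending on
# the starting point; f(x) = x^4 - 3x^2 + x has two minima.
from scipy.optimize import minimize

f = lambda x: x[0]**4 - 3.0*x[0]**2 + x[0]

for x0 in (-1.0, 1.0):  # two starting points
    res = minimize(f, [x0])
    print(x0, '->', res.x[0], f(res.x))
# Starting at -1 reaches the global minimum (x ~ -1.3);
# starting at +1 only the local minimum (x ~ +1.1).
\end{verbatim}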




