Depending on the constraints, a suitable optimization strategy has to be chosen, one for which the desired constraints can be described mathematically. In most cases, the input parameters are bounded and thus have a lower and an upper bound. These bounds can be made even more restrictive by applying constraint functions that limit the allowed set of parameters within the hyper-cube of the bounded parameter space.
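The combination of box bounds and additional constraint functions can be sketched as follows. This is an illustrative example only; the function names and the convention g(x) <= 0 for feasibility are assumptions, not part of the original text.

```python
def within_bounds(x, lower, upper):
    """Check the box constraints lower[i] <= x[i] <= upper[i]."""
    return all(l <= xi <= u for xi, l, u in zip(x, lower, upper))

def feasible(x, lower, upper, constraints):
    """A parameter set is feasible if it lies inside the bound
    hyper-cube and satisfies every constraint function g(x) <= 0."""
    return within_bounds(x, lower, upper) and all(g(x) <= 0 for g in constraints)

# Example: restrict the unit square further to the region below the diagonal.
lower, upper = [0.0, 0.0], [1.0, 1.0]
constraints = [lambda x: x[0] + x[1] - 1.0]   # g(x) <= 0
```

Here the constraint function cuts the feasible region down to a triangle inside the bound square, illustrating how constraint functions restrict the hyper-cube further.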

A-priori constraints can be checked in advance of the simulation run, i.e.,
before time is wasted on a parameter set that is known to be infeasible.
However, certain constraints can only be checked at the end of the simulation.
If, for instance, a constraint function contains a simulation result, e.g. the
electrical conductivity, then an obvious constraint is to demand that this
quantity remains greater than zero^{4.10}.
Without an application interface to the simulator, however, the optimizer can
verify this quantity only after the simulation has finished.
A possible way to avoid a potentially long waiting period is to approximate the
constrained quantity in advance of the simulation. This method is in fact a
response surface approach for a single quantity.
However, wrong estimates of such a-priori approximations may exclude
significant parts of the search space and may result in failing to find the
global optimum.
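A minimal sketch of such an a-priori approximation is given below: a cheap surrogate model (here a one-dimensional least-squares line fitted to previous simulation results) predicts the constrained quantity, e.g. the conductivity, before the expensive run. All names and the safety margin are illustrative assumptions; the margin reflects the warning above that a wrong surrogate can cut off feasible parameter sets.

```python
def fit_line(xs, ys):
    """Least-squares line through previous (parameter, result) samples;
    a stand-in for a more elaborate response surface model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def probably_feasible(surrogate, x, margin=0.0):
    """Skip the simulation only if the surrogate predicts a clear
    violation of quantity > 0; a negative margin makes the pre-check
    more conservative to avoid losing feasible search space."""
    return surrogate(x) > margin
```

In practice the margin would be chosen conservatively, since a rejected parameter set is never simulated and a surrogate error there can never be corrected.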

Thus, it is important to determine the lower and the upper bounds of the input
parameters carefully: too wide parameter ranges can cause a considerably
larger number of parameter evaluations, while too narrow bounds may overlook
feasible parameter constellations that could lead to the optimum.
Since not all optimization strategies provide an appropriate treatment of
constraint functions, many optimization frameworks offer the possibility to
define barrier or penalty functions, which yields an optimization setup that is
valid for several optimization strategies^{4.11}.
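The penalty approach mentioned above can be sketched as follows: a quadratic penalty term converts the constrained problem into an unconstrained one that any strategy can minimize directly. The function names and the fixed penalty weight are illustrative assumptions.

```python
def penalized(objective, constraints, weight=1e3):
    """Return an unconstrained objective that adds
    weight * sum(max(0, g(x))^2) for constraint functions g(x) <= 0,
    so violations are penalized but feasible points are unaffected."""
    def f(x):
        penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return objective(x) + weight * penalty
    return f

# Example: minimize (x - 2)^2 subject to x <= 1.
objective = lambda x: (x[0] - 2.0) ** 2
g = lambda x: x[0] - 1.0          # feasible where g(x) <= 0
f = penalized(objective, [g])
```

A barrier function would instead grow towards infinity as the boundary of the feasible region is approached from the inside, keeping the iterates strictly feasible, whereas the penalty above permits, but punishes, violations.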

Stefan Holzer 2007-11-19