Next: 4.3.1 Mathematical Background Up: 4. Optimizer Previous: 4.2 Genetic Algorithms


4.3 Least-Squares Problems

In least-squares problems the target function of the optimization problem is built from the squared two-norm (see Appendix B.3) of the residual vector. This residual is usually obtained by comparing the measured and the calculated data.

Figure 4.1: Structure of a least-squares problem.
[Figure: block diagram of the model $\mathcal{M}$ mapping the parameter vector $\vec{p}$ to simulated data, compared with measured data to yield the residual vector $\vec{f}$.]

The model $\mathcal{M}$ in Figure 4.1 determines a set of simulated data $\vec{y}$ from a parameter vector $\vec{p}$, which is then compared with the measured data. The model can consist of one or several simulation steps with additional data-conversion and result-extraction steps. The parameter vector $\vec{p}$ contains the various simulation parameters used during the evaluation of the model $\mathcal{M}$. The resulting data vector $\vec{y} = \mathcal{M}(\vec{p})$ is the extracted result of the performed simulations. The residual vector $\vec{f}$ is calculated by subtracting the measured data $\vec{d}$ from the result vector $\vec{y}$.
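The structure just described can be sketched in a few lines of code. This is a minimal illustration, not part of the original text: the exponential model, the time points, and all variable names are assumptions chosen only to make the residual computation concrete.

```python
import numpy as np

# Hypothetical model M: maps a parameter vector p to simulated data y.
# Here we assume a simple exponential model y_i = p0 * exp(p1 * t_i).
def model(p, t):
    return p[0] * np.exp(p[1] * t)

t = np.linspace(0.0, 1.0, 5)           # data points of the measurement
d = model(np.array([2.0, -1.0]), t)    # "measured" data (synthetic here)

p = np.array([1.5, -0.8])              # current parameter vector
y = model(p, t)                        # simulated data y = M(p)
f = y - d                              # residual vector f = y - d
objective = np.dot(f, f)               # squared two-norm ||f||_2^2
```

The optimizer's task is then to choose $\vec{p}$ so that `objective` becomes minimal; for the exact parameters the residual vector vanishes.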

In general the model $\mathcal{M}$ constitutes a nonlinear function and the optimization problem is of nonlinear least-squares type4.3. Normally the extracted data vector $\vec{y}$ depends on the available measured data: the simulations have to be performed at the same data points as the measurements, or general results have to be interpolated by the tool calculating the residuals. These dependencies are not shown in Figure 4.1. Typical nonlinear least-squares optimization problems occur in data fitting, calibration, and inverse modeling tasks.
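As a hedged sketch of such a data-fitting task, the following solves a small nonlinear least-squares problem with a damped Gauss-Newton iteration. The exponential model, the synthetic data, the iteration count, and the step-halving safeguard are all illustrative assumptions, not the method prescribed by this chapter.

```python
import numpy as np

# Synthetic "measured" data for an assumed model y_i = p0 * exp(p1 * t_i)
t = np.linspace(0.0, 2.0, 20)
d = 3.0 * np.exp(-1.5 * t)

def residual(p):
    return p[0] * np.exp(p[1] * t) - d        # f(p) = M(p) - d

def jacobian(p):
    e = np.exp(p[1] * t)
    return np.column_stack((e, p[0] * t * e))  # df_i/dp_j

p = np.array([1.0, -1.0])                      # initial guess
for _ in range(50):
    f, J = residual(p), jacobian(p)
    # Gauss-Newton step: minimize ||J dp + f||_2 for the update dp
    dp, *_ = np.linalg.lstsq(J, -f, rcond=None)
    # Simple step halving so the residual norm never increases
    step = 1.0
    while np.linalg.norm(residual(p + step * dp)) > np.linalg.norm(f) and step > 1e-12:
        step *= 0.5
    p = p + step * dp
```

On this noiseless example the iteration recovers the generating parameters; the mathematical background of such methods is the subject of the following section.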



Footnotes

... type4.3
For linear model functions the problem is a linear least-squares problem, which is described in Section 3.2.2.




R. Plasun