The LEVENBERG-MARQUARDT algorithm [242] is an efficient method for solving non-linear least squares problems [243]. It is therefore well suited for complex inverse modeling tasks, especially in TCAD applications, where the aim of the LEVENBERG-MARQUARDT algorithm is to minimize a twice differentiable function

$\displaystyle f({\mathbf{{x}}}):\mathbb{R}^n\rightarrow\mathbb{R}.$ (4.22)
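In the inverse modeling context, $f$ typically arises as a sum of squared deviations between measured and simulated quantities; a generic least squares form (given here only for illustration, with $y_i$ denoting measured values and $g_i({\mathbf{{x}}})$ the corresponding simulator responses) reads

$\displaystyle f({\mathbf{{x}}}) = \sum_{i=1}^{m} \bigl( y_i - g_i({\mathbf{{x}}}) \bigr)^2.$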

If the original objective function is vector-valued, an additional norm has to be applied to map the vector to a scalar-valued quantity. The second derivative of the function $f$ is described by its HESSian matrix. Because the optimization tasks for TCAD problems cannot be described analytically, the derivatives have to be calculated numerically at each evaluation point. Since there is no guarantee that the HESSian $H(f,{\mathbf{{x}}})$ is positive definite for non-quadratic forms, the algorithm might search in the wrong direction. Therefore, a correction term can be introduced to overcome this problem [242]:

$\displaystyle H^k({\mathbf{{x}}}^k) = H(f,{\mathbf{{x}}}^k) + \nu^k \tilde{I}.$ (4.23)
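As an illustration, positive definiteness can be tested with a CHOLESKY factorization, which succeeds exactly for symmetric positive definite matrices. The following minimal sketch, assuming NumPy (the function name and all parameters are illustrative and not part of the original formulation), applies the correction term of (4.23) and increases $\nu$ as described below until the damped HESSian becomes positive definite:

\begin{verbatim}
import numpy as np

def dampen_hessian(H, nu0=1e-3, factor=10.0, max_tries=50):
    """Add nu * I to H (eq. 4.23) until the result is positive definite."""
    n = H.shape[0]
    nu = nu0
    for _ in range(max_tries):
        Hk = H + nu * np.eye(n)
        try:
            # The Cholesky factorization succeeds only for symmetric
            # positive definite matrices, so success is the test itself.
            np.linalg.cholesky(Hk)
            return Hk, nu
        except np.linalg.LinAlgError:
            nu *= factor  # user-defined increase factor
    raise RuntimeError("Hessian could not be made positive definite")
\end{verbatim}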

If $H^k({\mathbf{{x}}}^k)$ is still not positive definite, the factor $\nu^k$ is increased by a user-defined factor and the test is repeated. Since $H^k({\mathbf{{x}}}^k)$ is then positive definite by construction, the next point ${\mathbf{{x}}}^{k+1}$ can be calculated by

$\displaystyle {\mathbf{{x}}}^{k+1} = {\mathbf{{x}}}^k - \left( H^k({\mathbf{{x}}}^k) \right)^{-1} \nabla f({\mathbf{{x}}}^k).$ (4.24)

However, if the last minimization step yields no improvement ($f({\mathbf{{x}}}^{k+1}) > f({\mathbf{{x}}}^k)$), the factor $\nu^k$ has to be increased again and the previously described steps have to be repeated.
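The complete iteration can then be summarized in a short loop. The sketch below, again assuming NumPy and reusing the hypothetical dampen_hessian helper from the previous sketch (all names and tolerances are illustrative), combines the damping of (4.23), the step of (4.24), and the re-damping on failure; in a TCAD setting the callbacks grad and hess would typically be built from finite differences of simulator responses rather than analytic expressions:

\begin{verbatim}
import numpy as np

def damped_newton(f, grad, hess, x0, nu0=1e-3, factor=10.0,
                  tol=1e-8, max_iter=100):
    """Minimize f with the damped Newton step of eq. (4.24)."""
    x, nu = np.asarray(x0, dtype=float), nu0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # gradient vanishes: converged
            break
        Hk, nu = dampen_hessian(hess(x), nu, factor)  # eq. (4.23)
        x_new = x - np.linalg.solve(Hk, g)            # eq. (4.24)
        if f(x_new) > f(x):   # no improvement: increase damping
            nu *= factor      # and recompute the step with larger nu
            continue
        nu = max(nu / factor, nu0)  # successful step: relax damping
        x = x_new
    return x

# Usage on a simple quadratic test function:
f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2
g = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])
h = lambda x: np.diag([2.0, 4.0])
print(damped_newton(f, g, h, x0=[0.0, 0.0]))  # approx. [1.0, -0.5]
\end{verbatim}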

This method is more robust than the GAUSS-NEWTON method [244] and in general reaches an optimum in fewer iterations. However, if the initial guess of ${\mathbf{{x}}}$ is already close to the optimal value, the convergence can be slower than that of the GAUSS-NEWTON method.
