
4.1.1 Gradient Based Optimization Methods

The majority of optimization algorithms are based on methods that use the gradient and higher derivatives of the target function. These methods approximate the target function $f(\vec{x})$ by a Taylor series


\begin{displaymath}
f(\vec{x}_0 + \vec{p}) \approx
f(\vec{x}_0) + {\mathop{\nabla }f(\vec{x}_0)}^{\cal T} \vec{p}
+ \frac{1}{2}\, \vec{p}^{\cal T} {\mathop{\nabla }\nolimits ^2 f(\vec{x}_0)}\, \vec{p}
\end{displaymath} (4.5)

around $\vec{x}_0$.
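The quality of this approximation can be checked numerically. The following sketch (the target function $f$, the expansion point, and the step are illustrative choices, not taken from the text) compares the second-order Taylor model of equation (4.5) with the exact function value; for a small step $\vec{p}$ the model error is of order $\Vert\vec{p}\Vert^3$.

```python
import numpy as np

# Illustrative smooth target function (assumption, not from the text):
# f(x) = exp(x1) + x2^2
def f(x):
    return np.exp(x[0]) + x[1]**2

def grad(x):
    # gradient of f
    return np.array([np.exp(x[0]), 2.0 * x[1]])

def hess(x):
    # Hessian (second derivative matrix) of f
    return np.array([[np.exp(x[0]), 0.0],
                     [0.0,          2.0]])

x0 = np.array([0.5, 1.0])      # expansion point
p  = np.array([0.01, -0.02])   # small step

# Second-order Taylor model, as in equation (4.5)
taylor = f(x0) + grad(x0) @ p + 0.5 * p @ hess(x0) @ p
exact  = f(x0 + p)
err = abs(taylor - exact)      # cubic in the step length
```

For this step length the model error is below $10^{-6}$, while the function value changes by roughly $10^{-2}$.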

From this equation the gradient and, for second-order approximations, the curvature of the target function at the point $\vec{x}_0$ are used to calculate the next step.
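For the second-order case, minimizing the quadratic model in equation (4.5) over $\vec{p}$ yields the Newton step $\vec{p} = -\left(\mathop{\nabla}\nolimits^2 f(\vec{x}_0)\right)^{-1} \mathop{\nabla} f(\vec{x}_0)$. A minimal sketch (the quadratic target function and starting point are assumptions chosen for illustration):

```python
import numpy as np

# Illustrative quadratic target function (assumption, not from the text):
# f(x) = x1^2 + 10*x2^2, minimum at the origin
def f(x):
    return x[0]**2 + 10.0 * x[1]**2

def grad(x):
    return np.array([2.0 * x[0], 20.0 * x[1]])

def hess(x):
    return np.array([[2.0,  0.0],
                     [0.0, 20.0]])

x0 = np.array([3.0, -1.0])

# Newton step: solve H p = -g instead of forming the inverse explicitly
p  = -np.linalg.solve(hess(x0), grad(x0))
x1 = x0 + p
```

Because the target function here is exactly quadratic, its second-order Taylor model is exact and a single Newton step lands on the minimum; for a general nonlinear function the step is repeated iteratively.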




R. Plasun