

4.1.7 Step Length Method

Step length methods are iterative methods in which the parameter vector for the next iteration is computed as

\begin{displaymath}
\vec{x}_{k+1} = \vec{x}_{k} + \alpha_{k} \vec{p}_{k}
.
\end{displaymath} (4.24)

In (4.24), $\vec{x}_k$ is the parameter vector of the current iteration, $\vec{p}_k$ is the step direction, and $\alpha_{k}$ is the step length.

The step direction is calculated by the Newton method or the steepest-descent method, as described in Section 4.1.4 and Section 4.1.3, respectively. Finding the step length is a univariate minimization problem

\begin{displaymath}
\min_{\alpha_{k}} f(\vec{x}_{k} + \alpha_{k} \vec{p}_{k})
\end{displaymath} (4.25)

to find the minimum of the cost function along the search direction. In this type of optimization method, the selection of the direction is independent of the selection of the step length.
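The scheme above can be sketched in a few lines. The following is a minimal illustration, not part of the original text: it implements the update rule (4.24) with the steepest-descent direction $\vec{p}_k = -\nabla f(\vec{x}_k)$ and solves the univariate subproblem (4.25) approximately by Armijo backtracking (a common practical substitute for an exact one-dimensional minimization). All function names and parameter values are illustrative choices.

```python
import numpy as np

def backtracking_line_search(f, x, p, grad, alpha0=1.0, rho=0.5, c=1e-4):
    """Approximate the univariate problem (4.25): shrink alpha until the
    Armijo sufficient-decrease condition holds along direction p."""
    alpha = alpha0
    fx = f(x)
    slope = c * grad.dot(p)  # expected decrease per unit step (negative)
    while f(x + alpha * p) > fx + alpha * slope:
        alpha *= rho
    return alpha

def step_length_method(f, grad_f, x0, tol=1e-8, max_iter=500):
    """Iterate the update rule (4.24): x_{k+1} = x_k + alpha_k p_k,
    here with the steepest-descent direction p_k = -grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        p = -g                                   # step direction
        alpha = backtracking_line_search(f, x, p, g)  # step length
        x = x + alpha * p
    return x

# Illustrative quadratic cost f(x) = x^T A x with its minimum at the origin
A = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: x @ A @ x
grad_f = lambda x: 2.0 * A @ x
x_min = step_length_method(f, grad_f, np.array([2.0, -1.5]))
```

Note how the choice of $\vec{p}_k$ (here steepest descent, but equally a Newton direction) and the choice of $\alpha_k$ are handled by two separate routines, reflecting the independence of the two selections noted above.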




R. Plasun