In contrast to the coordinate search algorithm described in Section 4.2.1, where the information about the given problem is limited to score values of sampled points in the parameter space, gradient-based optimization methods construct additional information about the shape of the score surface. The gradient of a function describes its local behavior, such as steepness and the location of extrema, in the parameter space. With this additional information, the convergence of the search algorithm can be accelerated considerably.

However, the gradient is often not available analytically. The algorithm therefore has to approximate it by evaluating additional sample points around the current point, as shown in Figure 4.3b. If the point obtained by stepping along the gradient yields no improvement of the score function, the step length in the gradient direction is reduced by a user-defined factor. If this measure also yields no improvement, the termination criterion is reached: the algorithm stops and returns the best point found as the optimization result.
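The procedure described above can be sketched as follows. This is a minimal Python sketch, not the author's implementation: the forward-difference width `h`, the initial step length, the reduction factor `shrink`, and the minimum step length `min_step` are illustrative assumptions, not values taken from the text.

```python
import numpy as np

def finite_difference_gradient(score, x, h=1e-6):
    """Approximate the gradient of `score` at x by evaluating one
    additional sample point per coordinate (forward differences).
    The width h is an illustrative choice."""
    f0 = score(x)
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (score(xp) - f0) / h
    return g

def gradient_search(score, x0, step=1.0, shrink=0.5, min_step=1e-8):
    """Minimize `score` by stepping along the negative gradient.
    A step that yields no improvement causes the step length to be
    reduced by the user-defined factor `shrink`; once the step length
    falls below `min_step`, the search terminates and the best point
    found so far is returned as the optimization result."""
    x = np.asarray(x0, dtype=float)
    best = score(x)
    while step > min_step:
        g = finite_difference_gradient(score, x)
        norm = np.linalg.norm(g)
        if norm == 0.0:          # zero gradient: nothing left to do
            break
        candidate = x - step * g / norm
        f = score(candidate)
        if f < best:             # improvement: accept the new point
            x, best = candidate, f
        else:                    # no improvement: reduce step length
            step *= shrink
    return x, best
```

For a simple quadratic score function such as `(x0 - 2)**2 + (x1 + 1)**2`, the search converges toward the minimum at `(2, -1)` while the step length shrinks whenever a trial point fails to improve the score.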

- 4.2.2.1 Newton Algorithm for Optimization
- 4.2.2.2 Response Surface Method
- 4.2.2.3 Levenberg-Marquardt Algorithm

Stefan Holzer 2007-11-19