8.1 Introduction

During the last three or four decades there has been increasing interest in optimization algorithms that mimic processes found in nature. The methods of genetic algorithms [40,49,77] and evolution strategies [101], although having different roots, have converged and are now commonly known under the name of evolutionary computation. A journal of the same name [59] has been published since 1993.

A brief outline of evolutionary algorithms is as follows. They work with sets (or populations) of potential solutions. Starting from a random population, each individual is assigned a score by an objective function. In the subsequent selection step certain individuals are chosen to proliferate and to form a new population. Operators like mutation and crossover are applied to the individuals of the new population with prescribed probabilities, and the population is evaluated again. New generations are formed in this way until the population has converged or a termination condition is fulfilled. Many alternatives exist for every step of this algorithm and have been described and evaluated in numerous publications [77].
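For illustration only, the following minimal sketch shows one possible instantiation of this loop for real-valued variables. The particular operators (binary tournament selection, arithmetic crossover, Gaussian mutation), the fixed number of generations used as termination condition, and all names and parameter values are assumptions made for this example; they are not the specific operators defined later in this chapter.

    import random

    def evolve(objective, dim, pop_size=20, generations=50,
               p_crossover=0.7, p_mutation=0.1, sigma=0.1):
        # Random initial population of real-valued vectors in [0, 1]^dim.
        pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            # Evaluation: assign each individual a score via the objective function.
            scores = [objective(ind) for ind in pop]

            # Selection: binary tournaments choose individuals to proliferate
            # (here the objective is minimized).
            def select():
                i, j = random.randrange(pop_size), random.randrange(pop_size)
                return pop[i] if scores[i] < scores[j] else pop[j]

            new_pop = []
            while len(new_pop) < pop_size:
                a, b = select(), select()
                # Crossover: blend two parents with a prescribed probability.
                if random.random() < p_crossover:
                    w = random.random()
                    a = [w * x + (1 - w) * y for x, y in zip(a, b)]
                # Mutation: perturb each component with a prescribed probability.
                child = [x + random.gauss(0, sigma)
                         if random.random() < p_mutation else x
                         for x in a]
                new_pop.append(child)
            pop = new_pop
        return min(pop, key=objective)

    # Hypothetical usage: minimize the sum of squares over three variables.
    best = evolve(lambda x: sum(v * v for v in x), dim=3)

In practice the termination condition, the selection scheme, and the variation operators are the main points where such a sketch would be replaced by the problem-specific choices discussed below.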

Although the Schema Theorem [40,77] and similar statements explain how evolutionary algorithms work in a quantifiable way, and many special algorithms have been studied intensively, no consistent theory of evolutionary computation exists to date, and the question of which evolutionary algorithm to choose for a given problem can only be answered by experimentation and experience. Nevertheless, evolutionary algorithms have proved to be very general and valuable tools. While domain-specific optimizers typically perform better than their general evolutionary counterparts, evolutionary algorithms can easily be adapted to the problem at hand and remain usable whenever the lack of detailed knowledge about the objective function prohibits developing a domain-specific optimization algorithm.

Here the state-of-the-art algorithms used later (cf. Part III) for several optimization and inverse modeling problems are described. The intention is not to give an exhaustive overview of all evolutionary computation methods in use today, but to focus on the methods that were found to be most valuable for optimization problems with real-valued variables. Furthermore, in real-world TCAD problems the number of evaluations of the computationally expensive objective function is severely restricted, and this restriction was the most demanding one.

In the following we will first discuss the representation of discrete and real-valued variables and define various operators for continuous variables. This will provide the building blocks for the genetic algorithms in Section 8.4. In Section 8.5 the working of genetic algorithms will be investigated in a quantitative manner.
