Simulations of semiconductor fabrication processes and of the electrical behavior of devices are powerful aids in advancing existing fabrication technologies. The engineer can choose among a relatively large number of commercially and freely available tools, each of which typically focuses on a particular process step. Simulating a complete semiconductor fabrication process flow therefore requires coupling several individual simulators.

The first part of this work deals with problems arising in the coupling of different kinds of process simulators. The kinds of data and algorithms such simulations are based on are analyzed. A generic data model suitable for process and device simulations, the so-called WAFER-STATE-SERVER, is then developed. This data model allows for an efficient data exchange between simulators even when they are based on different native file formats. It is able to manage geometries of different dimensions and to handle grids and the distributed quantities stored on them. The data model also defines algorithms to perform geometrical operations as they are used in topography simulations. Three process simulators developed at the Institute for Microelectronics are introduced by means of examples.
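The structure such a data model has to represent (segments of a geometry, grids on those segments, and quantities distributed over the grid points) can be illustrated with a minimal sketch. All class and attribute names here are hypothetical and chosen for illustration only; they do not reflect the actual WAFER-STATE-SERVER interface.

```python
# Illustrative sketch of a wafer-state data model; names are invented,
# not the real WAFER-STATE-SERVER API.
from dataclasses import dataclass, field

@dataclass
class Grid:
    points: list      # vertex coordinates, e.g. [(x, y), ...]
    elements: list    # connectivity, e.g. triangles as index triples

@dataclass
class Segment:
    name: str                   # material region, e.g. "oxide", "silicon"
    grid: Grid
    quantities: dict = field(default_factory=dict)  # name -> per-point values

@dataclass
class WaferState:
    dimension: int              # 1, 2, or 3
    segments: list = field(default_factory=list)

    def quantity(self, seg_name, qty_name):
        # Look up a distributed quantity stored on a segment's grid.
        for s in self.segments:
            if s.name == seg_name:
                return s.quantities[qty_name]
        raise KeyError(seg_name)
```

A simulator reading such a state would translate it into its native format, operate on it, and write the modified segments and quantities back, which is what makes format-independent coupling possible.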

The second part of this thesis deals with various optimization tasks as they occur in the simulation of semiconductor devices. The WAFER-STATE-SERVER thereby aids in performing complex high-level simulation tasks (e.g. optimization). Efficient optimization algorithms are indispensable for calibrating a simulator. A simulator is calibrated by tuning the parameters of a certain simulator model until the deviation of the simulation result from measured data reaches a minimum. Other fields of application are inverse modeling and the optimization of device characteristics. Four optimization strategies suited for such tasks are presented. These strategies are based on a local module, on two global modules, and on a combination of a global and a local optimization module. An example optimization task is used to rate these strategies according to their efficiency (the number of necessary single simulations), their robustness against simulation failures, and the human interaction necessary during the optimization. New concepts for a platform-independent distribution of the workload over a cluster of workstations are presented.
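The calibration task described above can be sketched as a least-squares problem: the simulator is evaluated for candidate parameter values, and the sum of squared deviations from measured data is minimized. The toy model, the fabricated data points, and the crude scan-based search below are all illustrative stand-ins for a real simulator and a real local optimizer.

```python
# Hypothetical calibration sketch: fit one model parameter to measurements.
def model(x, a):
    # Stand-in for a simulator run with model parameter a (purely illustrative).
    return a * x ** 2

# Fabricated "measured" data points (x, y) for the sketch.
measured = [(1.0, 2.1), (2.0, 7.9), (3.0, 18.2)]

def deviation(a):
    # Objective: sum of squared deviations of simulation from measurement.
    return sum((model(x, a) - y) ** 2 for x, y in measured)

# Crude parameter scan as a stand-in for a proper local optimization module.
best = min((deviation(a), a) for a in (i * 0.01 for i in range(1, 500)))
best_deviation, best_a = best
```

Each evaluation of `deviation` corresponds to one single simulation, which is why the number of such evaluations is the natural efficiency measure for comparing optimization strategies.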