5.1.1 Inverse Modeling

Inverse modeling is a technique that adapts the parameters of a physical model to a set of experimental data (measurements) such that the error between the output of the model (simulation) and the measurements becomes as small as possible. It is used, e.g., to calibrate the mobility models of device simulators to measured $I_DV_D$, $I_DV_G$, and $CV$ curves of a transistor. Another challenging task of inverse modeling of semiconductor devices is to extract the dopant concentration profile of a device by means of a device simulator [84,85]. The device simulator thereby serves as a "meter" that extracts the $I_DV_D$, $I_DV_G$, and $CV$ curves of an artificial device. The artificial device is characterized by a set of parameters describing geometry features and dopant concentrations. Fig. 5.2 depicts such an artificial device with $4$ different doping "peaks" ($N_1$ to $N_4$). The device is symmetric along the dash-dotted line. The peaks are modeled as Pearson Type IV and Gaussian distribution functions as described in [86].

Figure 5.2: Device Model for Inverse Modeling.
\begin{figure}\centering\psfig{file=pics/dev-mod, width=0.75\linewidth} \end{figure}
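To make the procedure concrete, the following is a minimal sketch of such a fitting loop, with a toy diode-like characteristic standing in for the device simulator; the model, the parameter values, and the choice of a least-squares optimizer are illustrative assumptions only, not the framework's actual interface.

\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the device simulator: a diode-like I(V) characteristic.
def simulate(params, v):
    i_s, n = params                        # saturation current, ideality factor
    return i_s * (np.exp(v / (n * 0.0259)) - 1.0)

v_meas = np.linspace(0.1, 0.6, 20)
i_meas = simulate((1e-12, 1.5), v_meas)    # synthetic "measurement"

# Residual vector between simulation and measurement; fitting the logarithm
# keeps the exponentially spread currents on a comparable scale.
def residuals(params):
    return np.log(simulate(params, v_meas)) - np.log(i_meas)

fit = least_squares(residuals, x0=[1e-11, 1.2],
                    bounds=([1e-15, 0.8], [1e-9, 3.0]))
print(fit.x)                               # recovers approximately [1e-12, 1.5]
\end{verbatim}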

These very simple devices are generated with the two-dimensional wafer generator MAKEDEVICE, which works much like a drawing tool, except that it does not use a pointing device but variables defined in an input deck to specify the geometry parameters and dopant concentrations.
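For illustration, a single doping peak of Fig. 5.2 can be parameterized as a Gaussian profile over depth; the sketch below is hypothetical (it does not reproduce MAKEDEVICE's actual input-deck syntax, and the grid and values are assumed). The Pearson Type IV peaks of [86] carry analogous parameters.

\begin{verbatim}
import numpy as np

# One doping "peak" as a Gaussian profile, defined by its peak
# concentration, position, and standard deviation (straggle).
def gaussian_peak(x, n_peak, x0, sigma):
    return n_peak * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

x = np.linspace(0.0, 1.0, 200)             # assumed depth grid in um
n_1 = gaussian_peak(x, 1e19, 0.2, 0.05)    # hypothetical values for N_1
\end{verbatim}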

The extracted operating points are then compared to the measured ones. For $n$ curves with $N_j$ points each, the target $t_{opt}$ delivered to the optimizer is given as the component-wise scaled quadratic deviation of the computed operating points ($pc_j$) from the measured ones ($pm_j$):

$\displaystyle t_{opt} = \sqrt{\frac{1}{\sum_{j=1}^{n} N_j}\cdot \sum_{j=1}^{n}\sum_{i=1}^{N_j} S(pm_{ji}, pc_{ji})^2}$ (5.2)

with

$\displaystyle S(a, b) = \left\{ \begin{array}{ll} 0 & \mbox{if $a = 0$ and $b = 0$} \\ \ldots & \mbox{otherwise} \end{array} \right.$ (5.3)
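The following sketch evaluates (5.2); because (5.3) is only partially legible above, the scaling function S below is a stand-in with the stated properties (zero if both components vanish, bounded to $\pm 100$), not necessarily the exact expression of the thesis.

\begin{verbatim}
import numpy as np

# ASSUMPTION: stand-in for the truncated scaling function (5.3).  It is
# zero when both operating points vanish and bounded to +/-100, matching
# the properties stated in the text; the exact expression may differ.
def S(a, b):
    if a == 0.0 and b == 0.0:
        return 0.0
    return 100.0 * (b - a) / (abs(a) + abs(b))

# t_opt of (5.2): root-mean-square of the scaled component-wise
# deviations over all n curves, each with N_j operating points.
def target(measured, computed):
    devs = [S(a, b) for pm, pc in zip(measured, computed)
                    for a, b in zip(pm, pc)]
    return np.sqrt(np.mean(np.square(devs)))

# e.g. target([[1.0, 2.0]], [[1.1, 2.0]]) evaluates to about 3.37
\end{verbatim}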

In (5.2) and (5.3) the relative error is scaled to values between $\pm 100$. This is necessary to avoid an excessively large target value for error vectors in which some components differ by several orders of magnitude. Once the optimizer is near an optimum, the error vector is comparably small. However, during the computation of the gradient, or during the evolution of a global optimizer, intermediate parameter states will be generated that are far away from the optimum. Since a simulator will not always terminate and produce a result for an arbitrary input, a large error vector is created by the optimization framework to indicate a failed simulation. For global optimizers, such an artificially large target encourages the optimizer to discard the state; such states simply become extinct. For a local optimizer an artificial target value is the only way to continue in case of a simulation failure, although its usefulness is questionable5.1.
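A minimal sketch of this penalty mechanism, assuming a hypothetical run_simulator() interface that raises an exception on a failed simulation and reusing target() from the sketch above:

\begin{verbatim}
# If the simulator aborts for a parameter state, an artificially large
# target is returned so that global optimizers discard that state.
FAILURE_TARGET = 1.0e10        # assumed magnitude; "large" is all that matters

def evaluate(params, measured):
    try:
        computed = run_simulator(params)   # hypothetical simulator interface
    except RuntimeError:                   # e.g. the simulation did not converge
        return FAILURE_TARGET
    return target(measured, computed)      # target() from the sketch above
\end{verbatim}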
