

4.4 Condition of the Linear System

The discretized and linearized semiconductor equations yield large linear equation systems of the form $ \mathbf{A} \mathbf{x} = \mathbf{b}$. Such a system has to be solved to a given accuracy.

As described in [65], the results of both direct and iterative solvers depend on the accuracy of digitally stored numbers and on the condition of the system matrix $ \mathbf{A}$. The internal storage representation of numbers determines their accuracy. The assembly module currently provides the C data type double according to the IEEE 754-1985 standard for binary floating-point arithmetic. This standard defines four binary formats: 32-bit single, single-extended (at least 43-bit), 64-bit double, and double-extended (normally 80-bit) precision numbers. They are composed of three parts. A double precision number has a sign bit being either zero or one, an eleven-bit exponent ranging from $ E_{\mathrm{min}}=-1022$ to $ E_{\mathrm{max}}=1023$, and a 52-bit fraction [60]. The standard also defines how zero, infinity, and so-called NaNs (Not a Number) are to be encoded. NaNs represent undefined or invalid results, for example the square root of a negative number. For the sake of completeness it is stated that a complex-valued number stores both the real and the imaginary part in the double precision format. Furthermore, due to the limited representation, the results of mathematical expressions generally depend on the order of evaluation: rearrangements that are valid by mathematical laws such as the associative law can change the computed result.

The condition of a matrix can be used to estimate the worst possible error of the solution vector $ \mathbf{x}$ of $ \mathbf{A} \mathbf{x} = \mathbf{b}$ obtained by Gaussian elimination with a restricted number representation. It is measured by the condition number of that matrix; well-conditioned matrices have a small condition number [51]:

$\displaystyle \kappa_\infty = \Vert\mathbf{A}\Vert _\infty \, \Vert\mathbf{A}^{-1}\Vert _\infty\ ,$ (4.1)

with the infinity norm of the system matrix defined as the maximum absolute row sum

$\displaystyle \Vert\mathbf{A}\Vert _\infty = \max_i \sum_{k=1}^{n} \vert a_{i,k}\vert\ .$ (4.2)

For iterative solvers the spectral condition number, based on the spectrum of the system matrix, is more characteristic [78]. It is defined as the ratio of the largest eigenvalue to the smallest one:

$\displaystyle \kappa_{\mathrm{s}} = \frac{\lambda_{\max}}{\lambda_{\min}}\ .$ (4.3)

The larger the value of $ \kappa_{\mathrm{s}}$, the poorer the condition of the system matrix. Iterative solvers are particularly sensitive to bad condition numbers, which can slow down or even prevent convergence.

An important way to handle ill-conditioned matrices ( $ \kappa(\mathbf{A}) \gg 1$) is to precondition the matrix $ \mathbf{A}$. Iterative methods therefore usually determine a second matrix that transforms the system matrix into one with a better condition. This second matrix is called a preconditioner and improves the convergence of the iterative solver (see Section 5.2.5).

Besides the purely numerical concept of preconditioning, a good approach to improve the condition during the assembly process is to aim for diagonal dominance of the equations, since diagonal dominance is a necessary (but not sufficient) condition in the convergence proofs of a range of iterative solver schemes. A matrix is diagonally dominant if in every row the absolute value of the diagonal element is at least as large as the sum of the absolute values of all off-diagonal elements [52]:

$\displaystyle \sum_{j=1,\ j \ne i}^{n} \left\vert A_{i,j} \right\vert \le \left\vert A_{i,i} \right\vert \ , \quad \mathrm{for\ all} \quad i \ .$ (4.4)

A matrix is strictly diagonally dominant if in every row the absolute value of the diagonal element is strictly larger than the sum of the absolute values of all off-diagonal elements [52]:

$\displaystyle \sum_{j=1,\ j \ne i}^{n} \left\vert A_{i,j} \right\vert < \left\vert A_{i,i} \right\vert \ , \quad \mathrm{for\ all} \quad i \ .$ (4.5)

Besides this advantage it is important to note that in the case of diagonal dominance direct solution techniques can apply a diagonal pivoting strategy and avoid alternative, costly pivoting steps [188].



S. Wagner: Small-Signal Device and Circuit Simulation