1. Theory of Transport Modeling

THE FUNDAMENTAL equation of semi-classical transport is Boltzmann's transport equation (BTE). A common transport model, which can easily be derived from the BTE, is the drift-diffusion model, the workhorse of today's Technology Computer Aided Design (TCAD) tools. However, driven by Moore's law, the dimensions of modern semiconductor devices have decreased into the deca-nanometer regime, and consequently the drift-diffusion model becomes increasingly inaccurate. A solution is to use more sophisticated models based on the BTE, such as the Monte Carlo (MC) approach [22,23,24,25]. The disadvantage of the MC technique is its high computational effort due to the statistical approach, which makes it less suitable for engineering applications. However, the results obtained from MC simulations are often in good agreement with experiment [1] and are frequently used as a benchmark for simpler models.

For engineering purposes, macroscopic models based on the method of moments, such as the hydrodynamic model, the six moments model, or even higher-order models, are adequate approaches for modeling deca-nanometer devices [21]. A detailed discussion will follow in the sequel.

Another promising approach for solving the BTE is the method of spherical harmonics [26]. The underlying idea is to expand the distribution function into spherical harmonics and exploit the orthogonality of the basis (see Section 1.4.5).

Since the BTE is a semi-classical equation combining Newtonian mechanics with a quantum mechanical scattering operator, the transport models explained above are only valid in a regime where quantum effects such as source-to-drain tunneling play a negligible role (see Fig. 1.1) [27,28,29].

Figure 1.1: Hierarchy of transport models
\includegraphics[width=14.5cm]{figures/svg/transport_models.eps}

In order to cover the range of gate lengths below $10\;\mathrm{nm}$, where source-to-drain tunneling plays an important role, several quantum mechanical approaches have been developed. The Landauer-Büttiker approach is valid in the ballistic regime, where the carriers are not affected by scattering [30]. It generalizes the notion of conductance in terms of the transmission and reflection of the carriers. However, in general, transport models in the deca-nanometer regime are based on the fundamental equation of quantum mechanics, the Schrödinger equation (see Fig. 1.1).

The non-equilibrium Green's function (NEGF) formalism is a powerful method to handle open quantum systems [31]. The method can be used both in a ballistic and a scattering dominated regime (see Section 1.3.1). If the mean free path of the carriers is smaller than the device size, the system is in a scattering dominated regime, while if the mean free path is longer than the device the system can be described ballistically.

Other quantum approaches are based on an MC solution of the Wigner equation [32]. The advantage compared to NEGF is that the method uses position and momentum as degrees of freedom, whereas in the NEGF method the additional degree of freedom is the energy. In the classical limit, the Wigner Monte Carlo results converge to the Monte Carlo results based on the BTE. A drawback of the Wigner formalism is that the Wigner function is not positive definite; in the literature this is known as the negative-sign problem [33]. The method can be used in the ballistic as well as in the scattering-dominated regime.

The so-called Pauli master equation is derived from the Liouville-von Neumann equation, which describes the quantum evolution of the density matrix. The Pauli master equation is a frequently used model of irreversible processes in simple quantum systems and can be used in the ballistic and in the scattering-dominated regime [34,35].

The Lindblad master equation, the last of the quantum models shown in Fig. 1.1, is the most general form of the Liouville-von Neumann equation. It describes the non-unitary evolution of the density matrix while preserving its trace and positivity [36,37,38].

Other methods to cover gate lengths below the semi-classical regime are quantum macroscopic transport models. These models, such as the quantum drift-diffusion model or the quantum hydrodynamic model, can be derived from the Schrödinger equation using the so-called Madelung transformation [39]. A discussion will follow in Section 1.3.3.

All models have Poisson's equation in common, which describes the electrostatics of the system. For a new simulation strategy, it is also important to investigate the underlying material and to compare the simulation results with measurement data.

1.1 Basic Equations

The basic equations of quantum and classical device simulations, namely Poisson's equation, the Schrödinger equation, and the BTE with its solution, the distribution function, are derived and discussed in this section.


1.1.1 Derivation of Poisson's Equation

Poisson's equation is the basic equation of electrostatics [40,41]. It can be derived by inserting the definition of the electric field $\mathbf{E}$

$\displaystyle \ensuremath{\ensuremath{\mathitbf{E}}}= -\ensuremath{\ensuremath{\ensuremath{\mathitbf{\nabla_{\!r}}}}}\ensuremath{\varphi}{\;},$ (1.1)

into the second Maxwell equation

$\displaystyle \ensuremath{\ensuremath{\ensuremath{\mathitbf{\nabla_{\!r}}}}}\cdot\ensuremath{\ensuremath{\mathitbf{D}}}=\ensuremath{\rho}{\;}.$ (1.2)

Here, $ \ensuremath{\varphi}$ denotes the electrostatic potential, $ \ensuremath{\rho}$ represents the charge density, and $ \ensuremath{\ensuremath{\mathitbf{D}}}$ is the electric displacement field defined as

$\displaystyle \ensuremath{\ensuremath{\mathitbf{D}}}=\ensuremath{\epsilon}\ensuremath{\ensuremath{\mathitbf{E}}}{\;}.$ (1.3)

Combining equations (1.1) and (1.3) with (1.2) yields Poisson's equation

$\displaystyle \nabla_{\!r}\cdot\left(\epsilon\,\nabla_{\!r}\varphi\right)=-\rho\;,$ (1.4)

whereby $ \ensuremath{\rho}$ can be expressed as

$\displaystyle \ensuremath{\rho}=\mathrm{q}\left(\ensuremath{p}-\ensuremath{n}+\ensuremath{N}_\mathrm{a}-\ensuremath{N}_\mathrm{d}\right){\;}.$ (1.5)

$p$ and $n$ denote the hole and electron concentrations, respectively, whereas $N_\mathrm{a}$ and $N_\mathrm{d}$ are the concentrations of acceptors and donors [42]. Inserting (1.5) into (1.4) yields

$\displaystyle \nabla_{\!r}\cdot\left(\epsilon\,\nabla_{\!r}\varphi\right)=-\mathrm{q}\left(p-n+N_\mathrm{a}-N_\mathrm{d}\right)\;.$ (1.6)

A complete description of transport within a device is achieved by solving Poisson's equation self-consistently with an appropriate formulation of carrier transport within the semiconductor.
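To make the discretization of (1.4) concrete, the following minimal sketch (a toy example, not the simulator used in this work; the grid, permittivity profile, and boundary values are illustrative assumptions) solves the one-dimensional Poisson equation with Dirichlet contacts by finite differences:

import numpy as np

def solve_poisson_1d(rho, eps, dz, phi_left=0.0, phi_right=0.0):
    """Finite-difference solution of (1.4): d/dz(eps dphi/dz) = -rho."""
    rho = np.asarray(rho, float)
    eps = np.asarray(eps, float)
    n = len(rho)
    A = np.zeros((n, n))
    b = -rho * dz**2
    A[0, 0] = A[-1, -1] = 1.0          # Dirichlet contacts
    b[0], b[-1] = phi_left, phi_right
    for i in range(1, n - 1):
        e_m = 0.5 * (eps[i - 1] + eps[i])   # permittivity at the left cell interface
        e_p = 0.5 * (eps[i] + eps[i + 1])   # permittivity at the right cell interface
        A[i, i - 1], A[i, i], A[i, i + 1] = e_m, -(e_m + e_p), e_p
    return np.linalg.solve(A, b)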


1.1.2 Schrödinger-Poisson System

In classical physics, the evolution in time and space of an ensemble of particles can be characterized using Newton's law. As described at the beginning of the chapter, transport below a gate length of $10\;\mathrm{nm}$ can no longer be treated with classical physics. In the nanometer regime, particles must be described by their wave functions $\psi(\mathbf{r},t)$, which can be obtained from the time-dependent single-particle Schrödinger equation [43]

$\displaystyle \left(-\frac{\hbar^2}{2m}\nabla_{\!r}^2+V(\mathbf{r})\right)\psi(\mathbf{r},t)=i\hbar\,\partial_{t}\psi(\mathbf{r},t)\;.$ (1.7)

The Schrödinger equation characterizes a particle moving in a region under the influence of the potential energy $ \ensuremath{V(\ensuremath{\ensuremath{\mathitbf{r}}} )}$  [44]. A solution strategy is a separation ansatz of the wavefunction into a time $ \ensuremath{T(t)}$ and space $ \ensuremath{\phi(\ensuremath{\ensuremath{\mathitbf{r}}} )}$ component

$\displaystyle \ensuremath{\psi(\ensuremath{\ensuremath{\mathitbf{r}}} ,t)}=\ensuremath{\phi(\ensuremath{\ensuremath{\mathitbf{r}}} )}\ensuremath{T(t)}{\;}.$ (1.8)

With this separation, equation (1.7) can be decoupled into a time-dependent

$\displaystyle \ensuremath{i}\hbar \ensuremath{\ensuremath{\partial_{t} \ensuremath{T(t)}}} =\alpha\ensuremath{T(t)}$ (1.9)

and a space-dependent part

$\displaystyle \left(-\frac{\hbar^2}{2m}\nabla_{\!r}^2+V(\mathbf{r})\right)\phi(\mathbf{r})=\alpha\,\phi(\mathbf{r})\;,$ (1.10)

whereby $\alpha$ denotes the energy eigenvalue. For a free particle ($V(\mathbf{r})=0$), plane waves are the solutions of the time-independent Schrödinger equation.

The quantum mechanical current density is defined as [45]

$\displaystyle \mathbf{J}=\frac{i\hbar\,\mathrm{q}}{2m}\left(\psi(\mathbf{r},t)\,\nabla_{\!r}\psi^\dagger(\mathbf{r},t)-\psi^\dagger(\mathbf{r},t)\,\nabla_{\!r}\psi(\mathbf{r},t)\right)\;,$ (1.11)

with $ \ensuremath{\psi^\dagger(\ensuremath{\ensuremath{\mathitbf{r}}} ,t)}$ as the transposed conjugate complex form of $ \ensuremath{\psi(\ensuremath{\ensuremath{\mathitbf{r}}} ,t)}$ .

However, the situation is more complex in real semiconductors. Here, the band structure together with the electrostatic energy $V(\mathbf{r})=\mathrm{q}\varphi$ as described in Section 1.1.1 plays an important role. Fig. 1.2 shows a cross-section of a typical p-type MOSFET device under inversion [46]. Due to the applied gate voltage, the conduction band forms a potential well. Therefore, so-called quantum confinement occurs: the carriers occupy bound states and cannot propagate to infinity. Hence, the potential well imposes a boundary condition.

Figure 1.2: Quantum confinement in a MOSFET structure
\includegraphics[width=11.5cm]{figures/svg/quantum_confinement.eps}
Within this well, discrete energy levels, the so-called energy subbands, occur, which will be discussed in Section 1.2. Due to the quantum confinement, the carriers cannot move in the direction perpendicular to the interface, and therefore carrier transport takes place only in the two-dimensional plane parallel to the oxide [47]. For increasing lateral fields, the carriers can be excited beyond the highest bound subband and form a three-dimensional carrier gas. This can be explained as follows:

Assuming a triangular potential, which is an analytical approximation for the inversion layer, the wavefunctions are the well known Airy functions, whereas the energy eigenvalues can be expressed as [48]

$\displaystyle \ensuremath{\mathcal{E}}_\mathrm{i}=c\left(i-\frac{1}{4}\right)^{2/3}{\;},$ (1.12)

with $c$ as a constant and $i$ as the index of the $i$th subband. The difference between the $i$th and the $(i-1)$th subband can be written as

$\displaystyle \mathcal{E}_\mathrm{i}-\mathcal{E}_{\mathrm{i-1}}=c\left(\left(i-\frac{1}{4}\right)^{2/3}-\left(\left(i-1\right)-\frac{1}{4}\right)^{2/3}\right)\;.$ (1.13)

This difference as a function of the subband index is visualized in Fig. 1.3. For an increasing number of subbands the energy spacing decreases, and for an infinite number of energy eigenvalues it converges to zero. Thus, in the limit of an infinite number of subbands the subband system turns into a bulk system.
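The convergence of the subband spacing can be checked directly from (1.12) and (1.13); in the short sketch below the constant $c$ is set to an arbitrary value, since only the trend of successive spacings matters:

# Illustrative check of (1.12)-(1.13): the spacing between neighboring
# triangular-well subbands shrinks as the subband index grows.
c = 1.0  # assumed constant; its actual value depends on field and effective mass

def e_i(i):
    return c * (i - 0.25) ** (2.0 / 3.0)

for i in (2, 5, 10, 100, 1000):
    print(i, e_i(i) - e_i(i - 1))   # monotonically decreasing, -> 0 for i -> infinity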

Figure 1.3: For increasing subbands the difference between the energy eigenvalues decreases and converge to zero for an infinite number of energy-eigenstates. In this limit the subband system becomes a bulk system.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/subband_diff/subb_diff_ana.eps}

In order to correctly describe energy eigenvalues and wavefunctions in a device, the Schrödinger equation has to be solved self-consistently with Poisson's equation [49,2]. The starting point for solving the system is a potential distribution, which leads to charge neutrality. Inserting the potential into the Schrödinger equation, one obtains the initial energy eigenvalues $ \ensuremath{\mathcal{E}}_{\ensuremath{l}}$ and wavefunctions $ \ensuremath{\phi(\ensuremath{\ensuremath{\mathitbf{r}}} )}$ for the quantum mechanical carrier concentration defined as

$\displaystyle n\left(\mathcal{E}_{l},\phi(\mathbf{r})\right)=N_{\mathrm{gv}}\sum_{l}\frac{m^* k_\mathrm{B}T}{\pi\hbar^2}\,\ln\left(1+\exp\left(\frac{\mathcal{E}_\mathrm{f}-\mathcal{E}_{l}}{k_\mathrm{B}T}\right)\right)\mathrm{\vert}\phi_{l}(\mathbf{r})\mathrm{\vert}^2\;.$

Here,  $ \ensuremath{N_{\mathrm{gv}}}$ denotes the effective density of states

$\displaystyle \ensuremath{N_{\mathrm{gv}}}=2\ensuremath{g_v}{\;},$ (1.14)

with $ \ensuremath{g_v}$ as the degeneracy of the system.

The next step is a recalculation of the potential by Poisson's equation, followed by new wavefunctions and energy eigenvalues. These steps are performed in a loop until the update falls below a certain limit, i.e., until convergence is reached.
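The structure of such a Schrödinger-Poisson iteration is sketched below in scaled (dimensionless) units; the grid size, number of subbands, occupation model, and damping factor are illustrative assumptions and not the settings of the solver used in this work:

import numpy as np

N, dz = 200, 1.0 / 200           # grid for a well of unit width
phi = np.zeros(N)                # initial potential: charge neutrality
n_target = 1.0                   # assumed sheet density (scaled)

def schroedinger(phi):
    # -1/2 d2/dz2 psi + V psi = E psi  with V = -phi (scaled), Dirichlet boundaries
    H = (np.diag(np.full(N, 1.0 / dz**2))
         - np.diag(np.full(N - 1, 0.5 / dz**2), 1)
         - np.diag(np.full(N - 1, 0.5 / dz**2), -1)
         + np.diag(-phi))
    E, psi = np.linalg.eigh(H)
    return E, psi / np.sqrt(dz)  # normalize so that sum(|psi|^2) dz = 1

def poisson(n_el):
    # d2 phi / dz2 = n_el (scaled Poisson equation), phi = 0 at both contacts
    A = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / dz**2
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0
    b = n_el.copy()
    b[0] = b[-1] = 0.0
    return np.linalg.solve(A, b)

for it in range(100):                        # the self-consistency loop
    E, psi = schroedinger(phi)
    occ = np.exp(-(E[:5] - E[0]))            # Boltzmann-like occupation of 5 subbands
    occ *= n_target / occ.sum()
    n_el = psi[:, :5] ** 2 @ occ             # quantum mechanical carrier density
    phi_new = poisson(n_el)
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        break                                # convergence reached
    phi = 0.3 * phi_new + 0.7 * phi          # damped update for stability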

Figure 1.4: Conduction band and the first two wavefunctions of a thin film SOI MOSFET for different gate voltages. For increasing gate voltages the wavefunctions are shifted towards the interface.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/transport/vmc2deg/wave_conduction.eps}
Results from a Schrödinger-Poisson solver are presented in Fig. 1.4 [50].

Here, the first energy subband together with the first two wavefunctions is shown for different gate voltages as a function of position in the thin film. Since the wavefunctions are shifted towards the oxide for high gate voltages, the carriers are strongly localized near the interface. Therefore, the impact of interface effects such as surface roughness scattering is very strong at high gate voltages.


1.1.3 Boltzmann's Transport Equation

The basic equation of macroscopic transport description, the BTE, is derived using fundamental principles of statistical mechanics. The first part concerns the discussion of the solution of the BTE, whereas the second part is devoted to its derivation.


1.1.3.1 The Distribution Function

The carrier distribution function $f(\mathbf{r},\mathbf{k},t)$ is the solution of the BTE [1]. It gives the probability of finding particles with momentum approximately $\hbar\mathbf{k}$ near the position $\mathbf{r}$ in phase space at time $t$. Taking Pauli's exclusion principle into account, the thermal equilibrium distribution function is the Fermi-Dirac function (the equilibrium solution of the BTE)

$\displaystyle f\left(\mathcal{E}\right)=C\,\frac{1}{\exp\left(\frac{\mathcal{E}-\mathcal{E}_\mathrm{f}}{k_\mathrm{B}T_n}\right)+1}\;,$ (1.15)

with $C$ as a normalization factor, whereas $\mathcal{E}_\mathrm{f}$ is the Fermi energy. Fig. 1.5 shows the Fermi-Dirac function for $0\;\mathrm{K}$, $300\;\mathrm{K}$, and $500\;\mathrm{K}$. For $0\;\mathrm{K}$, the Fermi-Dirac function reduces to a step function $\theta\left(\mathcal{E}_\mathrm{f}-\mathcal{E}\right)$. Hence, all states below $\mathcal{E}_\mathrm{f}$ are fully occupied. For increasing temperatures, states above $\mathcal{E}_\mathrm{f}$ can be occupied, which results in a smoother transition. The energy range of the transition region is of the order of $k_\mathrm{B}T$ (see Fig. 1.5).
Figure 1.5: Fermi-Dirac distribution function for $0\;\mathrm{K}$, $300\;\mathrm{K}$, and $500\;\mathrm{K}$, together with the limit $k_\mathrm{B}T\ll\mathcal{E}-\mathcal{E}_\mathrm{f}$. In this limit, the Fermi-Dirac function can be approximated by a Maxwell distribution function.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/distribution/distribution.eps}
If the relation

$\displaystyle k_\ensuremath{\mathrm{B}}\ensuremath{T}\ll\left(\ensuremath{\mathcal{E}}-\ensuremath{\mathcal{E}}_\mathrm{f}\right)$ (1.16)

is fulfilled, the Fermi-Dirac distribution function can be approximated by the Maxwell distribution function

$\displaystyle f\left(\mathcal{E}\right)=C\,\exp\left(-\frac{\mathcal{E}-\mathcal{E}_\mathrm{f}}{k_\mathrm{B}T}\right)\;.$ (1.17)

The Maxwell distribution function neglects Pauli's exclusion principle. Therefore, the validity is limited to lowly doped, non-degenerate semiconductors.
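As a quick numerical check of the limit (1.16), the sketch below compares the Fermi-Dirac function (1.15) and the Maxwell distribution (1.17) at room temperature, with the normalization constant $C$ set to one for both (an assumption made only for this comparison):

import numpy as np

kT = 0.0259                                    # thermal energy at 300 K in eV
E_minus_Ef = np.array([0.05, 0.1, 0.2, 0.3])   # eV
fermi   = 1.0 / (np.exp(E_minus_Ef / kT) + 1.0)
maxwell = np.exp(-E_minus_Ef / kT)
print(np.abs(fermi - maxwell) / fermi)         # relative deviation shrinks rapidly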

1.1.3.2 Diffusion Approximation

An important approximation used in the derivation of macroscopic transport models (see Section 1.4.3) is the diffusion approximation, which will be discussed here.

Every distribution function can be split into a symmetric $ \ensuremath{f_\mathrm{s}}$ and an anti-symmetric $ \ensuremath{f_\mathrm{a}}$ part as

$\displaystyle \ensuremath{f}= \ensuremath{f_\mathrm{s}}+ \ensuremath{f_\mathrm{a}}{\;}.$ (1.18)

Within the diffusion approximation it is assumed that the displacement of the distribution function is small, which means that $ \ensuremath{f_\mathrm{a}}$ is much smaller than $ \ensuremath{f_\mathrm{s}}$  [51]. One of the consequences of this approximation is that only the diagonal elements of the average tensorial product of for instance the momentum

$\displaystyle \hbar^2\,\langle \mathbf{k}\otimes\mathbf{k} \rangle=\frac{\hbar^2}{n}\int \mathbf{k}\otimes\mathbf{k}\; f\;\mathrm{d}k_x\,\mathrm{d}k_y\,\mathrm{d}k_z\;,$ (1.19)

contribute, while the off diagonal elements can be neglected, due to symmetry reasons. The average operation used in equation (1.19) is the normalized statistical average and will be described in Section 1.4.3.

In general, the average energy of the carriers can be decomposed into

$\displaystyle \langle \mathcal{E} \rangle=\langle \mathcal{E} \rangle_{\mathrm{kin}}+\langle \mathcal{E} \rangle_{\mathrm{therm}}\;,$ (1.20)

where  $ \ensuremath{\langle \ensuremath{\mathcal{E}} \rangle}_{\mathrm{kin}}$ is the kinetic energy part and $ \ensuremath{\langle \ensuremath{\mathcal{E}} \rangle}_{\mathrm{therm}}$ is the thermal energy part of the average carrier energy. Within the diffusion approximation the kinetic part of equation (1.20) is neglected.


1.1.3.3 Derivation of Boltzmann's Transport Equation

Boltzmann's transport equation is derived from a fundamental principle of classical statistical mechanics, the Liouville theorem [52,53]. The theorem asserts that the many-particle distribution function $f(r_1,\ldots,r_n,k_1,\ldots,k_n,t)$ is constant along phase-space trajectories $\Gamma_i$ for all times $t$ [54]. With the total derivative of the many-particle distribution function, the Liouville theorem can be expressed as

$\displaystyle \frac{\mathrm{d}f}{\mathrm{d}t}=\partial_{t}f+\sum_i\left(\dot{r}_i\,\partial_{r_i}f+\dot{k}_i\,\partial_{k_i}f\right)=0\;.$ (1.21)

Due to the Hamilton equations

$\displaystyle \frac{\mathrm{d}\mathbf{r}}{\mathrm{d}t}=\nabla_{\!p}\mathcal{H}\qquad\textrm{and}\qquad\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t}=-\nabla_{\!r}\mathcal{H}\;,$ (1.22)

and the Poisson bracket defined as

$\displaystyle \{f,\mathcal{H}\}=\sum_{i}\left(\nabla_{\!r_i}f\cdot\nabla_{\!k_i}\mathcal{H}-\nabla_{\!k_i}f\cdot\nabla_{\!r_i}\mathcal{H}\right)\;,$ (1.23)

the Liouville equation (1.21) can be written in a very compact form as

$\displaystyle \ensuremath{\ensuremath{\partial_{t} \ensuremath{f}}}+\ensuremath{\{\ensuremath{f},\ensuremath{{\cal{H}}}\}}=0{\;}.$ (1.24)

Equation (1.24) has to be solved in the $ \mathbb{R}^{2dM}\ensuremath{\times}{\mathbb{R}}$ space, where $ \mathrm{M}$ is the number of particles and $ \mathrm{d}$ is a dimension factor. The initial condition is defined as

$\displaystyle \ensuremath{f}\left(\ensuremath{r},\ensuremath{k},0\right)=\ensuremath{f}\left(\ensuremath{r}_0,\ensuremath{k}_0\right){\;},   $  $\displaystyle   \left(\ensuremath{r},\ensuremath{k}\right)\ensuremath{\in}\mathbb{R}^{2dM}{\;}.$ (1.25)

$ \mathrm{M}$ is naturally very large and therefore the solution of (1.24) is very expensive.

To derive a lower-dimensional equation, Vlasov introduced a single particle Liouville equation with a force $ \ensuremath{\ensuremath{\mathitbf{F}}_\mathrm{eff}}$  [55]

$\displaystyle \partial_{t}f(\mathbf{r},\mathbf{k},t)+\mathbf{v}\cdot\nabla_{\!r}f(\mathbf{r},\mathbf{k},t)+\frac{\mathbf{F}_\mathrm{eff}}{\hbar}\cdot\nabla_{\!k}f(\mathbf{r},\mathbf{k},t)=0\;.$ (1.26)

Many-particle physics is taken into account in Vlasov's equation through the force $ \ensuremath{\ensuremath{\mathitbf{F}}_\mathrm{eff}}$ and the assumption that the probability to occupy a state along phase trajectories is constant. The force can be split into an external and a long-range interaction force. However, the main disadvantage of Vlasov's equation is that it does not provide a description of strong short-range forces such as scattering of particles with other particles or with the crystal. So, an extended Vlasov equation must be formulated to treat these important transport effects. Introducing the scattering operator $ \ensuremath{Q_{\mathrm{coll}}}\left(\ensuremath{f}\right)$ , the balance equation for the distribution function must fulfill the conservation equation

$\displaystyle \frac{\mathrm{d}f(\mathbf{r},\mathbf{k},t)}{\mathrm{d}t}=Q_{\mathrm{coll}}\left(f\right)\;.$ (1.27)

Hence, scattering allows particles to jump from one trajectory to another (see Fig. 1.6).
Figure 1.6: Scattering event from one trajectory to another in phase space. Scattering events, which are assumed to happen instantly, change the carrier's momentum, while the position is not affected (after [1]).
\includegraphics[width=0.5\textwidth]{figures/svg/trajectories.eps}
With the total derivative of the distribution function and equation (1.22), Boltzmann's transport equation (1.27) can finally be expressed in its common form as

$\displaystyle \partial_{t}f+\mathbf{v}\cdot\nabla_{\!r}f+\frac{\mathbf{F}}{\hbar}\cdot\nabla_{\!k}f=Q_{\mathrm{coll}}\left(f\right)\;.$ (1.28)

Equation (1.28) is a semi-classical equation containing Newtonian mechanics on the left-hand side and the quantum mechanical scattering operator on the right-hand side. It still remains to formulate an expression for the scattering operator $Q_{\mathrm{coll}}$; many strategies for modeling it exist [16,4]. In order to develop solution strategies for this semi-classical differential equation, it is important to discuss the underlying limitations and assumptions of the BTE:
$ \bullet$ The original many-particle problem is replaced by a one-particle problem with an appropriate potential. Within the Hartree-Fock approximation [56], the contribution of the surrounding electrons to this potential is approximated by a charge density, so the short-range electron-electron interaction cannot be described. The potential of the surrounding carriers is, however, treated self-consistently via the electric field.
$ \bullet$ The distribution function $f(\mathbf{r},\mathbf{k},t)$ is a classical concept, since Heisenberg's uncertainty principle is neglected: the distribution function specifies both the position and the momentum at the same time.
$ \bullet$ Due to the uncertainty principle, the mean free path of the particles must be longer than the mean De Broglie wavelength.
$ \bullet$ Carriers are treated semi-classically as particles obeying Newton's law.
$ \bullet$ Collisions are assumed to be binary and to be instantaneous in time and local in space.
It is important to have these limitations in mind, while deriving transport models based on the BTE. However, as has been reported in many publications [57,58,59,60], models based on the BTE give good results in the scattering dominated regime. Thus, it is a good starting point for simpler macroscopic transport models such as the drift-diffusion, the hydrodynamic, or even higher-order models.


1.2 Band Structure

The focus of this section is put on the band structure of bulk silicon and the occurrence of subbands. Furthermore, results are presented, which show the influence of the band structure on the transport properties.

1.2.1 Bulk

Carriers in a crystal move in a periodic potential energy $V_{\mathrm{periodic}}(\mathbf{r})$. Due to this periodic potential, the solutions of the time-independent Schrödinger equation

$\displaystyle \left(-\frac{\hbar^2}{2m}\nabla_{\!r}^2+V_{\mathrm{periodic}}(\mathbf{r})\right)\phi(\mathbf{r})=\mathcal{E}\,\phi(\mathbf{r})\;,$ (1.29)

are the so called Bloch waves expressed as [61,62]

$\displaystyle \psi_{\mathbf{k}}\left(\mathbf{r}\right)=u_{\mathbf{k}}\left(\mathbf{r}\right)\exp\left(i\,\mathbf{k}\cdot\mathbf{r}\right)\;.$ (1.30)

The boundary condition

$\displaystyle u_{\mathbf{k}}\left(\mathbf{r}+\mathbf{a}\right)=u_{\mathbf{k}}\left(\mathbf{r}\right)$ (1.31)

must hold, with $ a$ as the lattice constant. Inserting the Bloch waves (1.30) into equation (1.29) the Schrödinger equation can be written as

$\displaystyle \left(\frac{1}{2m}\left(\frac{\hbar}{i}\nabla_{\!r}+\hbar\mathbf{k}\right)^{2}+V_{\mathrm{periodic}}(\mathbf{r})\right)u_{\mathbf{k}}=\mathcal{E}\left(\mathbf{k}\right)u_{\mathbf{k}}\;.$ (1.32)

The so called $ \ensuremath{\ensuremath{\mathitbf{k}}}\cdot\ensuremath{\ensuremath{\mathitbf{p}}}$ method gives approximate solutions to (1.32) [63,64]. Several other methods such as pseudo-potential calculations [65,66], tight binding [67], Hartree-Fock [68], and density functional theory [69] have been proposed to calculate the full band structure within the first Brillouin zone.

If the band structure is already given, $ \ensuremath{\mathcal{E}}\left(\ensuremath{\ensuremath{\mathitbf{k}}}\right)$ can be expanded around the band edge minimum into a Taylor series as

$\displaystyle \mathcal{E}\left(\mathbf{k}\right)=\mathcal{E}\left(0\right)+\sum_{l}\left.\frac{\partial \mathcal{E}}{\partial k_{l}}\right|_{\mathbf{k}=0}k_{l}+\frac{1}{2}\sum_{l,m}\left.\frac{\partial^{2}\mathcal{E}}{\partial k_{l}\,\partial k_{m}}\right|_{\mathbf{k}=0}k_{l}\,k_{m}\;.$ (1.33)

Here, the Taylor series is truncated after the second-order term. The energy minimum is assumed to be at $\mathbf{k}=0$, so the term with the first derivative vanishes. Thus the energy can be expressed as

$\displaystyle \mathcal{E}\left(\mathbf{k}\right)=\mathcal{E}\left(0\right)+\frac{1}{2}\sum_{l,m}\left.\frac{\partial^{2}\mathcal{E}}{\partial k_{l}\,\partial k_{m}}\right|_{\mathbf{k}=0}k_{l}\,k_{m}\;.$ (1.34)

With a comparison between equation (1.34) and the energy dispersion relation

$\displaystyle \mathcal{E}\left(\mathbf{k}\right)=\frac{\hbar^{2}\mathbf{k}^{2}}{2m}\;,$ (1.35)

the inverse effective mass tensor can be written as

$\displaystyle \frac{1}{m_{lm}^{*}}=\frac{1}{\hbar^{2}}\,\frac{\partial^{2}\mathcal{E}\left(\mathbf{k}\right)}{\partial k_{l}\,\partial k_{m}}\;.$ (1.36)

Thus, electrons in a crystal can be treated as free particles with a direction-dependent mass. For silicon, the inverse effective mass tensor reads [44]

$\displaystyle \frac{1}{m_{lm}^{*}}=\begin{pmatrix}\frac{1}{m_x} & 0 & 0\\ 0 & \frac{1}{m_y} & 0\\ 0 & 0 & \frac{1}{m_z}\end{pmatrix}\;.$ (1.37)

Furthermore, the effective mass of cubic semiconductors depends on the crystallographic orientation of the applied field. With the so-called longitudinal mass $m^*_\mathrm{l}$ and the transversal mass $m^*_\mathrm{t}$, the energy dispersion relation of a conduction band valley can be written as

$\displaystyle \mathcal{E}\left(\mathbf{k}\right)=\frac{\hbar^{2}}{2}\left(\frac{k_{\mathrm{l}}^{2}}{m^*_\mathrm{l}}+\frac{k_{\mathrm{t}}^{2}}{m^*_\mathrm{t}}\right)\;.$ (1.38)

Equation (1.38) is a band with ellipsoidal constant energy surfaces as depicted in Fig. 1.7.
Figure 1.7: Energy ellipsoids of the first conduction band within the first Brillouin zone of silicon (after [2]).
\includegraphics[width=0.5\textwidth]{figures/svg/band_structure_si.eps}
These are the six valleys of the first conduction band of silicon.

Due to the truncation of the Taylor series after the second-order term, the effective mass approximation is only valid close to the band minimum, i.e., for low carrier energies. Thus, the assumption of parabolic bands is no longer valid at high fields. With the introduction of a non-parabolicity factor $\alpha$, as proposed by Kane [70,47,71], the parabolic dispersion relation (1.35) can be extended by a first-order correction

$\displaystyle \mathcal{E}\left(1+\alpha\mathcal{E}\right)=\frac{\hbar^{2}}{2}\left(\frac{k_{\mathrm{l}}^{2}}{m^*_\mathrm{l}}+\frac{k_{\mathrm{t}}^{2}}{m^*_\mathrm{t}}\right)\;.$ (1.39)

A direct consequence of the band structure is the density of states, which describes the energetical density of electronic states per volume [1]

$\displaystyle D(\mathcal{E})=\frac{1}{V}\sum_{\mathbf{k}'}\delta\left(\mathcal{E}-\mathcal{E}\left(\mathbf{k}'\right)\right)\;.$ (1.40)

In the parabolic band approximation the density of states for one, two, and three-dimensions reads as

$\displaystyle D_\mathrm{1D}(\mathcal{E})=\frac{2m^{*}}{\pi\hbar}\frac{1}{\sqrt{2m^{*}\mathcal{E}}}\;,\quad D_\mathrm{2D}(\mathcal{E})=\frac{m^{*}}{\pi\hbar^{2}}\;,\quad\textrm{and}\quad D_{\mathrm{3D}}(\mathcal{E})=\frac{m^{*}\sqrt{2m^{*}\mathcal{E}}}{\pi^{2}\hbar^{3}}\;.$ (1.41)

For the non-parabolic dispersion relation (1.39) the density of states can be formed as

$\displaystyle D_\mathrm{1D}(\mathcal{E})=\frac{2m^{*}}{\pi\hbar}\frac{1+2\alpha\mathcal{E}}{\sqrt{2m^{*}\mathcal{E}\left(1+\alpha\mathcal{E}\right)}}\;,\quad D_\mathrm{2D}(\mathcal{E})=\frac{m^{*}}{\pi\hbar^{2}}\left(1+2\alpha\mathcal{E}\right)\;,\quad\textrm{and}$ (1.42)
$\displaystyle D_{\mathrm{3D}}(\mathcal{E})=\frac{m^{*}\sqrt{2m^{*}\mathcal{E}\left(1+\alpha\mathcal{E}\right)}}{\pi^{2}\hbar^{3}}\left(1+2\alpha\mathcal{E}\right)\;.$ (1.43)

Note that in the two-dimensional case the density of states is energy dependent for non-parabolic bands, whereas for a parabolic band structure it is energy independent. Equation (1.39) is valid up to energies of about $1\;\mathrm{eV}$ [1]. Therefore, to model high-field transport, a more sophisticated description has to be found. One possibility, which is used in this work, is to calculate non-parabolicity factors as a post-processing step from Monte Carlo simulations. This procedure will be explained in Section 1.4.3.
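As an illustration of the effect of non-parabolicity on the density of states, the following sketch evaluates the three-dimensional expressions (1.41) and (1.43); the density-of-states mass and the non-parabolicity factor are assumed values for illustration only:

import numpy as np

hbar, m0, q = 1.054571817e-34, 9.1093837015e-31, 1.602176634e-19
m, alpha = 0.26 * m0, 0.5 / q      # assumed DOS mass and non-parabolicity factor

def dos3d_parabolic(E):
    return m * np.sqrt(2.0 * m * E) / (np.pi**2 * hbar**3)

def dos3d_nonparabolic(E):
    return (m * np.sqrt(2.0 * m * E * (1.0 + alpha * E)) / (np.pi**2 * hbar**3)
            * (1.0 + 2.0 * alpha * E))

E = 0.5 * q                        # 0.5 eV expressed in J
print(dos3d_nonparabolic(E) / dos3d_parabolic(E))   # deviation from the parabolic case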

1.2.2 Subbands

In Section 1.1.2, an introduction to the Schrödinger-Poisson loop including inversion layer effects has been given. In real devices with crystallographic orientation $[001]$, the valleys with their longitudinal mass perpendicular to the interface give rise to one ladder of subbands, the so-called unprimed ladder, whereas the other valleys give rise to other, higher-lying ladders, the primed and double primed ladders [72]. It has been pointed out in [47] that, by inserting the mass tensor (1.37) into the Schrödinger equation, the energy dispersion relation for the orientation $[001]$ can be described as

$\displaystyle \mathcal{E}\left(k_x,k_y\right)=\mathcal{E}_\mathrm{i}+\frac{\hbar^{2}}{2}\left(\frac{1}{m_x}k_x^{2}+\frac{1}{m_y}k_y^{2}\right)\;,$ (1.44)

with $z$ as the quantization direction. $\mathcal{E}_\mathrm{i}$ is the bottom energy of the $i$th subband. Equation (1.44) represents constant-energy parabolas above $\mathcal{E}_\mathrm{i}$, the so-called subband ladders. Inserting the corresponding longitudinal or transversal quantization mass of each valley into equation (1.44), one obtains the subband ladders of the unprimed, primed, and double primed valleys, respectively.

Fig. 1.8 shows the subband ladders of the unprimed and primed valleys. Since the double primed and the primed valleys have the same subband energies due to their identical quantization mass, only the primed ladders are visualized. Because the subband energy is inversely proportional to the quantization mass, the energies of the primed ladders are higher than those of the unprimed ladders [73]. The quantization masses of the unprimed and primed valleys are $m^*_\mathrm{l}$ and $m^*_\mathrm{t}$, respectively. The subband occupation at high fields has a strong influence on the carrier transport properties, as demonstrated in Fig. 1.9 and Fig. 1.10.

In Fig. 1.9, the subband occupation as a function of the driving field of an example device is presented. Carriers gain kinetic energy, which results in a re-occupation of higher subband ladders. Due to this re-occupation for high fields, the carrier wavefunctions are shifted within the inversion layer, which inherently affects the overlap integral of the scattering operator. Furthermore, the subband ladder reconfiguration leads to a variation of the spatial distribution function of the electrons, which itself has an impact on the shape of the potential well that forms the inversion channel.

Figure 1.8: Unprimed and primed subband ladders
\includegraphics[width=0.5\textwidth]{figures/svg/subband_ladder.eps}
Figure 1.9: Populations of the first two subbands in the unprimed, primed, and double primed valleys versus lateral field in a UTB SOI MOSFET test device. Relative occupations are shifted to higher subbands in each valley for higher fields.
\includegraphics[width=0.5\textwidth]{rot_figures_left/conferences/sispad2007/pop1_2_color.ps}

The carrier velocity of the first and the second subband is displayed in Fig. 1.10.

Figure 1.10: Velocities of the first and second subband of the unprimed, primed, and double primed valleys as well as the average total velocity versus lateral electric field.
\includegraphics[width=0.5\textwidth]{rot_figures_left/conferences/sispad2007/vel_color.ps}
Due to the different conduction masses in transport direction of the valleys, as well as the strong occupation of the primed valleys in the high-field regime (see Fig. 1.9), the total velocity is lower than the velocities of the unprimed and double primed valleys [50]. A detailed discussion of subbands is given in [47].

1.3 Quantum Transport

An introduction of the most common quantum transport models, namely the non-equilibrium Green's function method, the Wigner Monte Carlo technique, and quantum macroscopic models is given, followed by a discussion of quantum correction models suitable for semi-classical models.


1.3.1 Non-Equilibrium Green's Function Method

The non-equilibrium Green's function (NEGF) is a very powerful technique to describe open systems fully quantum mechanically.

This method has been extensively used in modeling nanoscale transistors and is an efficient way of computing quantum effects in devices, such as subband quantization, tunneling, and quantum reflections. It is exact in the ballistic regime. Recently, scattering processes have been included, which, however, requires considerable computational power. Furthermore, NEGF allows one to study the time evolution of a many-particle open quantum system. The many-particle information about the system is incorporated into self-energies, which are part of the equations of motion for the Green's functions. Green's functions can be calculated from perturbation theory [74]. The NEGF technique is a sophisticated method to determine the properties of a many-particle system both in thermodynamic equilibrium and in non-equilibrium situations. In the sequel, a description of the quantum transport method for the single-particle model is given. For an open system coupled to a reservoir, the Hamiltonian describing the quantum system can be expressed as [31,75]

$\displaystyle \begin{pmatrix}\mathcal{H} & \sigma\\ \sigma^{\dagger} & \mathcal{H}_R\end{pmatrix}\;,$ (1.45)

with $\mathcal{H}$ and $\mathcal{H}_R$ as the Hamilton operators of the channel and of the contact (reservoir), respectively, and $\sigma$ denoting the coupling matrix. Hence, the Schrödinger equation of the channel-contact system can be written as [76]

$\displaystyle \mathcal{E}\begin{pmatrix}\psi\\ \phi\end{pmatrix}=\begin{pmatrix}\mathcal{H} & \sigma\\ \sigma^{\dagger} & \mathcal{H}_R\end{pmatrix}\begin{pmatrix}\psi\\ \phi\end{pmatrix}\;.$ (1.46)

Here, $ \ensuremath{\psi}$ and $ \ensuremath{\phi}$ are the wavefunctions of the channel and the contact, respectively. The Green's function equation is defined as

$\displaystyle \left(\hat{I}\,\mathcal{E}-\mathcal{H}\right)G=\hat{I}$ (1.47)

and therefore the corresponding Green's function to (1.46) can be expressed as [31]

$\displaystyle \begin{pmatrix}G & G_{\mathrm{dR}}\\ G_{\mathrm{Rd}} & G_{\mathrm{RR}}\end{pmatrix}=\begin{pmatrix}\hat{I}\,\mathcal{E}-\mathcal{H} & -\sigma\\ -\sigma^{\dagger} & \hat{I}\,\mathcal{E}-\mathcal{H}_R\end{pmatrix}^{-1}\;,$ (1.48)

where  $ \ensuremath{G_{\mathrm{dR}}}$ and $ \ensuremath{G_{\mathrm{Rd}}}$ describe the coupling between device and the reservoir, whereas $ \ensuremath{G_{\mathrm{RR}}}$ is the Green's function of the reservoir itself. The retarded Green's function $ G$ can be written as

$\displaystyle G=\left(\mathcal{E}\,\hat{I}-\mathcal{H}-\Sigma\left(\mathcal{E}\right)\right)^{-1}\;.$ (1.49)

$ \ensuremath{\Sigma}$ is the energy dependent self-energy and describes the interaction between the device and the reservoir [77,78,79]. Thus, the advantage of the self-energy is that it reduces the Green's function of the reservoir to the dimension of the device Hamiltonian. The self-energy can be obtained as

$\displaystyle \Sigma=\sigma\, g_{\mathrm{R}}\,\sigma^{\dagger}\;,$ (1.50)

where $g_{\mathrm{R}}$ denotes the Green's function of the isolated reservoir, which is usually determined iteratively. The spectral function $A$ can be written as

$\displaystyle A=i\left(G\left(\mathcal{E}\right)-G^{\dagger}\left(\mathcal{E}\right)\right)\;.$ (1.51)

$ \ensuremath{A}/2\pi$ is the matrix form of the density of states  $ D(\ensuremath{\mathcal{E}})$ . Finally, the density matrix can be expressed as

$\displaystyle \rho=\frac{1}{2\pi}\int\limits_{0}^{\infty}f\left(\mathcal{E}-\mathcal{E}_\mathrm{f}\right)A\left(\mathcal{E}\right)\;\mathrm{d}\mathcal{E}\;,$ (1.52)

which provides the charge distribution in the channel. $ \ensuremath{f}$ is the Fermi distribution function as explained in Section 1.1.3 and $ \ensuremath{\mathcal{E}}_\mathrm{f}$ is the Fermi energy.
Assuming that the device is being connected to two contacts with different Fermi energies and hence also different Fermi functions, the current in the ballistic regime can be obtained as [78]

$\displaystyle I=-\frac{2\mathrm{q}}{\mathrm{h}}\int\limits_{0}^{\infty}\mathrm{d}\mathcal{E}\;T(\mathcal{E})\left(f\left(\mathcal{E}_{f1}\right)-f\left(\mathcal{E}_{f2}\right)\right)\;.$ (1.53)

$T(\mathcal{E})$ is the transmission coefficient indicating the probability that an electron with energy $\mathcal{E}$ can travel from the source to the drain. It is defined as

$\displaystyle T(\mathcal{E})=\mathrm{tr}\left(\Gamma_1 A_2\right)=\mathrm{tr}\left(\Gamma_2 A_1\right)$ (1.54)

with

$\displaystyle \ensuremath{\Gamma_{1,2}}=\ensuremath{i}\left(\ensuremath{\Sigma}_{1,2}-\ensuremath{\Sigma}^{\dagger}_{1,2}\right)$ (1.55)

as the coupling of the channel to the reservoir.
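As a concrete, if highly simplified, illustration of equations (1.49)-(1.55), the sketch below computes the ballistic transmission of a one-dimensional tight-binding chain coupled to two semi-infinite leads; this is a textbook toy model, not the device simulator used in this work, and the hopping energy and chain length are arbitrary assumptions:

import numpy as np

t, N = 1.0, 6                       # hopping energy and number of device sites
H = 2*t*np.eye(N) - t*np.eye(N, k=1) - t*np.eye(N, k=-1)

def self_energy(E):
    # analytic lead self-energy of a semi-infinite chain with the same parameters
    ka = np.arccos(np.clip(1.0 - E/(2*t), -1.0, 1.0))
    return -t * np.exp(1j*ka)

for E in (0.5, 2.0, 3.5, 4.5):
    sig = self_energy(E)
    Sigma1, Sigma2 = np.zeros((N, N), complex), np.zeros((N, N), complex)
    Sigma1[0, 0], Sigma2[-1, -1] = sig, sig
    G = np.linalg.inv((E + 1e-9j)*np.eye(N) - H - Sigma1 - Sigma2)   # retarded G, cf. (1.49)
    Gamma1 = 1j*(Sigma1 - Sigma1.conj().T)                           # cf. (1.55)
    Gamma2 = 1j*(Sigma2 - Sigma2.conj().T)
    T = np.trace(Gamma1 @ G @ Gamma2 @ G.conj().T).real              # transmission, cf. (1.54)
    print(E, T)   # ~1 inside the band (0 < E < 4t), ~0 outside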

1.3.2 Wigner Monte Carlo

The MC technique is a well established and accurate numerical method to solve the BTE. Due to the similarities between the Wigner equation written as [80,81]

$\displaystyle \partial_{t}f_{\mathrm{w}}+\mathbf{v}\cdot\nabla_{\!r}f_{\mathrm{w}}+\frac{\mathrm{q}}{\hbar}\,\nabla_{\!r}V_{\mathrm{w}}(\mathbf{r})\cdot\nabla_{\!k}f_{\mathrm{w}}=Q_{\mathrm{coll}}\left(f_{\mathrm{w}}\right)$ (1.56)

and the BTE (see equation (1.28)), it is tempting to solve the Wigner equation with the MC technique [82,83,33] as well. $ \ensuremath{V_w(\ensuremath{\ensuremath{\mathitbf{r}}} )}$ denotes an external Wigner potential. The Wigner function $ \ensuremath{f}_{\mathrm{w}}$ can be derived from the density matrix expressed by the Liouville von Neumann equation using the Wigner-Weyl transformation [84]. With the Fourier transformation of the product of wavefunctions at two points [85], the Wigner function can be expressed as

$\displaystyle f_{\mathrm{w}}(\mathbf{r},\mathbf{k},t)=\frac{1}{\left(2\pi\right)^{3}}\int\psi\left(\mathbf{r}+\frac{\mathbf{r}'}{2},t\right)\psi^{\dagger}\left(\mathbf{r}-\frac{\mathbf{r}'}{2},t\right)\exp\left(-i\,\mathbf{r}'\cdot\mathbf{k}\right)\mathrm{d}^{3}r'\;.$ (1.57)

The Wigner function is a quantum mechanical description in phase space which is, however, no longer positive definite. Hence, it cannot be regarded as a distribution function directly; observables need to be derived from it. In the literature this is known as the negative-sign problem [86,87].
An important feature of this so-called phase-space approach is the ability to express quantum mechanical expectation values in the same way as in classical statistical mechanics.
Furthermore, the Wigner equation can be used as a basis for quantum macroscopic transport models, such as the quantum drift-diffusion or the quantum hydrodynamic model, using the method of moments.
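The negativity of the Wigner function can be seen already for a simple superposition of two Gaussian wave packets. The sketch below evaluates the transform (1.57) in one dimension on a coarse grid (all parameters are arbitrary illustrative choices, and overall normalization constants are ignored):

import numpy as np

x = np.linspace(-10, 10, 256)
psi = np.exp(-(x - 3.0)**2) + np.exp(-(x + 3.0)**2)      # superposition of two packets

def wigner(psi, x, k):
    s = np.linspace(-10, 10, 401)                        # integration variable r'
    ds = s[1] - s[0]
    f = np.empty((len(x), len(k)))
    for i, xi in enumerate(x):
        a = np.interp(xi + s/2, x, psi)                  # psi(x + r'/2)
        b = np.interp(xi - s/2, x, psi)                  # psi*(x - r'/2); psi is real here
        for j, kj in enumerate(k):
            f[i, j] = np.real(np.sum(a * b * np.exp(-1j*kj*s)) * ds) / (2*np.pi)
    return f

k = np.linspace(-3, 3, 61)
fw = wigner(psi, x, k)
print(fw.min() < 0.0)    # True: the Wigner function is not positive definite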


1.3.3 Quantum Macroscopic Models

Quantum macroscopic models can be derived from a fluid dynamical view using the Madelung transformation for the wave function $ \ensuremath{\psi}$ defined as [39]

$\displaystyle \ensuremath{\psi}=\sqrt{\ensuremath{n}}\exp\left(\frac{\ensuremath{i}\ensuremath{m}\ensuremath{S}}{\hbar }\right){\;},$ (1.58)

The Madelung transformation states that the wave function can be decomposed into its amplitude $\sqrt{n}$ and its phase $S$, whereby the amplitude is defined as the square root of the particle density. Here, $i$ denotes the imaginary unit $\sqrt{-1}$, whereas $m$ denotes the carrier mass. Since the electron density of a single state is defined as [39]

$\displaystyle \ensuremath{n}_\mathrm{i}=\mathrm{\vert}\ensuremath{\psi}_\mathrm{\mathrm{i}}\mathrm{\vert}^2{\;},$ (1.59)

and the density is by definition non-negative, the Madelung transformation only makes sense as long as $n>0$ holds [53].

Quantum macroscopic models can be derived from the Wigner equation as well using the method of moments. Since macroscopic transport models based on the BTE are derived using the method of moments (see Section 1.4.1), only the derivation of quantum macroscopic models using the Madelung transformation is pointed out here. Inserting equation (1.58) into equation (1.11), the current density  $ \ensuremath{\mathitbf{J}}$ can be written as

$\displaystyle \ensuremath{\ensuremath{\mathitbf{J}}}=-\mathrm{q}\ensuremath{n}\ensuremath{\ensuremath{\ensuremath{\mathitbf{\nabla_{\!r}}}}}\ensuremath{S}{\;}.$ (1.60)

The phase $ \ensuremath{S}$ of equation (1.58) can be interpreted as a velocity potential. Inserting equation (1.58) into the Schrödinger equation (1.7) yields

$\displaystyle i\hbar\,\partial_{t}\left(\sqrt{n}\,\exp\left(\frac{imS}{\hbar}\right)\right)=\left(-\frac{\hbar^{2}}{2m}\nabla_{\!r}^{2}+V(\mathbf{r})\right)\left(\sqrt{n}\,\exp\left(\frac{imS}{\hbar}\right)\right)\;.$

Dividing this equation by $\exp\left(\frac{imS}{\hbar}\right)$ leads to

$\displaystyle \ensuremath{i}\hbar \left(\ensuremath{\ensuremath{\partial_{t} (}...
...ensuremath{\ensuremath{\mathitbf{\nabla_{\!r}}}}}\ensuremath{S}\right)^2\right.$ (1.61)
$\displaystyle +\left.\frac{\ensuremath{i}\ensuremath{m}}{\hbar }\sqrt{\ensurema...
...nsuremath{V(\ensuremath{\ensuremath{\mathitbf{r}}} )}\sqrt{\ensuremath{n}}{\;}.$    

With the imaginary part, one can obtain the particle conservation equation as

$\displaystyle \ensuremath{\ensuremath{\partial_{t} \ensuremath{n}}} =-2\sqrt{\e...
...th{\mathitbf{\nabla_{\!r}}}}}\cdot{\ensuremath{\ensuremath{\mathitbf{J}}}}{\;},$ (1.62)


$\displaystyle \ensuremath{\ensuremath{\partial_{t} \ensuremath{n}}} +\frac{1}{\...
...emath{\mathitbf{\nabla_{\!r}}}}}{\ensuremath{\ensuremath{\mathitbf{J}}}}=0{\;}.$ (1.63)

The real part yields

$\displaystyle \ensuremath{\ensuremath{\partial_{t} \ensuremath{S}}} =\frac{\hba...
...q}}{\ensuremath{m}}\ensuremath{V(\ensuremath{\ensuremath{\mathitbf{r}}} )}{\;}.$ (1.64)

Taking the gradient of (1.64) and multiplying by $-\mathrm{q}n$, one obtains the quantum conservation equation of the current

$\displaystyle \ensuremath{\ensuremath{\partial_{t} \ensuremath{\ensuremath{\mat...
...athitbf{\nabla}}}^2}\sqrt{\ensuremath{n}}}{\sqrt{\ensuremath{n}}}\right)=0{\;}.$ (1.65)

Equations (1.63) and (1.65) are referred to as the quantum hydrodynamic equations [39]. In the limit ``$\hbar \rightarrow 0$'', the quantum conservation equation (1.65) turns into the classical current conservation equation. The advantage of this method is that in two or three space dimensions fluid-dynamical models are numerically cheaper than the Schrödinger equation. Furthermore, boundary conditions can be applied more easily than in the Schrödinger formulation. However, the dispersive character of the quantum hydrodynamic transport system implies that the solution may develop high-frequency oscillations, which are localized in regions not known a priori. Therefore, numerical simulations with quantum hydrodynamic models require an extremely high number of grid points, which leads to time-consuming computations [88].


1.3.4 Quantum Correction Models

Since transport parameters of modern semiconductor devices, such as the carrier mobility, are strongly influenced by quantum mechanical effects, it is essential to take quantum correction models into account within classical simulations. Several quantum correction models based on different approaches have been proposed, all of which influence the electrostatics of the system.

The quantum correction model modified local density approximation (MLDA) [89] is based on a local correction of the effective density of states  $ \ensuremath{N_\mathrm{c}}$ near the gate oxide as

$\displaystyle N_\mathrm{c}=N_\mathrm{c,0}\left(1-\exp\left(-\frac{\left(z+z_0\right)^{2}}{\chi^{2}\lambda_{\mathrm{thermal}}^{2}}\right)\right)\qquad\textrm{with}\qquad\lambda_{\mathrm{thermal}}=\frac{\hbar}{\sqrt{2m\,k_\mathrm{B}T}}\;.$ (1.66)

$N_\mathrm{c,0}$ is the classical effective density of states, with $\chi$ as a fitting parameter. $z$ is the distance from the interface, $z_0$ is the tunneling distance, and $\lambda_{\mathrm{thermal}}$ denotes the thermal wavelength. The correction term of equation (1.66) can be calculated from the quantum mechanical expression of the particle density as stated in [90]. The advantage of the MLDA procedure is that no solution variable is used in the correction term. Therefore, the model can be implemented as a preprocessing step and has only a minor impact on the overall CPU time [91]. However, this approach is based on the field-free Schrödinger equation, and the method loses its validity for high fields. An improved MLDA technique has been suggested in [92,93], in which a heuristic wavelength parameter has been introduced as

$\displaystyle \lambda'_{\mathrm{thermal}}(z,N_{\mathrm{eff}},T)=\chi(z,N_{\mathrm{eff}},T)\,\lambda_{\mathrm{thermal}}(T)\;,$ (1.67)

where $ \ensuremath{N_{\mathrm{eff}}}$ is defined as the net doping $ \ensuremath{N_{\mathrm{eff}}}=\ensuremath{N}_\mathrm{a}-\ensuremath{N}_\mathrm{d}$ with $ \chi(\ensuremath{z},\ensuremath{N_{\mathrm{eff}}},\ensuremath{T})$ as a fit factor. As pointed out in [92], the improved MLDA can now cover the important case of high fields perpendicular to the interface. The fit parameters have been matched with the results of a self-consistent Schrödinger-Poisson solver. The model is calibrated for bulk MOSFET structures. However, the MLDA method is only valid for devices with one oxide. Therefore, a characterization of quantization in DG MOSFETs is not possible.
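The effect of an MLDA-type correction can be illustrated with a few lines of code; the simple field-free form of the correction factor is assumed here, and the mass and the fitting parameter are placeholders rather than calibrated values:

import numpy as np

hbar, kB, m0 = 1.054571817e-34, 1.380649e-23, 9.1093837015e-31
T, m, chi = 300.0, 0.26 * m0, 1.0           # chi is a fitting parameter (assumed 1 here)
lam = hbar / np.sqrt(2.0 * m * kB * T)       # thermal wavelength, cf. (1.66)

z = np.array([0.5e-9, 1e-9, 2e-9, 5e-9])     # distance from the interface in m
factor = 1.0 - np.exp(-z**2 / (chi * lam)**2)
print(lam, factor)                           # the suppression vanishes far from the interface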

A quantum correction technique to cover such devices is presented in [94]. The idea behind this model is as follows: The strong quantization perpendicular to the interface can be well approximated with an infinite square well potential. The eigenstates within the quantization region are estimated using an analytical approach. This assumption allows to determine a quantum correction potential which modifies the band edge to reproduce the quantum mechanical carrier concentration.

In [95], the correction is carried out by a better modeling of the conduction band edge as

$\displaystyle \mathcal{E}_{\mathrm{c}}=\mathcal{E}_{\mathrm{class}}+\frac{13}{9}\,\mathrm{F}(z)\,\Delta\mathcal{E}_\mathrm{g}\qquad\textrm{with}\qquad\Delta\mathcal{E}_\mathrm{g}\approx\beta\left(\frac{\epsilon_{\mathrm{Si}}}{4\,k_\mathrm{B}T}\right)^{1/3}\left\vert\mathrm{E}_\bot\right\vert^{2/3}\;.$ (1.68)

$\mathcal{E}_{\mathrm{class}}$ is the classical band edge energy, the correction $\mathrm{F}$ is a function of the distance to the interface, and $\mathrm{E}_\bot$ is the electric field perpendicular to the interface. The value of the proportionality factor $\beta$ can be determined from the shift of the long-channel threshold voltage, as explained in [95].

Fig. 1.11 shows the electron concentration of a single gate SOI MOSFET calculated classically, quantum mechanically, and with the quantum correction models MLDA, the model after [95], and the improved modified local density approximation (IMLDA) [92]. A gate voltage of $1\;\mathrm{V}$ has been applied, and the quantum mechanical electron concentration has been calculated using a Schrödinger-Poisson solver [2]. As can be observed, the electron concentration obtained from the IMLDA model fits the quantum mechanical simulation well compared to the other approaches. Therefore, the IMLDA model is used in this work to cover quantum effects in the classical device simulations.

Figure 1.11: The electron concentration of a single gate SOI MOSFET has been calculated classically, quantum-mechanically, together with the quantum correction models MLDA, Van Dort, and the improved MLDA (after [3]).
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/quantum_correction/sg_ccn_1.0V/quantum_correction.ps}

1.4 Semi-Classical Transport

The main focus of this work is set on macroscopic transport models based on the BTE. First, the method to derive higher-order macroscopic models is described followed by a detailed derivation. Since the models must be benchmarked to other solution techniques of the BTE [96,97], a short introduction of the Monte Carlo method and the Spherical Harmonics Expansion approach is presented.


1.4.1 Method of Moments

On an engineering level, a very efficient way to find approximate solutions of the BTE is the method of moments. In order to formulate a set of balance and flux equations coupled with Poisson's equation, one has to multiply the BTE with a set of weight functions and integrate over  $ \ensuremath{\ensuremath{\mathitbf{k}}}$ -space.

An arbitrary number of equations can be derived, where each equation contains information from the next higher moment. Hence, there are always more moments than equations. Therefore, one has to truncate this equation hierarchy in order to obtain a fully defined equation system. The assumption used to close the system, i.e., to express the highest moment by the lower ones, is called the closure relation. The closure relation estimates the information of the higher-order moments and thus determines the accuracy of the system. For instance, in the case of the drift-diffusion model, the electrons are assumed to be in thermal equilibrium with the lattice ($T_n=T_{\mathrm{L}}$) [1]. There exist several theoretical approaches to the closure problem [98], like the maximum entropy principle [99,100,101] in the sense of extended thermodynamics.

The idea of the maximum entropy principle is that a large number of collisions is necessary to relax the carrier energies to their equilibrium, while the momentum, heat flow, and anisotropic stresses relax within a shorter time. Therefore, an intermediate state arises, where the fluid is in its own thermal equilibrium. This can be called partial thermal equilibrium. All transport parameters are zero except for the carrier temperature $ T_\ensuremath{n}$ . Another important assumption is that the entropy density and the entropy flux do not depend on the relative velocity of the electron gas. With the partial thermal equilibrium, closure relations can be found which are exactly those obtained with a shifted Maxwellian. A heated Maxwellian is used here as a closure for the hydrodynamic transport model and by the introduction of the kurtosis, the six moments model will be closed. A detailed description is given in the sequel.

To get physically reasonable equations, the weight functions are chosen as the powers of increasing orders of the momentum. The moments in one, two, and three dimensions, respectively, are defined as

$\displaystyle \ensuremath{x}_{j\ensuremath{d}}(\ensuremath{\ensuremath{\mathitb...
...\ensuremath{\ensuremath{\mathitbf{k}}}_\ensuremath{d}) \rangle \! \rangle}{\;}.$ (1.69)

$x_j(\mathbf{r})$ denotes the macroscopic quantity corresponding to the microscopic quantity $X_j(\mathbf{k})$, where $f_d(\mathbf{r}_d,\mathbf{k}_d,t)$ is the time-dependent distribution function in phase space.

$ \ensuremath{d}$ is linked to the one, two, and three-dimensional electron gas ( $ \ensuremath{d}=1$ , $ \ensuremath{d}=2$ , or $ \ensuremath{d}=3$ ), whereas $ \ensuremath{n}$ represents the carrier density.
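As a simple numerical illustration of such statistical averages, the sketch below evaluates the zeroth- and second-order moments of an equilibrium Maxwell distribution for parabolic bands; the effective mass is an assumed value, and the result reproduces the familiar average thermal energy of $\frac{3}{2}k_\mathrm{B}T$ per carrier:

import numpy as np

kB, T = 1.380649e-23, 300.0
hbar, m0 = 1.054571817e-34, 9.1093837015e-31
m = 0.26 * m0                                   # assumed effective mass

k = np.linspace(1e3, 5e9, 20000)                # radial k grid (1/m)
E = hbar**2 * k**2 / (2.0 * m)                  # parabolic dispersion
f = np.exp(-E / (kB * T))                       # Maxwell distribution (unnormalized)
w = 4.0 * np.pi * k**2                          # isotropic k-space volume element

n_moment = np.trapz(f * w, k)                   # zeroth moment ~ carrier density
E_moment = np.trapz(E * f * w, k)               # energy moment ~ energy density
print(E_moment / n_moment / (kB * T))           # -> 1.5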

For the sake of clarity, during the derivation of macroscopic transport models the dimension indices are neglected. Multiplying the BTE with the even scalar-valued weights $ \ensuremath{X}=\ensuremath{X}(\ensuremath{\ensuremath{\mathitbf{r}}} ,\ensuremath{\ensuremath{\mathitbf{k}}})$ and integrating over $ \ensuremath{\ensuremath{\mathitbf{k}}}$ -space

$\displaystyle \int X\,\partial_{t}f\;\mathrm{d}^{3}k+\int X\,\mathbf{v}\cdot\nabla_{\!r}f\;\mathrm{d}^{3}k+\int X\,\frac{\mathbf{F}}{\hbar}\cdot\nabla_{\!k}f\;\mathrm{d}^{3}k=\langle\!\langle \partial_{t}X \rangle\!\rangle_{\mathrm{coll}}\;,$ (1.70)

yields the general conservation equations. In the following derivations, the distribution  $ \ensuremath{f(\ensuremath{\ensuremath{\mathitbf{r}}} ,\ensuremath{\ensuremath{\mathitbf{k}}},t)}$ , the group velocity  $ \ensuremath{\ensuremath{\mathitbf{v(\ensuremath{\ensuremath{\mathitbf{k}}},\ensuremath{\ensuremath{\mathitbf{r}}} )}}}$ , and the generalized force  $ \ensuremath{\ensuremath{\mathitbf{F(\ensuremath{\ensuremath{\mathitbf{k}}},\ensuremath{\ensuremath{\mathitbf{r}}} )}}}$ are denoted as $ \ensuremath{f}$ , $ \ensuremath{\ensuremath{\mathitbf{v}}}$ , and $ \ensuremath{\ensuremath{\mathitbf{F}}}$ .

The first term on the left side of equation (1.70) leads to

$\displaystyle \int X\,\partial_{t}f\;\mathrm{d}^{3}k=\partial_{t}\,\langle\!\langle X \rangle\!\rangle\;,$ (1.71)

whereas the second term yields

$\displaystyle \ensuremath{\int \ensuremath{X}\ensuremath{\ensuremath{\mathitbf{...
...{\nabla_{\!r}}}}}\ensuremath{X} {\;}{\;}\mathrm{d}^3 \ensuremath{\mathitbf{k}}}$ (1.72)

and the third term

$\displaystyle \ensuremath{\int \ensuremath{X}\ensuremath{\ensuremath{\mathitbf{...
...nsuremath{X}\ensuremath{f} {\;}{\;}\mathrm{d}^3 \ensuremath{\mathitbf{k}}}{\;}.$ (1.73)

Using Gauss' theorem and assuming that all surface integrals over the border of the Brillouin zone are equal to zero [102], the first term on the right side of equation (1.73) vanishes. Inserting $\mathbf{F}=-\nabla_{r}{\cal{H}}$ and $\mathbf{v}=\nabla_{p}{\cal{H}}$ with the Hamilton function ${\cal{H}}$ given as

$\displaystyle {\cal{H}}=\pm\,\mathcal{E}_{\mathrm{kin}}(\mathbf{r},\mathbf{k})+\mathrm{s}_{\alpha}\,\mathrm{q}\,\tilde{\varphi}(\mathbf{r}){\;},$ (1.74)

with $\mathrm{s}_{\alpha}=-1$ for electrons and $\mathrm{s}_{\alpha}=1$ for holes, into equations (1.72) and (1.73), respectively, leads to the BTE expressed by the averages of the even scalar-valued moments

$\displaystyle \partial_{t} \langle \! \langle X \rangle \! \rangle+\nabla_{r}\cdot\langle \! \langle X\,\nabla_{p}{\cal{H}} \rangle \! \rangle-\langle \! \langle \nabla_{p}{\cal{H}}\cdot\nabla_{r} X \rangle \! \rangle+\langle \! \langle \nabla_{r}{\cal{H}}\cdot\nabla_{p} X \rangle \! \rangle=\langle \! \langle \partial_{t} X \rangle \! \rangle_{\mathrm{coll}}{\;}.$ (1.75)

Finally, the equation reads

$\displaystyle \partial_{t} \langle \! \langle X \rangle \! \rangle+\nabla_{r}\cdot\langle \! \langle \mathbf{v}\,X \rangle \! \rangle-\langle \! \langle \mathbf{v}\cdot\nabla_{r} X \rangle \! \rangle\pm\langle \! \langle \nabla_{r}\mathcal{E}_{\mathrm{kin}}\cdot\nabla_{p} X \rangle \! \rangle+\mathrm{s}_{\alpha}\,\mathrm{q}\,\nabla_{r}\tilde{\varphi}\cdot\langle \! \langle \nabla_{p} X \rangle \! \rangle=\langle \! \langle \partial_{t} X \rangle \! \rangle_{\mathrm{coll}}{\;}.$ (1.76)

Furthermore, the BTE for the odd vector-valued moments can be transformed analogously

$\displaystyle \partial_{t} \langle \! \langle \mathbf{X} \rangle \! \rangle+\nabla_{r}\cdot\langle \! \langle \mathbf{v}\otimes\mathbf{X} \rangle \! \rangle-\langle \! \langle \left(\mathbf{v}\cdot\nabla_{r}\right)\mathbf{X} \rangle \! \rangle\pm\langle \! \langle \left(\nabla_{r}\mathcal{E}_{\mathrm{kin}}\cdot\nabla_{p}\right)\mathbf{X} \rangle \! \rangle+\mathrm{s}_{\alpha}\,\mathrm{q}\,\langle \! \langle \nabla_{p}\otimes\mathbf{X} \rangle \! \rangle\cdot\nabla_{r}\tilde{\varphi}$ (1.77)
$\displaystyle =\langle \! \langle \partial_{t} \mathbf{X} \rangle \! \rangle_{\mathrm{coll}}{\;}.$

Equations (1.76) and (1.77) are the starting points for the derivations of the conservation equations and fluxes of macroscopic transport models.
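
As a simple numerical illustration of the method of moments, the following sketch (Python) evaluates the density-normalized average energy as a $\mathbf{k}$-space integral over a Maxwell distribution with an assumed parabolic band; the effective mass, temperature, and grid are illustrative values only and are not taken from this work.

import numpy as np

# Minimal sketch: evaluate a k-space moment of a Maxwell distribution for an
# assumed parabolic band (3D electron gas); all parameter values are illustrative.
kB   = 1.380649e-23      # J/K
hbar = 1.054571817e-34   # J s
m    = 0.26 * 9.109e-31  # assumed effective mass, kg
T    = 300.0             # K

k  = np.linspace(0.0, 5e9, 200001)        # radial k grid, 1/m
dk = k[1] - k[0]
E  = hbar**2 * k**2 / (2.0 * m)           # parabolic dispersion
f  = np.exp(-E / (kB * T))                # Maxwell distribution (unnormalized)
w  = k**2 / (2.0 * np.pi**2)              # 4*pi*k^2 / (2*pi)^3, spin factor omitted

dens  = np.sum(f * w) * dk                # proportional to the carrier density
avg_E = np.sum(E * f * w) * dk / dens     # average energy per carrier
print(avg_E / (kB * T))                   # expect 3/2 for a 3D Maxwellian

For a three-dimensional Maxwellian the printed ratio approaches $3/2$, consistent with the average energy quoted in equation (1.85) below.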

1.4.2 Modeling of the Scattering Operator

In order to get an analytical expression for the right hand side of equations (1.76) and (1.77) several approaches have been suggested in [103,16]. In this work, the macroscopic relaxation time approximation after Bløtekjær [104] is used to approximate the scattering operator of the BTE

$\displaystyle \langle \! \langle \partial_{t} X \rangle \! \rangle_{\mathrm{coll}}=-\frac{\langle \! \langle X \rangle \! \rangle-\langle \! \langle X_\mathrm{0} \rangle \! \rangle}{\tau_{X}\left(f\right)}{\;}.$ (1.78)

$ \ensuremath{{\tau_{\ensuremath{X}}}}\left(\ensuremath{f}\right)$ is the macroscopic relaxation time for the weight function $ \ensuremath{X}$ . $ \ensuremath{\langle \! \langle \ensuremath{X_\mathrm{0}} \rangle \! \rangle}$ is the average weight function in equilibrium. Since the relaxation time $ \ensuremath{{\tau_{\ensuremath{X}}}}\left(\ensuremath{f}\right)$ depends on the distribution function, equation (1.78) is not an approximation. With

$\displaystyle \ensuremath{{\tau_{\ensuremath{X}}}}\neq\ensuremath{{\tau_{\ensuremath{X}}}}\left(\ensuremath{f}\right)$ (1.79)

equation (1.78) turns into the macroscopic relaxation time approximation. The relaxation times then depend only on the moments of the distribution function. For the odd moments, the approximation yields

$\displaystyle \langle \! \langle \partial_{t} \mathbf{X} \rangle \! \rangle_{\mathrm{coll}}=-\frac{n\,\mathbf{x}}{\tau_\mathrm{odd}}{\;},$ (1.80)

and for the even moments one obtains

$\displaystyle \langle \! \langle \partial_{t} X \rangle \! \rangle_{\mathrm{coll}}=-n\,\frac{x-x_\mathrm{0}}{\tau_\mathrm{even}}{\;}.$ (1.81)

The subscripts even and odd indicate whether the corresponding moment is even or odd.


1.4.3 Macroscopic Transport Models

A hierarchy of macroscopic transport models based on the equations (1.76) and (1.77) can be derived using the method of moments described above [105]. The first three even scalar valued moments are defined as the powers of the energy $ \ensuremath{\mathcal{E}}(\ensuremath{\ensuremath{\mathitbf{r}}} ,\ensuremath{\ensuremath{\mathitbf{k}}})$

$\displaystyle \ensuremath{X}^{\mathrm{even}} = \left(\ensuremath{\mathcal{E}}^0,\ensuremath{\mathcal{E}}^1, \ensuremath{\mathcal{E}}^2\right){\;},$ (1.82)

and the first three odd vector valued moments are defined as

$\displaystyle X^{\mathrm{odd}}= \left(\mathbf{p}\,\mathcal{E}^0,\;\mathbf{p}\,\mathcal{E}^1,\;\mathbf{p}\,\mathcal{E}^2\right){\;}.$ (1.83)

In order to obtain the particle balance equation and the current equation, one has to insert the zeroth moment $ \ensuremath{\mathcal{E}}^0$ and the first moment $ \ensuremath{\ensuremath{\mathitbf{p}}}\ensuremath{\mathcal{E}}^0$ into equation (1.76) and (1.77), respectively. While in the particle balance equation the particle current remains as an unknown variable, the particle current equation comprises the average kinetic energy. With a heated Maxwellian and the diffusion approximation the powers of the average energy assuming a parabolic band structure can be expressed by the carrier temperature as

$\displaystyle \langle \! \langle \mathcal{E}^i \rangle \! \rangle^{\mathrm{1D}} =\frac{(2i-1)!!}{2^i}\left(k_\mathrm{B}T_n\right)^i$, $\displaystyle \langle \! \langle \mathcal{E}^i \rangle \! \rangle^{\mathrm{2D}} =i!\left(k_\mathrm{B}T_n\right)^i$, and $\displaystyle \langle \! \langle \mathcal{E}^i \rangle \! \rangle^{\mathrm{3D}} =\frac{(2i+1)!!}{2^i}\left(k_\mathrm{B}T_n\right)^i$ (1.84)

for the one, two, and three-dimensional electron gas, respectively. For instance, the average energy ($ i=1$ ) for the 3D case can be written as

$\displaystyle \ensuremath{\langle \! \langle \ensuremath{\mathcal{E}} \rangle \! \rangle}=\frac{3}{2}k_\ensuremath{\mathrm{B}}\ensuremath{T_\ensuremath{n}}{\;}.$ (1.85)
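
The prefactors in equation (1.84) can be checked directly; the following short sketch (assuming a parabolic band and considering only the three-dimensional case) compares the double-factorial expression with the equivalent Gamma-function form.

from math import gamma

# Check the heated-Maxwellian averages of eq. (1.84) for the 3D electron gas:
# <E^i>/(kB*Tn)^i = (2i+1)!!/2^i, which equals Gamma(i+3/2)/Gamma(3/2) for
# parabolic bands.
def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

for i in range(0, 4):
    analytic = double_factorial(2 * i + 1) / 2**i
    integral = gamma(i + 1.5) / gamma(1.5)
    print(i, analytic, integral)     # both columns agree: 1, 3/2, 15/4, 105/8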

The drift-diffusion model is closed by the assumption of local thermal equilibrium, thus the carrier temperatures are set to the lattice temperature. The energy balance equation is introduced taking the second moment $\mathcal{E}$ into account, where the energy flux remains as an unknown term. The third moment $\mathbf{v}\mathcal{E}$ describes exactly this energy flux. The transport model considering these first four moment equations is called the hydrodynamic transport model [4]. Considering the additional moments $\mathcal{E}^2$ and $\mathbf{p}\mathcal{E}^2$ leads to the second-order temperature balance equation and to the second-order temperature flux. The so-called six moments model is closed by introducing the kurtosis $\beta$, which describes the deviation of the current distribution function from the Maxwell distribution function [106].

The assumptions made during the derivation of the transport models are specified as follows:

Furthermore, the averages of the microscopic quantities are defined as $ \ensuremath{\ensuremath{\ensuremath{w}}_{\ensuremath{i}}}=\ensuremath{\langle \ensuremath{\mathcal{E}}^i \rangle}$ and $ \ensuremath{\ensuremath{\mathitbf{V_i}}}=\ensuremath{\langle \ensuremath{\ensuremath{\mathitbf{v}}}\ensuremath{\mathcal{E}}^i \rangle}.$ In the case of the six moments model, $ i$ is defined in the range $ i{\;}\in[0,2]$ . A detailed derivation and discussion of these models follows in the next section. An important objective here is to point out the model limitations.


1.4.3.1 Drift-Diffusion Transport Model

Inserting the zeroth moment into equation (1.76) yields the particle balance equation

$\displaystyle \underbrace{\partial_{t} \langle \! \langle \mathcal{E}^0 \rangle \! \rangle}_{\mbox{(1)}}+\underbrace{\nabla_{r}\cdot\langle \! \langle \mathbf{v}\,\mathcal{E}^0 \rangle \! \rangle}_{\mbox{(2)}}-\underbrace{\langle \! \langle \mathbf{v}\cdot\nabla_{r} \mathcal{E}^0 \rangle \! \rangle}_{\mbox{(3)}}\pm\underbrace{\langle \! \langle \nabla_{r}\mathcal{E}_{\mathrm{kin}}\cdot\nabla_{p} \mathcal{E}^0 \rangle \! \rangle}_{\mbox{(4)}}+\underbrace{\mathrm{s}_{\alpha}\,\mathrm{q}\,\nabla_{r}\tilde{\varphi}\cdot\langle \! \langle \nabla_{p} \mathcal{E}^0 \rangle \! \rangle}_{\mbox{(5)}}=-R{\;}.$ (1.86)

Since $\mathcal{E}^0$ depends neither on $\mathbf{r}$ nor on $\mathbf{p}$, one can omit the third, the fourth, and the fifth term of equation (1.86) to obtain

$\displaystyle \partial_{t} \left(n\,w_{0}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_\mathrm{0}\right)=-R{\;}.$ (1.87)

Inserting the first moment $\mathbf{p}\,\mathcal{E}^0$ into equation (1.77), the particle flux is obtained. The time derivative terms of the fluxes are neglected, since the relaxation time is in the order of picoseconds, which ensures quasi-stationary behavior even for today's fastest signals [107,20].

$\displaystyle \underbrace{\nabla_{r}\cdot\langle \! \langle \mathbf{v}\otimes\mathbf{p}\,\mathcal{E}^0 \rangle \! \rangle}_{\mbox{(1)}}-\underbrace{\langle \! \langle \left(\mathbf{v}\cdot\nabla_{r}\right)\mathbf{p}\,\mathcal{E}^0 \rangle \! \rangle}_{\mbox{(2)}}\pm\underbrace{\langle \! \langle \left(\nabla_{r}\mathcal{E}_{\mathrm{kin}}\cdot\nabla_{p}\right)\mathbf{p}\,\mathcal{E}^0 \rangle \! \rangle}_{\mbox{(3)}}+\underbrace{\mathrm{s}_{\alpha}\,\mathrm{q}\,\langle \! \langle \nabla_{p}\otimes\left(\mathbf{p}\,\mathcal{E}^0\right) \rangle \! \rangle\cdot\nabla_{r}\tilde{\varphi}}_{\mbox{(4)}}=-\frac{\langle \! \langle \mathbf{p}\,\mathcal{E}^0 \rangle \! \rangle}{\tau_\mathrm{0}}{\;},$ (1.88)

where $ \ensuremath {\tau _\mathrm {0}}$ is the momentum relaxation time. Due to the assumption of an isotropic band structure and in the diffusion limit, the non-diagonal elements of the tensors of equation (1.88) vanish. Hence, the tensor of the first part (1) of equation (1.88) can be approximated as the trace divided by the dimension factors of the system. Multiplying (1) with tensorial non-parabolicity factors $ H_{\ensuremath{i}}$ , one obtains

$\displaystyle \nabla_{r}\cdot\langle \! \langle \mathbf{v}\otimes\mathbf{p}\,\mathcal{E}^0 \rangle \! \rangle\approx A\,\nabla_{r}\left(H_{1}\,n\,w_{1}\right){\;},$ (1.89)

with  $ \ensuremath{A}$ as a dimension factor. $ \ensuremath{A}$ can be calculated considering the dimension of the system and the prefactors of the average energy assuming a parabolic bandstructure and a Maxwell distribution function. For instance, in the case of the three-dimensional electron gas the value of  $ \ensuremath{A}$ can be derived as

$\displaystyle \langle \! \langle \mathbf{v}\cdot\mathbf{p}\,\mathcal{E}^0 \rangle \! \rangle=2\,n\,\langle \mathcal{E} \rangle=2\;\frac{3}{2}\,n\,k_\mathrm{B}T_n{\;}.$ (1.90)

$ \ensuremath{A}$ is equal to $ 2/3$ . The average energy has been considered according to equation (1.85). For the one and two-dimensional electron gas the values are $ \ensuremath{A_{\mathrm{1}}}=2$ and $ \ensuremath{A_{\mathrm{2}}}=1$ , respectively. In the sequel, the non-parabolic factors will be shown using Subband Monte Carlo (SMC) data for the two-dimensional electron gas. $ \ensuremath{m^*_{\ensuremath{n},\ensuremath{p}}}$ represents the effective masses for electrons and holes respectively.

Based on the second assumption that the kinetic energy can be expressed using a product ansatz

$\displaystyle \ensuremath{\mathcal{E}}=\ensuremath{\nu}\ensuremath{\kappa(\ensuremath{\ensuremath{\mathitbf{k}}})}{\;},$ (1.91)

term (2) and (3) of (1.88) vanish. The fourth term of (1.88) can be written as

$\displaystyle \mathrm{s}_{\alpha}\,\mathrm{q}\,\langle \! \langle \nabla_{p}\otimes\left(\mathbf{p}\,\mathcal{E}^0\right) \rangle \! \rangle\cdot\nabla_{r}\tilde{\varphi}=\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\nabla_{r}\tilde{\varphi}{\;}.$ (1.92)

Putting all terms together, the particle flux equation yields

$\displaystyle n\,\mathbf{V}_\mathrm{0}=-\frac{\mu_{0}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{1}\,n\,w_{1}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\nabla_{r}\tilde{\varphi}\right){\;}.$ (1.93)

There, the carrier mobility $ \ensuremath {\ensuremath {\mu }_{\mathrm {0}}}$ is defined as $ \ensuremath{\ensuremath{\mu}_{\mathrm{0}}}=\mathrm{q}\ensuremath{\tau_\mathrm{0}}/\ensuremath{m^*_{\ensuremath{n},\ensuremath{p}}}$ .
Together with Poisson's equation, the drift-diffusion (DD) model can be formulated as

$\displaystyle \partial_{t} \left(n\,w_{0}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_\mathrm{0}\right)=-R{\;},$ (1.94)

$\displaystyle n\,\mathbf{V}_\mathrm{0}=-\frac{\mu_{0}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{1}\,n\,w_{1}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\nabla_{r}\tilde{\varphi}\right){\;}.$ (1.95)

As the closure relation, the local thermal equilibrium approximation has been assumed. The local thermal equilibrium approximation sets the carrier temperatures $ \ensuremath {T_\ensuremath {n}}$ equal to the lattice temperature $ \ensuremath{T_{\mathrm{L}}}$ . Furthermore, with the assumption of a cold Maxwell distribution function, the highest moment $ \ensuremath{\ensuremath{\ensuremath{w}}_{\mathrm{1}}}$ can be expressed as

$\displaystyle \ensuremath{\ensuremath{\ensuremath{w}}_{\mathrm{1}}}^{\mathrm{1D}}=\frac{1}{2}k_\ensuremath{\mathrm{B}}\ensuremath{T_{\mathrm{L}}},$    $\displaystyle  \ensuremath{\ensuremath{\ensuremath{w}}_{\mathrm{1}}}^{\mathrm{2D}}=k_\ensuremath{\mathrm{B}}\ensuremath{T_{\mathrm{L}}}$, and  $\displaystyle  \ensuremath{\ensuremath{\ensuremath{w}}_{\mathrm{1}}}^{\mathrm{3D}}=\frac{3}{2}k_\ensuremath{\mathrm{B}}\ensuremath{T_{\mathrm{L}}}{\;}.$ (1.96)

Due to the diffusion approximation the drift term of the average carrier energy is neglected.
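
To make the structure of the closed drift-diffusion flux explicit, a minimal one-dimensional sketch is given below; it assumes a constant mobility, the Einstein relation $D=\mu k_\mathrm{B}T_\mathrm{L}/\mathrm{q}$, and purely illustrative density and potential profiles, and is not the discretization used in an actual device simulator.

import numpy as np

# Minimal 1D sketch of the drift-diffusion electron current density under local
# thermal equilibrium (Tn = TL).  Grid, mobility, and profiles are placeholders.
q, kB, TL = 1.602e-19, 1.381e-23, 300.0
mu = 0.14                                     # m^2/(V s), assumed constant mobility
D  = mu * kB * TL / q                         # Einstein relation

x   = np.linspace(0.0, 1e-6, 501)             # 1 um device, m
n   = 1e23 * np.exp(-((x - 5e-7) / 2e-7)**2)  # assumed electron density profile, m^-3
phi = -1e5 * x                                # assumed linear potential (E = 1e5 V/m)

E  = -np.gradient(phi, x)                     # electric field
Jn = q * mu * n * E + q * D * np.gradient(n, x)   # drift + diffusion current density
print(Jn[:3])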

1.4.3.2 Energy Transport Model

The Energy Transport (ET) model can be derived by inserting the first four moments $ \ensuremath{\mathcal{E}}^i$ and $ \ensuremath{\ensuremath{\mathitbf{p}}}\ensuremath{\mathcal{E}}^i$ with $ i\in [0,1]$ into equation (1.76) and (1.77), respectively. The energy balance equation can be obtained by the second moment  $ \ensuremath{\mathcal{E}}$

$\displaystyle \partial_{t} \langle \! \langle \mathcal{E} \rangle \! \rangle+\nabla_{r}\cdot\langle \! \langle \mathbf{v}\,\mathcal{E} \rangle \! \rangle+\mathrm{s}_{\alpha}\,\mathrm{q}\,\nabla_{r}\tilde{\varphi}\cdot\langle \! \langle \nabla_{p} \mathcal{E} \rangle \! \rangle=-\frac{\langle \! \langle \mathcal{E} \rangle \! \rangle-\langle \! \langle \mathcal{E}_\mathrm{0} \rangle \! \rangle}{\tau_\mathrm{1}}{\;}.$ (1.97)

After a reformulation, equation (1.97) yields

$\displaystyle \partial_{t} \left(n\,w_{1}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_\mathrm{1}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\mathbf{V}_\mathrm{0}\cdot\nabla_{r}\tilde{\varphi}+n\,\frac{w_{1}-w_{10}}{\tau_\mathrm{1}}=\mathrm{0}{\;}.$ (1.98)

$ \ensuremath{\ensuremath{\ensuremath{w}}_{\mathrm{10}}}$ is the equilibrium case of $ \ensuremath{\ensuremath{\ensuremath{w}}_{\mathrm{1}}}$ , whereas $ \ensuremath{\ensuremath{\mathitbf{V_\mathrm{1}}}}$ is the energy flux, the next higher moment. $ \ensuremath {\tau _\mathrm {1}}$ is known as the energy relaxation time. The energy flux can be derived inserting the third moment $ \ensuremath{\ensuremath{\mathitbf{v}}}\ensuremath{\mathcal{E}}$ into equation (1.77)

$\displaystyle \underbrace{\nabla_{r}\cdot\langle \! \langle \mathbf{v}\otimes\mathbf{p}\,\mathcal{E} \rangle \! \rangle}_{\mbox{(1)}}-\underbrace{\langle \! \langle \left(\mathbf{v}\cdot\nabla_{r}\right)\mathbf{p}\,\mathcal{E} \rangle \! \rangle}_{\mbox{(2)}}\pm\underbrace{\langle \! \langle \left(\nabla_{r}\mathcal{E}_{\mathrm{kin}}\cdot\nabla_{p}\right)\mathbf{p}\,\mathcal{E} \rangle \! \rangle}_{\mbox{(3)}}+\underbrace{\mathrm{s}_{\alpha}\,\mathrm{q}\,\langle \! \langle \nabla_{p}\otimes\left(\mathbf{p}\,\mathcal{E}\right) \rangle \! \rangle\cdot\nabla_{r}\tilde{\varphi}}_{\mbox{(4)}}=-\frac{\langle \! \langle \mathbf{p}\,\mathcal{E} \rangle \! \rangle}{\tau_\mathrm{3}}{\;}.$ (1.99)

The first term on the left side of equation (1.99) can be expressed as

$\displaystyle \nabla_{r}\cdot\langle \! \langle \mathbf{v}\otimes\mathbf{p}\,\mathcal{E} \rangle \! \rangle\approx A\,\nabla_{r}\left(H_{2}\,n\,w_{2}\right){\;}.$ (1.100)

Using the tensorial identity $ \ensuremath{\ensuremath{\ensuremath{\mathitbf{\nabla_{\!x}}}}}\ensuremath{\oti...
...}}}}\ensuremath{\mathrm{h}}\left(\ensuremath{\ensuremath{\mathitbf{x}}} \right)$ , the second term can be rewritten as

$\displaystyle \ensuremath{\langle \! \langle \ensuremath{\ensuremath{\mathitbf{...
...thitbf{\nabla_{\!r}}}}}\ensuremath{\mathcal{E}}\right) \rangle \! \rangle}{\;},$ (1.101)

and the third term as

$\displaystyle \ensuremath{\langle \! \langle \ensuremath{\ensuremath{\ensuremat...
...{\otimes}\ensuremath{\ensuremath{\mathitbf{v}}}\right) \rangle \! \rangle}{\;}.$ (1.102)

The terms (1.101) and (1.102) cancel each other. The fourth term on the left side of (1.99) can be approximated again with the above tensorial identity used in (1.100) as

$\displaystyle \ensuremath{\mathrm{s}_{\alpha}}\mathrm{q}\ensuremath{\langle \en...
...remath{\ensuremath{\ensuremath{\mathitbf{\nabla}}}}\ensuremath{\tilde{\varphi}}$ (1.103)
$\displaystyle =\ensuremath{\mathrm{s}_{\alpha}}\mathrm{q}\ensuremath{n}\ensurem...
...h{\ensuremath{\ensuremath{\mathitbf{\nabla}}}}\ensuremath{\tilde{\varphi}}{\;}.$ (1.104)

Collecting all terms together yields the energy flux

$\displaystyle n\,\mathbf{V}_\mathrm{1}=-\frac{\mu_{1}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{2}\,n\,w_{2}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,w_{1}\left(1+A\,H_{1}\right)\nabla_{r}\tilde{\varphi}\right){\;}.$ (1.105)

The energy flux mobility $ \ensuremath {\ensuremath {\mu }_{\mathrm {1}}}$ is defined as $ \ensuremath{\ensuremath{\mu}_{\mathrm{1}}}=\mathrm{q}\ensuremath{\tau_\mathrm{3}}/\ensuremath{m^*_{\ensuremath{n},\ensuremath{p}}}$ . Summarizing the derivation of the energy balance and the energy flux equation the ET transport model yields

$\displaystyle \partial_{t} \left(n\,w_{0}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_\mathrm{0}\right)=-R{\;},$ (1.106)

$\displaystyle n\,\mathbf{V}_\mathrm{0}=-\frac{\mu_{0}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{1}\,n\,w_{1}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\nabla_{r}\tilde{\varphi}\right){\;},$ (1.107)

$\displaystyle \partial_{t} \left(n\,w_{1}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_\mathrm{1}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\mathbf{V}_\mathrm{0}\cdot\nabla_{r}\tilde{\varphi}+n\,\frac{w_{1}-w_{10}}{\tau_\mathrm{1}}=\mathrm{0}{\;},$ (1.108)

$\displaystyle n\,\mathbf{V}_\mathrm{1}=-\frac{\mu_{1}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{2}\,n\,w_{2}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,w_{1}\left(1+A\,H_{1}\right)\nabla_{r}\tilde{\varphi}\right){\;}.$ (1.109)

In order to close the system, a heated Maxwellian is assumed. The highest moment $ \ensuremath{\ensuremath{\ensuremath{w}}_{\mathrm{2}}}$ for the one, two, and three-dimensional electron gas, respectively, can be written as

$\displaystyle w_{2}^{\mathrm{1D}}=\frac{3}{4}\left(k_\mathrm{B}T_n\right)^2,$ $\displaystyle w_{2}^{\mathrm{2D}}=2\left(k_\mathrm{B}T_n\right)^2$, and $\displaystyle w_{2}^{\mathrm{3D}}=\frac{15}{4}\left(k_\mathrm{B}T_n\right)^2{\;}.$ (1.110)

Note that due to the diffusion approximation convective terms of the form $ \ensuremath{\langle \ensuremath{\ensuremath{\mathitbf{k}}} \rangle}\ensuremath{\otimes}\ensuremath{\langle \ensuremath{\ensuremath{\mathitbf{k}}} \rangle}$ and $ \ensuremath{\langle \ensuremath{\ensuremath{\mathitbf{k}}} \rangle}\cdot\ensuremath{\langle \ensuremath{\ensuremath{\mathitbf{k}}} \rangle}$ are neglected against terms of the form $ \ensuremath{\langle \ensuremath{\ensuremath{\mathitbf{k}}}\ensuremath{\otimes}\ensuremath{\ensuremath{\mathitbf{k}}} \rangle}$ and $ \ensuremath{\langle \ensuremath{\ensuremath{\mathitbf{k}}}\cdot\ensuremath{\ensuremath{\mathitbf{k}}} \rangle}$ . The consequence is that only the thermal energy  $ k_\ensuremath{\mathrm{B}}\ensuremath{T_\ensuremath{n}}$ is considered, whereas the drift energy term of the carrier energy is neglected.


1.4.3.3 Six Moments Transport Model

Adding the two next higher moments to the hydrodynamic transport model, the six moments (SM) transport model can be derived. Using the fourth moment $ \ensuremath{\mathcal{E}}^2$ in equation (1.76), the second-order energy balance equation is expressed as

$\displaystyle \partial_{t} \langle \! \langle \mathcal{E}^2 \rangle \! \rangle+\nabla_{r}\cdot\langle \! \langle \mathbf{v}\,\mathcal{E}^2 \rangle \! \rangle+\mathrm{s}_{\alpha}\,\mathrm{q}\,\nabla_{r}\tilde{\varphi}\cdot\langle \! \langle \nabla_{p} \mathcal{E}^2 \rangle \! \rangle=-\frac{\langle \! \langle \mathcal{E}^2 \rangle \! \rangle-\langle \! \langle \mathcal{E}_\mathrm{0}^2 \rangle \! \rangle}{\tau_\mathrm{2}}{\;}.$ (1.111)

With $ \ensuremath{\ensuremath{\ensuremath{\mathitbf{\nabla_{\!r}}}}}\ensuremath{\mat...
...math{\ensuremath{\ensuremath{\mathitbf{\nabla_{\!r}}}}}\ensuremath{\mathcal{E}}$ , the second-order energy balance equation can be formulated as

$\displaystyle \partial_{t} \left(n\,w_{2}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_\mathrm{2}\right)+2\,\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\mathbf{V}_\mathrm{1}\cdot\nabla_{r}\tilde{\varphi}+n\,\frac{w_{2}-w_{20}}{\tau_\mathrm{2}}=0{\;}.$ (1.112)

The second-order energy flux equation can be obtained inserting the sixth moment $ \ensuremath{\ensuremath{\mathitbf{v}}}\ensuremath{\mathcal{E}}^2$ into equation (1.77)

$\displaystyle \underbrace{\nabla_{r}\cdot\langle \! \langle \mathbf{v}\otimes\mathbf{p}\,\mathcal{E}^2 \rangle \! \rangle}_{\mbox{(1)}}-\underbrace{\langle \! \langle \left(\mathbf{v}\cdot\nabla_{r}\right)\mathbf{p}\,\mathcal{E}^2 \rangle \! \rangle}_{\mbox{(2)}}\pm\underbrace{\langle \! \langle \left(\nabla_{r}\mathcal{E}_{\mathrm{kin}}\cdot\nabla_{p}\right)\mathbf{p}\,\mathcal{E}^2 \rangle \! \rangle}_{\mbox{(3)}}+\underbrace{\mathrm{s}_{\alpha}\,\mathrm{q}\,\langle \! \langle \nabla_{p}\otimes\left(\mathbf{p}\,\mathcal{E}^2\right) \rangle \! \rangle\cdot\nabla_{r}\tilde{\varphi}}_{\mbox{(4)}}=-\frac{\langle \! \langle \mathbf{p}\,\mathcal{E}^2 \rangle \! \rangle}{\tau_\mathrm{4}}{\;}.$ (1.113)

Each term on the left hand side of equation (1.113) is derived as in the case of the energy flux equation. The first term yields

$\displaystyle \nabla_{r}\cdot\langle \! \langle \mathbf{v}\otimes\mathbf{p}\,\mathcal{E}^2 \rangle \! \rangle\approx A\,\nabla_{r}\left(H_{3}\,n\,w_{3}\right){\;},$ (1.114)

while the second and the third term together can be neglected. The fourth term on the left-hand side of equation (1.113) yields

$\displaystyle \mathrm{s}_{\alpha}\,\mathrm{q}\,\langle \! \langle \nabla_{p}\otimes\left(\mathbf{p}\,\mathcal{E}^2\right) \rangle \! \rangle\cdot\nabla_{r}\tilde{\varphi}=\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,w_{2}\left(1+2\,A\,H_{2}\right)\nabla_{r}\tilde{\varphi}{\;}.$ (1.115)

Summarizing all contributions, the second-order energy flux can be written as

$\displaystyle n\,\mathbf{V}_\mathrm{2}=-\frac{\mu_{2}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{3}\,n\,w_{3}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,w_{2}\left(1+2\,A\,H_{2}\right)\nabla_{r}\tilde{\varphi}\right){\;}.$ (1.116)

The second-order energy flux mobility is defined as $ \mathrm{q}\ensuremath{\tau_\mathrm{4}}/\ensuremath{m^*_{\ensuremath{n},\ensuremath{p}}}$ . The SM transport model can be now written as

$\displaystyle \partial_{t} \left(n\,w_{0}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_\mathrm{0}\right)=-R{\;},$ (1.117)

$\displaystyle n\,\mathbf{V}_\mathrm{0}=-\frac{\mu_{0}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{1}\,n\,w_{1}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\nabla_{r}\tilde{\varphi}\right){\;},$ (1.118)

$\displaystyle \partial_{t} \left(n\,w_{1}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_\mathrm{1}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\mathbf{V}_\mathrm{0}\cdot\nabla_{r}\tilde{\varphi}+n\,\frac{w_{1}-w_{10}}{\tau_\mathrm{1}}=\mathrm{0}{\;},$ (1.119)

$\displaystyle n\,\mathbf{V}_\mathrm{1}=-\frac{\mu_{1}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{2}\,n\,w_{2}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,w_{1}\left(1+A\,H_{1}\right)\nabla_{r}\tilde{\varphi}\right){\;},$ (1.120)

$\displaystyle \partial_{t} \left(n\,w_{2}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_\mathrm{2}\right)+2\,\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\mathbf{V}_\mathrm{1}\cdot\nabla_{r}\tilde{\varphi}+n\,\frac{w_{2}-w_{20}}{\tau_\mathrm{2}}=0{\;},$ (1.121)

$\displaystyle n\,\mathbf{V}_\mathrm{2}=-\frac{\mu_{2}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{3}\,n\,w_{3}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,w_{2}\left(1+2\,A\,H_{2}\right)\nabla_{r}\tilde{\varphi}\right){\;}.$ (1.122)

In order to close the six moments model, the kurtosis, which is the deviation of the current distribution function from a heated Maxwellian, is introduced. For the one, two, and three-dimensional electron gas the kurtosis is defined as

$\displaystyle \beta^{\mathrm{1D}}=\frac{1}{3}\frac{w_{2}}{w_{1}^2},$ $\displaystyle \beta^{\mathrm{2D}}=\frac{1}{2}\frac{w_{2}}{w_{1}^2}$, and $\displaystyle \beta^{\mathrm{3D}}=\frac{3}{5}\frac{w_{2}}{w_{1}^2}{\;}.$ (1.123)

The factors $ 1/3$ , $ 1/2$ , and $ 3/5$ in the 1D, 2D, and 3D case are normalization factors, respectively. For parabolic bands and a heated Maxwellian the kurtosis equals unity. In realistic devices the kurtosis is in the range  $ [\mathrm{0.75},\mathrm{3}]$ , which indicates a strong deviation from a heated Maxwellian. This is visualized in Fig. 1.12.
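
A minimal sketch of how the kurtosis of equation (1.123) is evaluated from sampled carrier energies is given below; the energies are drawn from an ideal heated Maxwellian (for parabolic bands a chi-squared distribution with three degrees of freedom), so the result approaches unity by construction.

import numpy as np

# Sketch: 3D kurtosis beta = (3/5)*<E^2>/<E>^2 from sampled energies.
# The assumed carrier temperature and sample size are illustrative only.
rng = np.random.default_rng(0)
kB_Tn = 0.0259                                          # eV, assumed Tn ~ 300 K

E = 0.5 * kB_Tn * rng.chisquare(df=3, size=1_000_000)   # Maxwellian energies
beta = 0.6 * np.mean(E**2) / np.mean(E)**2
print(beta)                                             # ~1 for a heated Maxwellian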
Figure 1.12: Kurtosis for a $ \mathrm {100 nm}$ $ \mathrm {n}^+\mathrm {n}\mathrm {n}^+$ structure calculated with the MC method. In the channel the kurtosis is lower than one, which means that the heated Maxwellian overestimates the carrier distribution function, while the Maxwellian underestimates the carrier distribution in the drain.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/transport/MCdata/Si/kurtosis_nice.eps}

Here, the kurtosis of an $\mathrm{n}^+\mathrm{n}\mathrm{n}^+$ structure calculated with the 3D MC approach is shown. A driving field of ${\mathrm{100}}{\;}{\mathrm{kV/cm}}$ in the middle of the channel has been applied. The kurtosis is equal to unity at the beginning of the device, which means that a heated Maxwellian is a good approximation for the carrier distribution function. In the channel the kurtosis is below unity. Therefore, the Maxwellian overestimates the carrier distribution function. In the drain region, the Maxwellian underestimates the carrier distribution function. A detailed discussion about this deviation of the carrier distribution function from a Maxwellian is given in the next chapter. The closure relation for the six moments model can be finally written as

$\displaystyle w_{3}^{\mathrm{1D}}=\frac{15}{8}\left(k_\mathrm{B}T_n\right)^3\beta^{c},$ $\displaystyle w_{3}^{\mathrm{2D}}=6\left(k_\mathrm{B}T_n\right)^3\beta^{c}$, and $\displaystyle w_{3}^{\mathrm{3D}}=\frac{105}{8}\left(k_\mathrm{B}T_n\right)^3\beta^{c}{\;}.$ (1.124)

$ \ensuremath {c}$ is a fit factor and it has been previously demonstrated [108,109] that a value of $ 2.7$ delivers good results for  $ \ensuremath{\ensuremath{\ensuremath{w}}_{\mathrm{3}}}$ in the source and in the channel regions. This is visible in the left part of Fig. 1.13. Here, the ratio between the sixth moment calculated with MC simulation and the analytical equations (1.124) for different  $ \ensuremath {c}$ in a $ \mathrm {n}^+\mathrm {n}\mathrm {n}^+$ structure is shown.
Figure 1.13: Ratio between the sixth moment obtained from three-dimensional bulk MC simulation and the analytical closure relation (1.124) of the six moments model for different values of  $ \ensuremath {c}$ (see left part). The maximum peak at point B of the ratio as a function of the lattice temperature is shown on the right.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/transport/nin/closure_M6EEE.eps} \includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/transport/MCdata/monju/kur_max.eps}
Figure 1.14: Distribution function at point B for lattice temperatures of $ \mathrm {200 K}$ , $ \mathrm {300 K}$ , and $ \mathrm {400 K}$ . The high energy tail of the carrier distribution function decreases for high lattice temperatures.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/transport/MCdata/monju/distri_TL.eps}
Figure 1.15: The ratio of the sixth moment obtained from two-dimensional Subband Monte Carlo data to the analytical 2D closure relation of the six moments model for different $c$ is presented. As can be observed, $c=2.7$ is the best value for the 2D case as well.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/transport/closure_2D/2D_closure.eps}
As can be observed, a value of $ 2.7$ provides the best result in the source and in the channel region, while the value $ 3.0$ of  $ \ensuremath {c}$ gives better results at the beginning of the drain region. Due to the better modeling of $ \ensuremath{\ensuremath{\ensuremath{w}}_{\mathrm{3}}}$ with $ 2.7$ in the source and in the channel region compared to $ \ensuremath{c}=3.0$ , $ 2.7$ is the exponent of choice.

On the right side of Fig. 1.13 the maximum peak of the ratio in point B (see the left part of Fig. 1.13) is shown as a function of the lattice temperature  $ \ensuremath{T_{\mathrm{L}}}$ . The maximum peak decreases, which means that the closure relation of the six moments model with  $ \ensuremath{c}=2.7$ is improved, especially in the drain region. The origin of this improvement for increasing  $ \ensuremath{T_{\mathrm{L}}}$ is a decrease of the high energy tail of the distribution function as pointed out in Fig. 1.14.

The ratio between the sixth moment and the 2D analytical expression from equation (1.124) as a function of the lower order moments from subband MC simulations through a SOI MOSFET with a channel length of ${\mathrm{100}}{\;}{\mathrm{nm}}$ is shown in Fig. 1.15. As demonstrated, the value $2.7$ provides the best result in the 2D system as well.

All three non-parabolic factors are visualized in Fig. 1.16 using Subband Monte Carlo data. For low energies, the parabolic band approximation is valid, whereas for high-fields the non-parabolicity of the band structure must be taken into account.

Figure 1.16: $ H_{1}\mathrm {,}$ $ H_{2}\mathrm {,}$ and $ H_{3}$ as functions of the energy with an effective field of 950 kV/cm. For low energies, the non-parabolicity factors approach unity. The non-parabolicity factors have been calculated out of Subband Monte Carlo simulations.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/transport/nonparafac_color.eps}
Finally, the derived macroscopic transport models can be generalized into one balance equation

$\displaystyle \partial_{t} \left(n\,w_{i}\right)+\nabla_{r}\cdot\left(n\,\mathbf{V}_i\right)+i\,\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,\mathbf{V}_{i-1}\cdot\nabla_{r}\tilde{\varphi}+n\,\frac{w_{i}-w_{i0}}{\tau_i}=0{\;},$ (1.125)

and one flux equation

$\displaystyle n\,\mathbf{V}_i=-\frac{\mu_{i}}{\mathrm{q}}\left(A\,\nabla_{r}\left(H_{i+1}\,n\,w_{i+1}\right)+\mathrm{s}_{\alpha}\,\mathrm{q}\,n\,w_{i}\left(1+i\,A\,H_{i}\right)\nabla_{r}\tilde{\varphi}\right){\;}.$ (1.126)

For instance, in the case of the six moments model the index $i$ is valid in the range $i{\;}\in[0,2]$, since three conservation and three flux equations are incorporated.

1.4.3.4 Transport Parameter Modeling

It is challenging to model transport parameters such as the mobilities $\mu_{0}$, $\mu_{1}$, $\mu_{2}$, and the relaxation times $\tau_\mathrm{1}$ and $\tau_\mathrm{2}$, since they all depend on the actual shape of the distribution function, on the scattering rates, and on the band structure. They therefore contain information on hot-carrier and non-parabolicity effects. Theoretical models for a characterization of these parameters are often very complicated, and simplified approaches give unsatisfactory results. For engineering purposes empirical models are often a better choice. A common assumption is that the effective carrier mobility is written as

$\displaystyle \mu_{0}^{\mathrm{LISF}}=\mu_{0}^{\mathrm{F}}\left(\mu_{0}^{\mathrm{S}}\left(\mu_{0}^{\mathrm{I}}\left(\mu_{0}^{\mathrm{L}}\right)\right)\right){\;},$ (1.127)

with $ \ensuremath{\ensuremath{\mu}_{\mathrm{0}}}^{\mathrm{LISF}}$ as the mobility influenced by lattice scattering (index $ \mathrm{L}$ ), ionized impurity scattering (index $ \mathrm{I}$ ), surface scattering (index $ \mathrm{S}$ ), and carrier heating (index $ \mathrm{F}$ ). A very simple model to describe $ \ensuremath{\ensuremath{\mu}_{\mathrm{0}}}^{\mathrm{L}}$ is a power law ansatz. Empirical models for the characterization of the impact of ionized impurity and surface scattering on $ \ensuremath {\ensuremath {\mu }_{\mathrm {0}}}$ can be found in [110] and [111], respectively.
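
The nesting in equation (1.127) can be illustrated with a small sketch in which each stage degrades the mobility delivered by the previous one; the functional forms and all parameter values below are generic textbook-style assumptions and are not the specific models of [110] and [111].

# Illustrative sketch of the nested mobility reduction in eq. (1.127); every
# functional form and number is a placeholder assumption.
def mu_L(TL, mu300=0.14, alpha=2.4):
    return mu300 * (300.0 / TL) ** alpha                 # power-law lattice mobility

def mu_I(mu_l, N_dop, mu_min=0.005, N_ref=1e23, gamma=0.7):
    return mu_min + (mu_l - mu_min) / (1.0 + (N_dop / N_ref) ** gamma)

def mu_S(mu_li, E_perp, E_ref=5e7, delta=0.3):
    return mu_li / (1.0 + (E_perp / E_ref) ** delta)     # surface-roughness-like term

def mu_F(mu_lis, E_par, v_sat=1e5, beta=2.0):
    return mu_lis / (1.0 + (mu_lis * E_par / v_sat) ** beta) ** (1.0 / beta)

mu_LISF = mu_F(mu_S(mu_I(mu_L(300.0), 1e24), 1e7), 5e5)
print(mu_LISF)                                           # m^2/(V s)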

In the empirical mobility models the carrier mobilities are characterized by the electric field. However, the mobilities depend on the distribution function and hence on the carrier energy rather than on the electric field, as (for a parabolic band structure)

$\displaystyle \mu_{0}=\frac{\mathrm{q}\,\tau_\mathrm{0}\left(w\right)}{m^*}{\;},$ (1.128)

$\displaystyle \mu_{1}=\frac{\mathrm{q}\,\tau_\mathrm{3}\left(w\right)}{m^*}{\;},$ (1.129)

$\displaystyle \mu_{2}=\frac{\mathrm{q}\,\tau_\mathrm{5}\left(w\right)}{m^*}{\;}.$ (1.130)

However, these simple empirical transport models do not deliver satisfactory results, especially in the high-field regime. Furthermore, a consistent comparison with other methods, such as Monte Carlo simulations, is difficult, because the transport model does not reproduce the Monte Carlo results in the homogeneous case [21].

In [112] a transport parameter model based on homogeneous fullband Monte Carlo tables has been introduced. Here, all higher-order transport parameters are extracted for different doping concentrations and for different driving forces. The transport parameters are then considered in the macroscopic transport models as functions of the average energy. Since all transport parameters are obtained from Monte Carlo simulations, the transport models are free of fit parameters. Macroscopic models based on Monte Carlo data significantly improve upon their counterparts based on empirical models, both in terms of numerical stability and in the agreement with Monte Carlo device simulations, as will be demonstrated in the next chapter.


1.4.4 Monte Carlo Method

The Monte Carlo (MC) method is a statistical approach to solve the BTE [113,114,115,116,117]. The procedure does not aim at solving differential equations as described above, but at following the trajectories of carriers as they move through a device under the influence of a driving field and scattering events [1]. The method is illustrated in Fig. 1.17. The momentum of a particle is set as an initial condition. Pseudo-random numbers define the time of free flight of the particles as well as the scattering events. The simulation converges when the statistical error of the quantities is below a certain limit. Transport parameters like the carrier mobility or higher-order transport parameters can then be extracted.

An advantage of this approach is that all kinds of scattering mechanisms, including for instance optical and acoustic phonon scattering, as well as a general band structure can be modeled very precisely. Thus, the method is often used as a benchmark for computationally less expensive transport models. The disadvantage of this statistical approach is that the error bound of a quantity goes with $1/\sqrt{N}$, with $N$ describing the number of random events. Therefore, to increase the accuracy by a factor $\chi$, the number of scattering events must be increased by a factor $\chi^2$ [118]. The consequence is that the simulation time increases as well. Therefore, for engineering applications, macroscopic transport models based on the method of moments, which are computationally much less expensive, are more suitable than the MC technique.
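
The $1/\sqrt{N}$ behavior can be demonstrated with a few lines of code: the mean of a sampled quantity is estimated for increasing ensemble sizes, and the error multiplied by $\sqrt{N}$ stays of the same order. The sampled quantity is an assumed Maxwellian energy, not actual MC data.

import numpy as np

# Sketch of the 1/sqrt(N) statistical error quoted above.
rng = np.random.default_rng(1)
true_mean = 1.5                                          # <E>/kT of a 3D Maxwellian
for N in (10**3, 10**4, 10**5, 10**6):
    samples = 0.5 * rng.chisquare(df=3, size=N)          # energies in units of kT
    err = abs(samples.mean() - true_mean)
    print(N, err, err * np.sqrt(N))                      # last column stays of the same order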

Figure 1.17: Flowchart of a MC Simulation (after [1])
\includegraphics[width=0.5\textwidth]{figures/svg/MonteCarlo.eps}

1.4.4.1 Free Flight - Self Scattering

As displayed in Fig. 1.17, the free flight is an important part of the MC simulation. The coordinates of a moving particle in phase space with an applied electric field in $ x$ can be written as

$\displaystyle \ensuremath{p_x}\left(\ensuremath{{t}}\right)=\ensuremath{p_x}\left(0\right)-\mathrm{q}\ensuremath{E}_\ensuremath{x}\ensuremath{{t}}{\;},$    
$\displaystyle \ensuremath{p_y}\left(\ensuremath{{t}}\right)=\ensuremath{p_y}(0){\;},$ (1.131)
$\displaystyle \ensuremath{p_z}\left(\ensuremath{{t}}\right)=\ensuremath{p_z}(0){\;},$    
$\displaystyle \ensuremath{x}\left(\ensuremath{{t}}\right)=\ensuremath{x}\left(0...
...t}}\right)-\ensuremath{\mathcal{E}}\left(0\right)}{\mathrm{q}\ensuremath{E}_x},$    
$\displaystyle y\left(t\right)=y(0)+\frac{p_y}{m^*_{n,p}}\,t{\;},$ (1.132)
$\displaystyle z\left(t\right)=z(0)+\frac{p_z}{m^*_{n,p}}\,t{\;}.$

$\mathcal{E}$ denotes the conduction band edge, whereas $m^*_{n,p}$ is the effective mass of the electrons and holes, respectively. The scattering rates $\Gamma$ of the carriers are inversely proportional to the duration of the free flight [22]

$\displaystyle \Gamma\left(\mathcal{E}\right)=\sum_{i}\frac{1}{\tau_i\left(\mathbf{p}\right)}{\;}.$ (1.133)

The sum includes all scattering mechanisms, and the total rate determines the duration of the free flight. The probability that the first collision occurs between $t$ and $t+\mathrm{d}t$ is the scattering rate times the probability that the carrier has not been scattered up to the time $t$ [119]. For a random number generator that produces uniformly distributed numbers $u{\;}\in[0,1]$, the collision times can be expressed as [120]

$\displaystyle \ensuremath{{t}}_{\mathrm{coll}}=-\frac{1}{\ensuremath{\Gamma}_{\mathrm{const}}}\ln\left(1-\ensuremath{u}\right){\;}.$ (1.134)

$ \ensuremath{\Gamma}_{\mathrm{const}}$ denotes a constant scattering rate and $ \ensuremath{u}$ are random numbers. Equation (1.134) is a very simple description of the free flight. In real semiconductor devices the scattering rates are energy dependent. Hence, equation (1.134) must be rewritten to acknowledge this fact. This can be done by introducing the so called self-scattering $ \ensuremath{\Gamma}_{\mathrm{self}}\left(\ensuremath{\mathcal{E}}\right)$ . The $ \ensuremath{\Gamma}_{\mathrm{self}}$ is defined as

$\displaystyle \Gamma_{\mathrm{self}}\left(\mathcal{E}\right)=\Gamma_0-\Gamma\left(\mathcal{E}\right){\;}.$ (1.135)

Note that $\Gamma_{\mathrm{self}}$ is energy dependent. Finally, the total scattering rate ($\Gamma_{\mathrm{self}}\left(\mathcal{E}\right)+\Gamma\left(\mathcal{E}\right)$) remains constant, so the simple equation (1.134) can be applied. It is important that the self-scattering does not influence the trajectories. Hence, the carrier momentum must remain unchanged in a self-scattering event.
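
A minimal sketch of the free-flight generation with self-scattering is given below: flight times are drawn with the constant total rate $\Gamma_0$ according to equation (1.134), and at the end of each flight a real scattering event is accepted with probability $\Gamma(\mathcal{E})/\Gamma_0$; the energy-dependent rate used here is an arbitrary toy model, not a physical scattering model.

import numpy as np

rng = np.random.default_rng(2)

def Gamma(E):                              # assumed toy energy-dependent rate, 1/s
    return 1e13 * (1.0 + E / 0.1)

Gamma0 = Gamma(1.0)                        # constant bound >= Gamma(E) for E <= 1 eV
E = 0.05                                   # current carrier energy, eV (illustrative)

n_real, total_time, N = 0, 0.0, 100000
for _ in range(N):
    total_time += -np.log(1.0 - rng.random()) / Gamma0   # free flight, eq. (1.134)
    if rng.random() < Gamma(E) / Gamma0:                 # real scattering event accepted
        n_real += 1                                      # (momentum update would go here)
    # otherwise: self-scattering, the momentum stays unchanged

print(n_real / N, Gamma(E) / Gamma0)       # acceptance ratio ~ Gamma(E)/Gamma0
print(total_time / N, 1.0 / Gamma0)        # mean flight time ~ 1/Gamma0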

1.4.4.2 Scattering Events

Equations (1.131) and (1.132) determine the free flight of the carriers in an electric field. The Monte Carlo method defines the correct scattering events as follows: After the free flight a new scattering event can be obtained in the range

$\displaystyle \ensuremath{\Gamma}_{\mathrm{before}}^{\mathrm{Total}}\le\ensuremath{u}< \ensuremath{\Gamma}^{\mathrm{Total}}{\;}.$ (1.136)

$ \ensuremath{\Gamma}_{\mathrm{before}}$ characterizes the total scattering rate according to equation (1.133) divided by the self-scattering rate before the scattering event. $ \ensuremath{\Gamma}^{\mathrm{Total}}$ is defined via (1.133) divided by the self-scattering rate, whereas $ \ensuremath{u}$ is a uniform random number between $ \mathrm{0}$ and $ \mathrm{1}$ . The relaxation times can be calculated via Fermi's golden rule [1]

$\displaystyle S(\mathbf{p},\mathbf{p}^{'})=\frac{2\pi}{\hbar}\left\vert H_{\mathbf{p}\mathbf{p}^{'}}\right\vert^2\,\delta\left(\mathcal{E}\left(\mathbf{p}^{'}\right)-\mathcal{E}\left(\mathbf{p}\right)-\Delta\mathcal{E}\right){\;},$ (1.137)

$\displaystyle \frac{1}{\tau\left(\mathbf{p}\right)}=\sum_{\mathbf{p}^{'}}S(\mathbf{p},\mathbf{p}^{'})\left(1-f(\mathbf{p}^{'})\right){\;}.$ (1.138)

(1.138) is the rate at which carriers with a specific momentum $\mathbf{p}_\mathrm{0}$ and spin up scatter to any other state. In the sequel, scattering at the interface, which is an important inversion layer effect, will be discussed on the basis of (1.137) and (1.138).

As pointed out in [22], the scattering events calculated with the random procedure explained above are in very good agreement with the experiment.
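
The selection of the scattering mechanism after a real event can be sketched as follows: a uniform random number is compared against the cumulative normalized rates in the spirit of equation (1.136); the three rates below are arbitrary placeholders for, e.g., acoustic, optical, and impurity scattering.

import numpy as np

# Sketch of mechanism selection via cumulative normalized rates.
rng = np.random.default_rng(3)
rates = np.array([4e12, 1e12, 5e11])          # 1/s, assumed mechanism rates
cum = np.cumsum(rates) / rates.sum()          # normalized cumulative table

counts = np.zeros(len(rates), dtype=int)
for _ in range(100000):
    u = rng.random()
    counts[np.searchsorted(cum, u)] += 1
print(counts / counts.sum(), rates / rates.sum())   # selection frequencies follow the rates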

1.4.4.3 Fundamental Statistics

The Monte Carlo method allows one to evaluate averages of microscopic quantities as defined in equation (1.69). Assuming an ensemble of $N$ independent and identical particles, the estimation of an expected value based on an ensemble average can be expressed as [112,121]

$\displaystyle \langle x \rangle=\frac{1}{N}\sum_{j=1}^{N}w_j\,X\left(\gamma_j\left(t\right)\right){\;},$ (1.139)

where  $ \ensuremath{\langle \ensuremath{x} \rangle}$ is the macroscopic quantity and $ \ensuremath{X}$ is the corresponding microscopic counterpart. $ \ensuremath{w}$ is the statistical weight and $ \ensuremath{\gamma}_j$ denotes the state of the jth carrier. The weight $ \ensuremath{w}$ is defined as

$\displaystyle 0\le\ensuremath{w}_j\le\ensuremath{N}  $ and $\displaystyle   \ensuremath{\sum_{j=1}^{\ensuremath{N}}}\ensuremath{w}_j=\ensuremath{N}{\;}.$ (1.140)
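
A short sketch of the weighted ensemble average of equations (1.139) and (1.140) is given below; the sampled energies and the raw weights are illustrative only.

import numpy as np

# Sketch of the weighted ensemble average: weights are rescaled so that they sum
# to N, and the macroscopic quantity is the weighted mean of the microscopic values.
rng = np.random.default_rng(4)
N = 100000
X = 0.5 * rng.chisquare(df=3, size=N)      # microscopic quantity, e.g. energy in kT units
w = rng.uniform(0.5, 1.5, size=N)          # raw statistical weights (illustrative)
w *= N / w.sum()                           # enforce sum_j w_j = N, eq. (1.140)

x_macro = (w * X).sum() / N                # eq. (1.139)
print(x_macro)                             # ~1.5 for this Maxwellian test sample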

In the Monte Carlo simulator used in this work, the so-called before-scattering method is applied [122]. Here, the averages are calculated at the end of the free flight. Hence, equation (1.139) is evaluated right before the next scattering event. The relaxation times in a Monte Carlo simulator can be obtained as [123]

$\displaystyle \ensuremath{{\tau_i}}=-\ensuremath{n}\frac{\ensuremath{\langle \e...
...eft(\ensuremath{f}\right) {\;}{\;}\mathrm{d}^3 \ensuremath{\mathitbf{k}}}}{\;}.$ (1.141)

$ \ensuremath{\mathcal{E}}_\mathrm{0}$ is the equilibrium energy. Only $ \ensuremath{\mathcal{E}}_\mathrm{0}$ depends on the band structure. For the energy relaxation times, the equilibrium energy $ \ensuremath{\mathcal{E}}_\mathrm{0}$ is defined as [123]

$\displaystyle \ensuremath{\langle \ensuremath{\mathcal{E}}_\mathrm{0} \rangle}=...
...\ensuremath{n}}}}\ensuremath{p}_\mathrm{0}(\ensuremath{T_\ensuremath{n}})}{\;}.$ (1.142)

$ \ensuremath{A}$ is a dimension factor, whereas $ \ensuremath{p}_\mathrm{0}(\ensuremath{T_\ensuremath{n}})$ and $ \ensuremath{p}_\mathrm{1}(\ensuremath{T_\ensuremath{n}})$ are polynomials considering the non-parabolicity of the band structure [123]. The summation index $ i$ runs over all valleys in the material. In the case of silicon the index is in the range of $ \mathrm{1}$ to $ \mathrm{3}$ . In a single valley, the equilibrium energy for three-dimensions can be calculated as

$\displaystyle \ensuremath{\langle \ensuremath{\mathcal{E}}_\mathrm{0}\rangle} =...
...h{T_\ensuremath{n}})}{\ensuremath{p}_\mathrm{1}(\ensuremath{T_\ensuremath{n}})}$ (1.143)

The mobilities for the six moments model can be obtained from the homogeneous macroscopic transport model as

$\displaystyle \mu_{0}^\mathrm{d}=\frac{V_\mathrm{0}}{E}$, $\displaystyle \mu_{1}^\mathrm{d}=\frac{V_\mathrm{1}}{E\,w_{1}\left(1+A\,H_{1}\right)}$, and $\displaystyle \mu_{2}^{\mathrm{d}}=\frac{V_\mathrm{2}}{E\,w_{2}\left(1+2\,A\,H_{2}\right)}{\;}.$ (1.144)

$E$ is the electric field, the index $\mathrm{d}$ denotes the dimension of the electron gas, and the numerators of equation (1.144) contain the odd moments of the BTE as explained in the previous section. Hence, the mobilities can be calculated as a post-processing step.
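
The post-processing step can be sketched in a few lines; all numbers below are placeholders, since $V_0$, $V_1$, $w_1$, $A$, and $H_1$ would be taken from the homogeneous MC run itself.

# Sketch of the post-processing behind eq. (1.144); every value is a placeholder.
kB  = 1.380649e-23
E   = 1.0e6                       # V/m, applied driving field
V0  = 1.2e5                       # m/s, average velocity from the MC run
w1  = 1.5 * kB * 450.0            # J, average energy at an assumed Tn = 450 K
V1  = 2.0e5 * w1                  # J m/s, average energy flux per carrier (assumed)
A, H1 = 2.0 / 3.0, 1.05           # dimension and non-parabolicity factors (assumed)

mu0 = V0 / E
mu1 = V1 / (E * w1 * (1.0 + A * H1))
print(mu0, mu1)                   # m^2/(V s)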


1.4.5 Spherical Harmonics Expansion

The Spherical Harmonics Expansion (SHE) procedure is a numerical method for solving the BTE. It gives approximate deterministic solutions of the BTE by an expansion of the distribution function $f(\mathbf{r},\mathbf{k},t)$ in $\mathbf{k}$-space into spherical harmonic functions $Y_{lm}(\theta,\phi)$ [124]. As will be demonstrated, the SHE procedure reproduces the results obtained from MC quite well at a lower computational effort.

Spherical harmonic functions are defined as [125,126]

$\displaystyle Y_{lm}(\theta,\phi)=\sqrt{\frac{2l+1}{4\pi}\,\frac{(l-m)!}{(l+m)!}}\;P^{m}_{l}(\mathrm{cos}(\theta))\, e^{im\phi}{\;},$ (1.145)

with $P^{m}_{l}(\mathrm{cos}(\theta))$ as the associated Legendre polynomial. The indices $l$ and $m$ are integers with $l\geq 0$ and $-l\leq m\leq l$, respectively. Furthermore, the spherical harmonics are orthogonal,

$\displaystyle \int\limits_{0}^{\pi}\int\limits_{0}^{2\pi}\mathrm{d}{\Omega}{\;}Y^{*}_{l^{'}m^{'}}(\theta,\phi)\,Y_{lm}(\theta,\phi)=\delta_{l^{'}l}{\;}\delta_{m^{'}m}{\;},$ (1.146)

normalized and complex valued [127]. The term with the primed indices in (1.146) is the complex conjugate of $Y_{l^{'}m^{'}}(\theta,\phi)$, whereas $\mathrm{d}\Omega$ is defined as $\mathrm{d}\Omega=\mathrm{sin}{\;}\theta{\;}\mathrm{d}\theta\,\mathrm{d}\phi$.

For instance the spherical harmonic functions $ \ensuremath{Y_{00}}$ , $ Y_{10}$ , $ Y_{11}$ , and $ \ensuremath{Y_{20}}$ can be expressed as

$\displaystyle \hspace*{-0.6cm}Y_{00}=\sqrt{\frac{1}{4\pi}}$,  $\displaystyle Y_{10}=\sqrt{\frac{3}{4\pi}}\,\mathrm{cos}(\theta)$,  $\displaystyle Y_{11}=-\sqrt{\frac{3}{8\pi}}\,\mathrm{sin}(\theta)\, e^{i\phi}$, and $\displaystyle Y_{20}=\sqrt{\frac{5}{16\pi}}\left(3\,\mathrm{cos}^2(\theta)-1\right){\;}.$ (1.147)


$ \ensuremath{Y_{00}}$ is a sphere, while $ Y_{10}$ and $ Y_{11}$ are visualized in Fig. 1.18.
Figure 1.18: $ Y_{10}$ and $ Y_{11}$ in polar coordinates
\includegraphics[width=0.5\textwidth]{figures/simulation/SHE-DD/Y-1-0-spheric.eps} \includegraphics[width=0.5\textwidth]{figures/simulation/SHE-DD/Y-2-0-spheric.eps}

Note that for rotational symmetry in the $ \ensuremath{\phi}$ direction, the spherical harmonic function is reduced to the associated Legendre polynomials.
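
The orthonormality relation (1.146) can be verified numerically for the low-order harmonics of (1.147); the following sketch uses a simple $\theta$-$\phi$ quadrature grid.

import numpy as np

# Numerical check of the orthonormality (1.146) for Y00, Y10, and Y11.
theta = np.linspace(0.0, np.pi, 400)
phi = np.linspace(0.0, 2.0 * np.pi, 800)
TH, PH = np.meshgrid(theta, phi, indexing="ij")
dOmega = np.sin(TH) * (theta[1] - theta[0]) * (phi[1] - phi[0])

Y = {
    (0, 0): np.sqrt(1.0 / (4.0 * np.pi)) * np.ones_like(TH),
    (1, 0): np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(TH),
    (1, 1): -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(TH) * np.exp(1j * PH),
}

for a in Y:
    for b in Y:
        overlap = np.sum(np.conj(Y[a]) * Y[b] * dOmega)
        print(a, b, np.round(overlap.real, 3))   # ~1 for a == b, ~0 otherwise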

The distribution function can be now expanded as

$\displaystyle f(r,k)=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}f_{lm}(r,k)\,Y_{lm}(\theta,\phi){\;}.$ (1.148)

The coefficients  $ \ensuremath{f}_{\ensuremath{l}\ensuremath{m}}(\ensuremath{r},\ensuremath{k})$ are given by

$\displaystyle f_{lm}(r,k)=\int\mathrm{d}\Omega{\;}f(r,k)\,Y^{*}_{lm}(\theta,\phi){\;}.$ (1.149)

The fluxes for a three-dimensional electron gas as defined in (1.83) can be now expressed as

$\displaystyle n\,\mathbf{V}_i=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}\int \mathbf{v}\,\mathcal{E}^i\,f_{lm}(r,k)\,Y_{lm}(\theta,\phi){\;}\mathrm{d}^3 k{\;}.$ (1.150)

The next step is to apply the SHE method to the stationary BTE. For the sake of clarity, the transport direction of the carriers is assumed to be along the $z$ coordinate only, and a parabolic band structure is taken into account. Hence, the expansion (1.148) reduces to

$\displaystyle f(z,k)=\sum_{l=0}^{\infty}f_{l}(z,k)\,P_{l}(\mathrm{cos}(\theta)){\;},$ (1.151)

where the angle $ \ensuremath{\theta}$ is specified by the direction of the electric field and the  $ \ensuremath{P_{\ensuremath{l}}(\mathrm{cos}(\ensuremath{\theta}))}$ are the Legendre polynomials. Before substituting the distribution function with spherical harmonics, a variable transformation from $ \ensuremath{k}$ -space into $ \ensuremath{\mathcal{E}}$ -space is performed. The transformation into the $ \ensuremath{\mathcal{E}}$ -space has many advantages, e.g. in equilibrium, the distribution function is isotropic on equienergy surfaces [127]. Inserting the expansion (1.151) into the BTE, one can obtain the BTE expressed by the SHE [128,129]. The first two lowest order expansions can be written as

$\displaystyle \ensuremath{l}=0 \Rightarrow  $ $\displaystyle  \ensuremath{\ensuremath{\partial_{z} \ensuremath{f}_1}} -\mathr...
...\ensuremath{\ensuremath{\partial_{t} \ensuremath{f}_0}}\right)_{\mathrm{coll}},$ (1.152)

$\displaystyle \ensuremath{l}=1 \Rightarrow  $ $\displaystyle  \ensuremath{\ensuremath{\partial_{z} \ensuremath{f}_0}} -2\ensu...
...uremath{\ensuremath{\partial_{t} \ensuremath{f}_1}}\right)_{\mathrm{coll}}{\;}.$ (1.153)

For $N\rightarrow\infty$, the result is an exact solution of the BTE. In Fig. 1.20, a velocity profile of an $\mathrm {n}^+\mathrm {n}\mathrm {n}^+$ structure calculated with a MC simulation and with SHE simulations taking several Legendre polynomials into account is shown. With just one Legendre polynomial the result deviates from the MC simulation, whereas with at least 9 polynomials both simulations are in good agreement. Therefore, the SHE simulation is a good benchmark alternative to the Monte Carlo technique when enough polynomials are considered.
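
The convergence with the number of Legendre polynomials can be illustrated independently of any device simulation: the sketch below projects an assumed anisotropic angular factor onto Legendre polynomials as in equation (1.151) and shows the truncation error decreasing as more terms are kept.

import numpy as np
from numpy.polynomial.legendre import Legendre

# Sketch of the angular expansion (1.151): f(mu), mu = cos(theta), is an assumed
# anisotropy (not MC data); its Legendre projection is truncated at lmax.
N = 4000
dmu = 2.0 / N
mu = -1.0 + (np.arange(N) + 0.5) * dmu       # midpoint grid on [-1, 1]
f = np.exp(2.0 * mu)                         # assumed anisotropic angular factor

def coeff(l):
    # projection f_l = (2l+1)/2 * int_{-1}^{1} f(mu) P_l(mu) dmu (midpoint rule)
    return (2 * l + 1) / 2.0 * np.sum(f * Legendre.basis(l)(mu)) * dmu

for lmax in (1, 3, 5, 9):
    approx = sum(coeff(l) * Legendre.basis(l)(mu) for l in range(lmax + 1))
    print(lmax, np.max(np.abs(f - approx)))  # maximum truncation error decreases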

The question may arise about the relation of the SHE to macroscopic models as e.g. the drift-diffusion model. The answer to this question will be discussed in the following.

Assuming a homogeneous, stationary system with an applied electric field $ \ensuremath{\mathitbf{E}}$ , the macroscopic relaxation time approximation on the right hand side of the BTE, parabolic bands, and the diffusion approximation [51], the BTE can be written as

$\displaystyle -\mathrm{q}\,\mathbf{E}\cdot\nabla_{p} f=-\frac{f-f_\mathrm{0}}{\tau_\mathrm{0}}{\;}.$ (1.154)

To derive an expression for the anti-symmetric part of the distribution function  $ \ensuremath{f_\mathrm{a}}$ , which is the important part for systems within non-equilibrium conditions, the distribution function  $ \ensuremath{f}$ can be split into a symmetric part  $ \ensuremath{f_\mathrm{s}}$ and  $ \ensuremath{f_\mathrm{a}}$ . Due to the assumption of the diffusion approximation

$\displaystyle \ensuremath{f_\mathrm{s}}\gg\ensuremath{f_\mathrm{a}}{\;},$ (1.155)

which states that the system is not very far away from equilibrium. The equation (1.154) can now be used to derive an expression for the anti-symmetric part $ \ensuremath{f_\mathrm{a}}$  [1]

$\displaystyle f_\mathrm{a}=\mathrm{q}\,\tau_\mathrm{0}\,\frac{\partial f_\mathrm{s}}{\partial\mathcal{E}}\,\vert\mathbf{E}\vert\,\vert\mathbf{v}\vert\,P_{1}(\mathrm{cos}(\theta)){\;}.$ (1.156)

Figure 1.19: A comparison of the low and high-field mobility as a function of the doping in the bulk calculated with the SHE method and the drift-diffusion model. In the SHE simulation, only the first Legendre polynomial has been taken into account.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/SHE-DD/mobility.eps}
Figure 1.20: The velocity profile of an $ \mathrm {n}^+\mathrm {n}\mathrm {n}^+$ structure calculated with a device MC simulation and with a SHE simulator taking 1, 5, 9, and 15  Legendre polynomials (LP) into account.
\includegraphics[width=0.5\textwidth]{rot_figures_left/simulation/transport/nin/velocity_all.eps}

Inserting equation (1.156) into (1.150) the drift term of the drift-diffusion model is obtained.

Hence, taking just the first Legendre polynomial of the SHE, in a homogeneous system within the low-field regime, the results of the SHE are equal to the results from drift-diffusion simulations.

In Fig. 1.19, the low-field and the high-field mobility calculated with a SHE simulator and a standard drift-diffusion model as a function of the doping concentration are shown. In the SHE simulator, only the first Legendre polynomial has been taken into account. The results of the two simulations show a very good agreement in the low-field regime, whereas the assumptions stated above are not valid anymore for high fields. The situation changes when more than one Legendre polynomial is taken into account.

As demonstrated in Fig. 1.20, the SHE method is in very good agreement with MC simulations when at least  $ \mathrm{9}$  Legendre polynomials are considered. Due to the good agreement with MC simulations in short channel devices and to the shorter simulation time of the SHE method compared to MC, the SHE method is used as a reference solution for the derived three-dimensional higher-order macroscopic transport models in the next chapter.

