7.2 Tracing Phase Space – Components from Theory

Where Section 7.1 described the dynamical system due to Boltzmann’s equation using differential equations, an approach using the integral form of Boltzmann’s equation, as introduced in Section 5.6, is provided here. The presented integral equation is approached using the formalism described in Appendix A to approximate solutions. In contrast to the contraction to macroscopic quantities described in Section 7.1, this approach makes direct use of the phase space and its geometry by casting trajectories through it. As already outlined in Section 5.4, trajectories alone are no longer sufficient for an adequate description; a mechanism to change between trajectories is also required. The theory outlined in Section 5.6 is first refined further in order to provide guidelines usable for implementation, reusing components from Section 6.

The continuous model of the evolution of the dynamical system described in Section 5.6 needs only minor adaptations in order to be made accessible to the discrete world of digital computers. The adaptation is briefly outlined to provide a guide of how independent components may be assembled and applied to a dynamical system.

Since Boltzmann’s equation can be cast in the form of an integral equation, it is possible to construct a resolvent series. Recalling Equation 5.51b:

\[
\rho(p,q,t) = \int_a^t \!\! \int e^{-\int_\tau^t \lambda(\xi)\,d\xi}\, S(\bar\gamma,\gamma(\tau))\, \rho(\bar\gamma,\tau)\, dp'\, d\tau + e^{-\int_a^t \lambda(\xi)\,d\xi}\, \rho(a) \tag{7.6}
\]

Proceeding as outlined in Appendix A by recursive insertions leads to:

\[
\int_a^t \!\! \int_a^{t'} \!\! \int \!\! \int
e^{-\int_{t'}^{t} \lambda(\gamma_1(\xi))\,d\xi}\, S(\gamma_2(t'),\gamma_1(t'))\,
e^{-\int_{t''}^{t'} \lambda(\gamma_2(\xi))\,d\xi}\, S(\gamma_3(t''),\gamma_2(t''))\,
e^{-\int_a^{t''} \lambda(\gamma_3(\xi))\,d\xi}\,
\rho(\gamma_3(a),a)\, dp'\, dp''\, dt'\, dt'' \tag{7.7}
\]
It contains two scattering events and hence uses three different trajectories (Definition 46), which are connected at their end points by the condition that they agree in their q component at the corresponding parameter value, while allowing discontinuous transitions in their p components, as already established in Equation 5.47. Recalling it in adapted form, it reads:
\[
\gamma_i(t) = (p, q), \tag{7.8a}
\]
\[
\gamma_{i+1}(t) = (p', q) \tag{7.8b}
\]

Another important fact is that the sequence of trajectories actually moves back in time from the point of interest to the initial point. Thus the algorithms utilizing this approach directly are aptly called backward algorithms [130].

The trajectories encountered here correspond to the integral curves (Definition 63) of the Hamiltonian vector field (Definition 62). Each trajectory thus is a representative of the Hamiltonian flow (Definition 67), which governs the evolution of the system as a whole. It becomes apparent that the solution is obtained by tracing trajectories, which follow the basic structures defining the geometry of the problem domain.

In the spirit of applying a Monte Carlo approach for the evaluation of the integrals, as described in Section 6.6.2, the factors
\[
\frac{\lambda(\gamma_i(t))}{\lambda(\gamma_i(t))}, \tag{7.9}
\]
which leave the value unchanged but allow for an easier interpretation, are introduced. After introduction of these factors and slight reordering the following is obtained:
\[
\int_a^t \!\! \int_a^{t'} \!\! \int \!\! \int
\lambda(\gamma_1(t'))\, e^{-\int_{t'}^{t} \lambda(\gamma_1(\xi))\,d\xi}\,
\frac{S(\gamma_2(t'),\gamma_1(t'))}{\lambda(\gamma_1(t'))}\,
\lambda(\gamma_2(t''))\, e^{-\int_{t''}^{t'} \lambda(\gamma_2(\xi))\,d\xi}\,
\frac{S(\gamma_3(t''),\gamma_2(t''))}{\lambda(\gamma_2(t''))}\,
e^{-\int_a^{t''} \lambda(\gamma_3(\xi))\,d\xi}\,
\rho(\gamma_3(a),a)\, dp'\, dp''\, dt'\, dt'' \tag{7.10}
\]
This identifies the terms of the form
\[
\lambda(\gamma_1(t'))\, e^{-\int_{t'}^{t} \lambda(\gamma_1(\xi))\,d\xi} = p_{\mathrm{exp}}(\lambda(\gamma_1(t')), t') \tag{7.11}
\]
as the probability distribution function corresponding to the exponential distribution (Definition 103). While it is highly tempting to also identify the terms of the form
\[
\frac{S(\gamma_2(t'),\gamma_1(t'))}{\lambda(\gamma_1(t'))} \tag{7.12}
\]
with a probability distribution (Definition 99), this is not the case, since the integration defining λ is not carried out over the corresponding states. However, such a term can easily be introduced recalling Equation 5.43
\[
\int S(p,p',q)\, dp' = \lambda(p,q) \tag{7.13}
\]
and analogously setting
\[
\int S(p,p',q)\, dp = \bar\lambda(p',q). \tag{7.14}
\]
Inserting this expression provides
\[
\frac{\int S(p,p',q)\,dp}{\lambda(\gamma_1(t'))}\,
\frac{S(\gamma_2(t'),\gamma_1(t'))}{\int S(p,p',q)\,dp}
= \frac{\bar\lambda(\gamma_2(t'))}{\lambda(\gamma_1(t'))}\,
\frac{S(\gamma_2(t'),\gamma_1(t'))}{\bar\lambda(\gamma_2(t'))}
= \frac{\bar\lambda(\gamma_2(t'))}{\lambda(\gamma_1(t'))}\, p_{\mathrm{scatt}}(t'). \tag{7.15}
\]
Thus setting
\[
p_{\mathrm{scatt}} = \frac{S(\gamma_2(t'),\gamma_1(t'))}{\bar\lambda(\gamma_2(t'))} \tag{7.16}
\]
corresponds to a random variable of uniform distribution (Definition 102). Application of this to Equation 7.10 yields:
\[
\int_a^t \!\! \int_a^{t'} \!\! \int \!\! \int
p_{\mathrm{exp}}(\lambda(\gamma_1(t')), t')\,
\frac{\bar\lambda(\gamma_2(t'))}{\lambda(\gamma_1(t'))}\, p_{\mathrm{scatt}}(t')\,
p_{\mathrm{exp}}(\lambda(\gamma_2(t'')), t'')\,
\frac{\bar\lambda(\gamma_3(t''))}{\lambda(\gamma_2(t''))}\, p_{\mathrm{scatt}}(t'')\,
e^{-\int_a^{t''} \lambda(\gamma_3(\xi))\,d\xi}\,
\rho(\gamma_3(a),a)\, dp'\, dp''\, dt'\, dt'' \tag{7.17}
\]
The final remaining exponential can be expressed as an additional integral over the parameter again representing an exponential distribution, thus finally obtaining
\[
\int_a^t \!\! \int_a^{t'} \!\! \int_a^{t''} \!\! \int \!\! \int
p_{\mathrm{exp}}(\lambda(\gamma_1(t')), t')\,
\frac{\bar\lambda(\gamma_2(t'))}{\lambda(\gamma_1(t'))}\, p_{\mathrm{scatt}}(t')\,
p_{\mathrm{exp}}(\lambda(\gamma_2(t'')), t'')\,
\frac{\bar\lambda(\gamma_3(t''))}{\lambda(\gamma_2(t''))}\, p_{\mathrm{scatt}}(t'')\,
p_{\mathrm{exp}}(\lambda(\gamma_3(t''')), t''')\,
\rho(\gamma_3(a),a)\, dp'\, dp''\, dt'\, dt''\, dt'''. \tag{7.18}
\]
While this expression is particular to the second term of the iteration, all following terms follow the same pattern and are expressible as a series of multiplications of probabilities along with a corresponding density.

The key components required for the approximation of the described dynamical system can be found from Equation 7.18 to be:

7.2.1 Sections of Trajectories

Trajectories have already been introduced in Section 6.5 under the name of curves, which shall be specialized here for the context of dynamics. While trajectories corresponding to a Hamiltonian flow (Definition 67) fibrate (Definition 39) the whole phase space, only sections are required for the treatment of the dynamical system described by Boltzmann’s equation.

Nevertheless, the sections of the trajectories under investigation are integral curves (Definition 63) of the Hamiltonian vector field (Definition 62) defined on the phase space. Considering the phase space as the co-tangent bundle leads to a split of this vector field into two components, which are still tangential to the curve. However, the two separated components are no longer each a vector field. Where the component corresponding to $\dot q$ may be immediately identified with a vector field corresponding to the velocity, the part representing $\dot p$ is a tensor field (Definition 65) of type $\mathcal{T}^0_1(\mathcal{V})$, thus a field of 1-forms, not a vector field. This field is given by the 1-forms corresponding to the force acting at each point of the manifold. While the nature of the components seems to have changed when projecting to the base manifold structures, both fields remain expressions of the Hamiltonian which defined the symplectic geometry.

The extent of the section of the trajectory traversed corresponds to a random variable using the exponential distribution described in Equation 7.11. A random variable (Definition 98) with the desired properties can be generated from a uniformly distributed random variable by inversion of the distribution function as outlined in Appendix B. Recalling Equation 7.11

\[
\lambda(\gamma)\, e^{-\int_{t'}^{t} \lambda(\gamma(\xi))\,d\xi}, \tag{7.19}
\]
the cumulative distribution function corresponding to this probability density function reads
\[
1 - e^{-\int_{t'}^{t} \lambda(\gamma(\xi))\,d\xi}. \tag{7.20}
\]
Equating this expression with a uniformly distributed random variable $\bar r$, e.g., from the interval $[0;1[$, allows one to formulate
\[
\bar r = 1 - e^{-\int_{t'}^{t} \lambda(\gamma(\xi))\,d\xi}, \tag{7.21}
\]
\[
r = e^{-\int_{t'}^{t} \lambda(\gamma(\xi))\,d\xi}. \tag{7.22}
\]
The new variable $r = 1 - \bar r$ is again a uniformly distributed random variable from the interval $[0;1[$. This expression can readily be inverted to yield
\[
\ln(r) = -\int_{t'}^{t} \lambda(\gamma(\xi))\, d\xi = \Lambda(t') - \Lambda(t). \tag{7.23}
\]

The domain of interest and the quantities associated with it are represented in a discrete manner in the digital computer as described in Section 6.3 and Section 6.4. The discrete nature has repercussions on the validity of the considered trajectory, such that it models only the local flow (Definition 66) instead of the original global flow. The border of validity of any given local trajectory is due to the finite size of the subdivision into cells (Definition 34 and discussed in Section 6.3), which form the base space of the fiber bundle (Definition 40 as implemented in Section 6.4) carrying the quantities which determine the shape of the trajectories.

Since the trajectories are nothing else than curves through phase space, the shape of the trajectories is completely defined by the coefficients $c_i$, which were abstractly defined in Section 6.5. Here, they can be associated with physical quantities: $c_0$ corresponds to the position $q$ of a particle, $c_1$ is linked to $p$, and $c_2$ represents $\dot p$. Thus the geometry of the trajectory is still tightly tied to the entities of the original symplectic structure.

The discrete nature makes it necessary to check whether the trajectory leaves the originating cell for any chosen value of the parameter. The cell boundary therefore serves as an upper limit for how far the trajectory is traversed: the trajectory has to be interrupted in case the parameter carries the new position beyond the initial cell, as already outlined in Section 6.5. Since the trajectories are one-dimensional, it is sufficient to compare the parameter corresponding to the intersection of a trajectory with the current cell’s boundary to the value of the parameter obtained from the random distribution.

7.2.2 Trajectory Hopping

Both descriptions of Boltzmann’s equation as provided in Section 4.9 and Section 5.6 acknowledge the defining nature of the scattering term on the right hand side. Thus the scattering term has to be mapped adequately to the realm of digital computing in order to expect the calculated result to be accurate. Furthermore, the scattering term is, together with the geometry defining Hamiltonian, the point of entry for physical modelling into the mathematical structure. Thus, the inclusion of different scattering mechanisms needs not only to be adequate from a mathematical point of view but also must withstand a physicist’s scrutiny.

The scattering kernel S is therefore modelled as being composed from different scattering mechanisms Si such that

\[
S(\bar\gamma,\gamma) = \sum_i S_i(\bar\gamma,\gamma). \tag{7.24}
\]
Thus, recalling Equation 7.14
\[
\int S(\bar\gamma,\gamma)\, dp = \int \sum_i S_i(\bar\gamma,\gamma)\, dp = \sum_i \int S_i(\bar\gamma,\gamma)\, dp = \sum_i \bar\lambda_i(\bar\gamma) = \bar\lambda(\bar\gamma) \tag{7.25}
\]
and by Equation 7.16, which reads
\[
p_{\mathrm{scatt}} = \frac{S(\bar\gamma,\gamma)}{\bar\lambda(\bar\gamma)} = \sum_i \frac{S_i(\bar\gamma,\gamma)}{\bar\lambda(\bar\gamma)} = \sum_i \frac{\bar\lambda_i(\bar\gamma)}{\bar\lambda(\bar\gamma)}\, \frac{S_i(\bar\gamma,\gamma)}{\bar\lambda_i(\bar\gamma)} = \sum_i \frac{\bar\lambda_i(\bar\gamma)}{\bar\lambda(\bar\gamma)}\, p_{\mathrm{scatt},i} \tag{7.26}
\]
the probabilities for each of the individual scattering mechanisms $p_{\mathrm{scatt},i}$ can be related to the total scattering probability $p_{\mathrm{scatt}}$. This relation also allows the selection of a given scattering mechanism by generating a uniformly distributed random variable $r_{\mathrm{scatt}}$ from the interval $[0;1[$. Since it is possible to uniquely identify the contribution $\bar\lambda_i$ of every scattering mechanism by its index $i$, a scattering mechanism can be selected by choosing an index $i_{\mathrm{scatt}}$ such that the following inequality holds.
\[
\sum_i^{i_{\mathrm{scatt}}} \frac{\bar\lambda_i(\bar\gamma)}{\bar\lambda(\bar\gamma)} \le r_{\mathrm{scatt}} < \sum_i^{i_{\mathrm{scatt}}+1} \frac{\bar\lambda_i(\bar\gamma)}{\bar\lambda(\bar\gamma)} \tag{7.27}
\]
Equivalently, it is possible to forgo the normalization for the selection, since it is only due to a common factor $\bar\lambda$, and to express the inequality in terms of rates as
\[
\sum_i^{i_{\mathrm{scatt}}} \bar\lambda_i(\bar\gamma) \le \lambda_{\mathrm{scatt}} < \sum_i^{i_{\mathrm{scatt}}+1} \bar\lambda_i(\bar\gamma). \tag{7.28}
\]
The continued summation determines the scattering probability $p_{\mathrm{scatt}}$ along with the selection of the scattering mechanism $i_{\mathrm{scatt}}$, which determines the beginning of the new trajectory, by means of the rejection technique as outlined in Appendix B.


Figure 7.1: Schematic illustration of the rejection technique to evaluate scattering mechanisms.

Figure 7.1 provides an illustration of this particular case. At a given point x, a current rate is chosen randomly using a uniform distribution from the interval $[0;1[$ multiplied by an upper bound for all the scattering rates. Then the scattering models are evaluated at this point x and the individual rates thus calculated are summed in order to determine the interval $\bar\lambda_i$ which contains the current rate. The scattering mechanism associated with this interval is then chosen to determine the new state from which the next trajectory will propagate.

During the simulation the scattering models are evaluated repeatedly in order to choose the next state of the electron under consideration. Therefore high performance is among the requirements placed on the evaluation of the collision model.

7.2.3 Bridging Two Worlds

Applications implementing scattering mechanisms encountered in the field of solid state electronics, which change the states of electrons, have been available for quite some time. The foundations for the presented code have been written in plain ANSI C, where all available scattering mechanisms are implemented as individual functions, which are called in sequence. The scattering models require a variable set of parameters, which leads to non-homogeneous interfaces in the functions representing them. To alleviate this to some extent, global variables have been employed, completely eliminating any aspirations of data encapsulation and posing a serious problem for attempts at parallelization to take advantage of multi-core CPUs. The code has the very simple and repetitive structure:

 double sum = 0; 
 double current_rate = generate_random_number(); 
 
 if (A_key == on) 
 { 
   sum = A_rate(state, parameters); 
 
   if (current_rate < sum) 
   { 
    counter->A[state->valley]++; 
    state_after_A(st, p); 
    return; 
   } 
 } 
 if (B_key == on) 
 {     ...    } 
 ...

Extensions to this code are usually accomplished by copy and paste, which is prone to simple mistakes by oversight, such as failing to change the counter which has to be incremented or calling the incorrect function to update the electron’s state.

Furthermore, at times the need arises to calculate the sum of all the scattering models (λtotal  in Figure 7.1), which is accomplished in a different part of the implementation, thus further opening the possibility for inconsistencies between the two code paths.

The decision which models to evaluate is performed strictly at run time and it would require significant, if simple, modification of the code to change this at compile time, thus making highly optimized specializations very cumbersome.

The functions calculating the rates and state transitions, however, have been well tested and verified, so that abandoning them would be wasteful.

Object oriented programming directly offers run time polymorphism by means of virtual inheritance. Unfortunately, current implementations of inheritance use an intrusive approach for new software components and tightly couple a type and the corresponding operations to the super type. In contrast to object-oriented programming, current applications of generic programming are limited to algorithms using statically and homogeneously typed containers, but offer highly flexible, reusable, and optimizable software components.

As can be seen, both programming paradigms offer different points of evaluation. Run time polymorphism based on run time concepts [131] tries to combine the run time modification mechanism of virtual inheritance with compile time flexibility and optimization.

Inheritance in the context of run time polymorphism is used to provide an interface template to model the required concept, where the derived class must provide the implementation of the given interface. The following code snippet

template<typename StateT> struct scatter_facade 
{ 
 typedef StateT state_type; 
 
 struct scattering_concept 
 { 
  virtual ~scattering_concept() {} 
  virtual numeric_type rate(const state_type& input) const = 0; 
  virtual void transition(state_type& input) = 0; 
 }; 
 
 template<typename T> struct scattering_model : 
  scattering_concept 
 { 
  T scattering_instance; 
  scattering_model(const T& x) : scattering_instance(x) {} 
  numeric_type rate(const state_type& input) const; 
  void transition(state_type& input); 
 }; 
 
 boost::shared_ptr<scattering_concept> scattering_object; 
 
 numeric_type rate(const state_type& input) const; 
 void transition(state_type& input); 
 template<typename T> 
 scatter_facade(const T& x) : 
     scattering_object(new scattering_model<T>(x)) {} 
 scatter_facade() {} 
};

therefore introduces a scatter_facade which wraps a scattering_concept part. Virtual inheritance is used to define the necessary interface parts, in this case rate() and transition(), which have to be implemented by any scattering model. In the given example the state_type is still available for explicit parametrization.

To interface with this novel approach, a core structure is implemented which wraps the implementations of the scattering models:

template<typename ParameterType> 
struct scattering_rate_A 
{ 
 ... 
 const ParameterType& parameters; 
 
 scattering_rate_A (const ParameterType& parameters): 
  parameters(parameters) {} 
 
 template <typename StateType> numeric_type 
 operator() (const StateType& state) const 
 { 
  return A_rate(state, parameters); 
 } 
};

By supplying the required parameters at construction time it is possible to homogenize the interface of the operator(). This methodology also allows the continued use of the old data structures in the initial phases of transition, while not being so constrictive as to hamper future developments.

The functions for the state transitions are treated similarly to those for the rate calculation. Both are then fused in a scattering_pack to form the complete scattering model and to ensure consistency of the rate and state transition calculations as can be seen in the following part of code:

template<template<typename> class scattering_rate_type, 
         template<typename> class transition_type, 
         typename parameter_type> 
struct scattering_pack 
{ 
 scattering_rate_type<parameter_type>  rate_calculation; 
 transition_type<parameter_type>  state_transition; 
 
 scattering_pack(const parameter_type& parameters) : 
  rate_calculation(parameters), state_transition(parameters) 
 {} 
 
 template<typename StateType> 
 numeric_type rate(const StateType& state) const 
 { 
  return rate_calculation(state); 
 } 
 
 template<typename StateType> 
 void transition(StateType& state) 
 { 
  state_transition(state); 
 } 
};

The blend of run time and compile time mechanisms allows the storage of all scattering models within a single container, e.g. std::vector, which can be iterated over in order to evaluate them.

typedef std::vector<scatter_facade<state_type> > 
        scatter_container_type; 
 
scatter_container_type scatter_container; 
scatter_container.push_back(a_scattering_model);

For the development of new collision models, easy extensibility, even without recompilation, is also a highly important issue. This approach allows the addition of scattering models at run time and the exposure of an interface to an interpreted language such as, e.g., Python [29].

In case a highly optimized version is desired, the run time container (here the std::vector) may be exchanged for a compile time container, which is also readily available from the GSSE and provides the compiler with further opportunities for optimizations at the expense of run time adaptability.

While the described approach initially increases the burden of implementation slightly, since wrappers must be provided, it gives a transition path to integrate legacy codes into an up-to-date framework while not abandoning the experience associated with them. The invested effort raises the level of abstraction, which in turn increases the benefits obtained from advances in compiler technologies. This inherently allows optimization for several platforms without the massive human effort required by previous approaches.

In this particular case, encapsulating the functions’ reliance on global variables within the wrapping structures greatly facilitates parallelization efforts, which are increasingly important with the continued increase of computing cores per CPU.

Furthermore, the results can easily be verified as code parts are gradually moved to newer implementations, the only stringent requirement being link compatibility with C++. This testing and verification can be taken a step further in case the original implementation is written in ANSI C, due to its high compatibility with C++: it is possible to weave parts of the new implementation into the older code, providing the opportunity to compare not only final results, but all intermediates as well.

Such swift verification of implementations also allows speeding up the steps necessary to validate calculated results against subsequent or contemporary experiments, which should not be neglected in order to keep physical models and their numerical representations strongly rooted in reality.

The correct operation of the run time concepts depends on the faultless operation of the compiler within the rules set down by the C++ language specification [132]. Under this crucial assumption, correct compilation ensures that the code will behave according to the guarantees introduced by the run time concept. While this provides an indication of correctness, it cannot be considered absolute proof, especially since the C++ language and its compilers do not have a special focus on validation as is the case with, e.g., Ada [133][134], which could certainly make a stronger case.