
3.3.1 Krylov Subspace Approximate of the Matrix Exponential Operator

Decomposing a matrix $\boldsymbol{X}$ into $\boldsymbol{X}=\boldsymbol{S}\boldsymbol{B}\boldsymbol{S}^{-1}$ makes it possible to write the matrix exponential of $\boldsymbol{X}$ as
\begin{gather}
e^{\boldsymbol{X}}=\sum_{k=0}^{\infty}\frac{\boldsymbol{X}^k}{k!}=
\sum_{k=0}^{\infty}\frac{\boldsymbol{S}\boldsymbol{B}^k\boldsymbol{S}^{-1}}{k!}=
\boldsymbol{S}e^{\boldsymbol{B}}\boldsymbol{S}^{-1}
\end{gather}
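The identity can be checked numerically. The following sketch uses a hypothetical $2\times 2$ example with $\boldsymbol{B}$ diagonal, and a truncated Taylor series as a stand-in for a production matrix-exponential routine (adequate here because the matrices have small norm):

```python
# Check exp(X) = S exp(B) S^{-1} for a small diagonalizable matrix.
# The 2x2 example and the truncation depth are illustrative choices.

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matexp(X, terms=30):
    """Truncated Taylor series sum_{k<terms} X^k / k!."""
    n = len(X)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = matmul(term, X)
        term = [[t / k for t in row] for row in term]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# X = S B S^{-1} with B = diag(-1, -3), so exp(B) is trivial to form.
S     = [[1.0, 1.0], [0.0, 1.0]]
S_inv = [[1.0, -1.0], [0.0, 1.0]]
B     = [[-1.0, 0.0], [0.0, -3.0]]
X     = matmul(matmul(S, B), S_inv)

lhs = matexp(X)                                # exp(X) directly
rhs = matmul(matmul(S, matexp(B)), S_inv)      # S exp(B) S^{-1}
for i in range(2):
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-10
```

Since $\boldsymbol{X}^k=\boldsymbol{S}\boldsymbol{B}^k\boldsymbol{S}^{-1}$ term by term, the two sides agree up to floating-point rounding.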

A particular decomposition is the Jordan canonical form, which for a matrix $\boldsymbol{X}$ is
\begin{gather}
\boldsymbol{X} = \boldsymbol{E}\begin{pmatrix}
\boldsymbol{J_1} & & \emptyset\\
& \ddots & \\
\emptyset & & \boldsymbol{J_n}
\end{pmatrix}\boldsymbol{E}^{-1},\qquad
\boldsymbol{J_i} = \begin{pmatrix}
\lambda_i & 1 & & \emptyset\\
& \lambda_i & \ddots & \\
& & \ddots & 1\\
\emptyset & & & \lambda_i
\end{pmatrix},
\end{gather}
where $\boldsymbol{J_i}$ are the Jordan blocks and $\lambda_i$ are the eigenvalues of $\boldsymbol{X}$. A Jordan block $\boldsymbol{J_i}$ typically has dimension one. This is the case if the algebraic multiplicity and the geometric multiplicity of $\lambda_i$ are equal. Otherwise the dimension of the Jordan block grows by the difference between the algebraic and the geometric multiplicity. If all Jordan blocks have dimension one, the matrix is said to be non-defective or diagonalizable. Using the Jordan canonical form, the exponential of $\boldsymbol{X}t$ is given by
\begin{gather}
e^{\boldsymbol{X}t}=\boldsymbol{E}\begin{pmatrix}
e^{\boldsymbol{J_1}t} & & \emptyset\\
& \ddots & \\
\emptyset & & e^{\boldsymbol{J_n}t}
\end{pmatrix}\boldsymbol{E}^{-1},\qquad
e^{\boldsymbol{J_i}t}=e^{\lambda_i t}\begin{pmatrix}
1 & t & \tfrac{t^2}{2!} & \cdots\\
& 1 & t & \ddots\\
& & \ddots & t\\
\emptyset & & & 1
\end{pmatrix}.
\end{gather}

One needs to calculate the exponential of the transition rate matrix $\boldsymbol {\Gamma }$ (see (3.19)). Due to the special structure of $\boldsymbol {\Gamma }$, with main diagonal elements $\Gamma_{ii}=-\sum_{j\neq i}\Gamma_{ij}$, it can be shown with Gershgorin's circle theorem that all its eigenvalues lie in the left half-plane and that one eigenvalue is zero (see left side of Fig. 3.8).
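This structure is easy to verify numerically. The sketch below builds a hypothetical 3-state rate matrix with the diagonal convention $\Gamma_{ii}=-\sum_{j\neq i}\Gamma_{ij}$ and checks that every Gershgorin disc has its center on the negative real axis and touches the origin, so all eigenvalues lie in the closed left half-plane:

```python
# Hypothetical 3-state transition rate matrix: off-diagonal entries are
# non-negative rates; each diagonal entry is minus the sum of the other
# entries in its row, so every row sums to zero.

rates = [[0.0, 2.0, 1.0],
         [3.0, 0.0, 4.0],
         [1.0, 2.0, 0.0]]
n = len(rates)

gamma = [row[:] for row in rates]
for i in range(n):
    gamma[i][i] = -sum(rates[i][j] for j in range(n) if j != i)

# Gershgorin: every eigenvalue lies in some disc with center gamma[i][i]
# and radius equal to the sum of |off-diagonal| entries of that row.
# Here center <= 0 and center + radius = 0 exactly, so each disc lies in
# the closed left half-plane and touches the origin.
for i in range(n):
    center = gamma[i][i]
    radius = sum(abs(gamma[i][j]) for j in range(n) if j != i)
    assert center <= 0.0 and center + radius == 0.0
```

The zero row sums also guarantee the zero eigenvalue: the vector of all ones is mapped to zero, consistent with conservation of total probability.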

  
Figure: Eigenvalue spectrum of $\boldsymbol {\Gamma }$ and $\boldsymbol{\hat{\Gamma}}=\boldsymbol{\Gamma}+\max\vert\Gamma_{ii}\vert\boldsymbol{I}$; $\rho $ denotes the spectral radius. The grey disc is Gershgorin's disc in which all eigenvalues lie.
\includegraphics{eigenvalues.eps}

It is known that Krylov subspace methods tend to provide good approximations for the eigenvalues located in the outermost part of the spectrum [98]. Since all eigenvalues of $\boldsymbol {\Gamma }$ lie in the left half-plane, the eigenvalues which dominate the exponential function, $\lambda_{\text{max}}$, are located in the inner part of the spectrum. Thus the eigenvalue spectrum of $\boldsymbol {\Gamma }$ has to be transformed so that the eigenvalues which dominate the exponential function coincide with those which are filtered out by the Krylov subspace method. An inversion is not possible since $\boldsymbol {\Gamma }$ is singular. One can, however, shift the eigenvalue spectrum to the right with $\boldsymbol{\Gamma}t=\boldsymbol{\hat{\Gamma}}t-\max\vert\Gamma_{ii}\vert t\boldsymbol{I}$ (see right side of Fig. 3.8). In addition, the upper limit of the spectral radius derived from Gershgorin's disc theorem is reduced by a factor of two. Since the identity matrix commutes with $\boldsymbol{\hat{\Gamma}}$, that is $\boldsymbol{\hat{\Gamma}I}=\boldsymbol{I\hat{\Gamma}}$, one may write
\begin{gather}
e^{\boldsymbol{\Gamma}t}=e^{\boldsymbol{\hat{\Gamma}}t-\max\vert\Gamma_{ii}\vert t\boldsymbol{I}}=
e^{\boldsymbol{\hat{\Gamma}}t}e^{-\max\vert\Gamma_{ii}\vert t\boldsymbol{I}}=
e^{\boldsymbol{\hat{\Gamma}}t}e^{-\max\vert\Gamma_{ii}\vert t}.
\end{gather}
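The shift identity can be checked directly. The sketch below uses a hypothetical 2-state rate matrix and a truncated Taylor series for the matrix exponential; the shift $\max\vert\Gamma_{ii}\vert$ moves both eigenvalues into the right half of the Gershgorin disc while only rescaling the exponential by a scalar:

```python
# Check exp(Gamma t) = exp(Gamma_hat t) * exp(-max|Gamma_ii| t)
# with Gamma_hat = Gamma + max|Gamma_ii| I, for an illustrative
# 2-state rate matrix (columns sum to zero).
import math

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def matexp(X, terms=40):
    """Truncated Taylor series of exp(X)."""
    n = len(X)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[t / k for t in row] for row in matmul(term, X)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

gamma = [[-2.0, 1.0], [2.0, -1.0]]
t = 0.5
shift = max(abs(gamma[i][i]) for i in range(2))            # = 2.0
gamma_hat = [[gamma[i][j] + (shift if i == j else 0.0)
              for j in range(2)] for i in range(2)]

lhs = matexp([[g * t for g in row] for row in gamma])
scale = math.exp(-shift * t)                               # scalar factor
rhs = [[x * scale for x in row]
       for row in matexp([[g * t for g in row] for row in gamma_hat])]
for i in range(2):
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-10
```

Because $\max\vert\Gamma_{ii}\vert t\,\boldsymbol{I}$ commutes with everything, the scalar factor can be pulled out exactly; the two sides differ only by rounding.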

The transformed matrix $\boldsymbol{\hat{\Gamma}}$ has the eigenvalues which dominate the exponential function in the outermost part of its spectrum. Thus, a Krylov subspace method will give good results even for subspaces of small dimension.

The objective is the computation of an approximation of the form $a_{m-1}(\boldsymbol{\hat{\Gamma}}t)\boldsymbol{p}$ to the matrix exponential operation $e^{\boldsymbol{\hat{\Gamma}}t}\boldsymbol{p}$, where $a_{m-1}(\boldsymbol{\hat{\Gamma}}t)$ is a polynomial of degree m-1 in $\boldsymbol{\hat{\Gamma}}t$, which is a linear combination of the vectors $\boldsymbol{p},\boldsymbol{\hat{\Gamma}}t\boldsymbol{p},\ldots,
(\boldsymbol{\hat{\Gamma}}t)^{m-1}\boldsymbol{p}$, and thus is an element of the Krylov subspace [98]
\begin{gather}
K_m(\boldsymbol{\hat{\Gamma}}t,\boldsymbol{p}) \equiv
\text{span}\left\{\boldsymbol{p},\boldsymbol{\hat{\Gamma}}t\boldsymbol{p},
\ldots,(\boldsymbol{\hat{\Gamma}}t)^{m-1}\boldsymbol{p}\right\}.
\end{gather}
Constructing an orthonormal basis $\boldsymbol{B_m}=[\boldsymbol{b_1},\boldsymbol{b_2},\ldots,\boldsymbol{b_m}]$ of the Krylov subspace, and choosing $\boldsymbol{b_1}=\boldsymbol{p}/\Vert\boldsymbol{p}\Vert _2$, one may write using the identity $\boldsymbol{B_m}\boldsymbol{B_m}^T=\boldsymbol{I}$
\begin{gather}
e^{\boldsymbol{\hat{\Gamma}}t}\boldsymbol{p}=\boldsymbol{B_m}\boldsymbol{B_m}^Te^{\boldsymbol{\hat{\Gamma}}t}\boldsymbol{B_m}\boldsymbol{B_m}^T\boldsymbol{p}=
\boldsymbol{B_m}\left[\boldsymbol{B_m}^Te^{\boldsymbol{\hat{\Gamma}}t}\boldsymbol{B_m}\right]\Vert\boldsymbol{p}\Vert _2\boldsymbol{e_1},
\end{gather}
where $\boldsymbol{e_1}$ is the unit vector $(1,0,0,\ldots,0)^T$. The purpose of the Krylov subspace approach, namely to project the exponential of a large matrix approximately onto a small Krylov subspace, is accomplished by approximating $\boldsymbol{B_m}^Te^{\boldsymbol{\hat{\Gamma}}t}\boldsymbol{B_m}$ with $e^{\boldsymbol{B_m}^T\boldsymbol{\hat{\Gamma}}t\boldsymbol{B_m}}=e^{\boldsymbol{H_m}t}$. This gives the approximation
\begin{gather}e^{\boldsymbol{\hat{\Gamma}}t}\boldsymbol{p} \approx
\Vert\boldsymbol{p}\Vert _2\boldsymbol{B_m}e^{\boldsymbol{H_m}t}\boldsymbol{e_1},
\end{gather}
which still involves the evaluation of the exponential of a matrix, but this time of small dimension m and of a particular structure, namely upper Hessenberg. $\boldsymbol{B_m}$ and $\boldsymbol{H_m}$ can be computed with the Arnoldi algorithm (see Appendix H), which is a modified Gram-Schmidt orthogonalization. Y. Saad [98] shows that the following a priori error bound exists:
\begin{gather}
\left\Vert e^{\boldsymbol{\hat{\Gamma}}t}\boldsymbol{p}-\Vert\boldsymbol{p}\Vert _2\boldsymbol{B_m}e^{\boldsymbol{H_m}t}\boldsymbol{e_1}\right\Vert _2 \le
2\Vert\boldsymbol{p}\Vert _2\frac{(\Vert\boldsymbol{\hat{\Gamma}}t\Vert _2)^m
e^{\Vert\boldsymbol{\hat{\Gamma}}t\Vert _2}}{m!}
\end{gather}
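The whole procedure can be sketched compactly. The example below is hypothetical: a $4\times 4$ matrix stands in for $\boldsymbol{\hat{\Gamma}}t$, $m=3$, and a truncated Taylor series replaces a production matrix-exponential routine both for the small Hessenberg matrix and for the full-matrix reference:

```python
# Krylov approximation exp(A) p ~= ||p|| B_m exp(H_m) e_1, with B_m and
# H_m from the Arnoldi algorithm (modified Gram-Schmidt). A plays the
# role of Gamma_hat * t; the matrix, t and m are illustrative choices.
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matexp(X, terms=40):
    """Truncated Taylor series of exp(X)."""
    n = len(X)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][l] * X[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def arnoldi(A, p, m):
    """Return basis vectors b_1..b_m (as rows), H_m, and ||p||_2."""
    beta = math.sqrt(sum(x * x for x in p))
    B = [[x / beta for x in p]]                  # b_1 = p / ||p||
    H = [[0.0] * m for _ in range(m)]
    for j in range(m):
        w = matvec(A, B[j])
        for i in range(j + 1):                   # modified Gram-Schmidt
            H[i][j] = sum(wi * bi for wi, bi in zip(w, B[i]))
            w = [wi - H[i][j] * bi for wi, bi in zip(w, B[i])]
        if j + 1 < m:
            H[j + 1][j] = math.sqrt(sum(x * x for x in w))
            B.append([x / H[j + 1][j] for x in w])
    return B, H, beta

A = [[-1.0, 0.2, 0.0, 0.1],      # hypothetical Gamma_hat * t
     [ 0.3, -0.5, 0.2, 0.0],
     [ 0.0, 0.1, -0.8, 0.3],
     [ 0.2, 0.0, 0.4, -0.6]]
t = 0.3
At = [[a * t for a in row] for row in A]
p = [1.0, 0.0, 0.0, 0.0]
m = 3

exact = matvec(matexp(At), p)                    # reference solution
B, H, beta = arnoldi(At, p, m)
eHm = matexp(H)                                  # small m x m exponential
y = [beta * eHm[i][0] for i in range(m)]         # ||p|| exp(H_m) e_1
approx = [sum(y[i] * B[i][k] for i in range(m)) for k in range(4)]

err = max(abs(a, ) if False else abs(a - e) for a, e in zip(approx, exact))
assert err < 0.05                                # within Saad's bound
```

Even with $m=3$ in a 4-dimensional space the approximation is already close; for the large, sparse matrices of the master equation the saving comes from needing only $m$ matrix-vector products instead of the full exponential.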



Christoph Wasshuber