3.2 Molecular dynamics
Molecular dynamics (MD) is a computational method developed to simulate the motion of interacting particles over time within the framework of Newton's equations of motion. For certain tasks, such as modeling the dynamics of atoms over long time scales or for large system sizes, describing the interatomic forces with DFT can become impractically expensive, especially when hybrid functionals are employed. In particular, when quantum-mechanical effects are negligible, simpler approaches such as classical MD can be sufficient. Classical MD is, for example, commonly used to simulate biological processes or to calculate macroscopic thermodynamic properties of materials. The foundations of MD are described in the following.
3.2.1 Newton’s equation of motion
The dynamics of particles are governed by the fundamental Newtonian equations of motion,

$$ m_i \, \ddot{\mathbf{r}}_i(t) = \mathbf{F}_i(t) = -\nabla_{\mathbf{r}_i} V(\mathbf{r}_1, \dots, \mathbf{r}_N) \qquad (3.16) $$

where $m_i$ and $\mathbf{r}_i$ denote the mass and position of particle $i$, and the force $\mathbf{F}_i$ acting on it follows from the gradient of the interatomic potential $V$.
The dynamics of an atomic system during an MD simulation are obtained by solving Eq. (3.16) numerically. A commonly used method for this is the velocity Verlet algorithm [190, 191]. In this approach, the positions and velocities are propagated over a discrete time step $\Delta t$ according to

$$ \mathbf{r}_i(t + \Delta t) = \mathbf{r}_i(t) + \mathbf{v}_i(t)\,\Delta t + \tfrac{1}{2}\,\mathbf{a}_i(t)\,\Delta t^2 \qquad (3.17) $$
$$ \mathbf{v}_i(t + \Delta t) = \mathbf{v}_i(t) + \tfrac{1}{2}\left[\mathbf{a}_i(t) + \mathbf{a}_i(t + \Delta t)\right]\Delta t \qquad (3.18) $$

with the acceleration $\mathbf{a}_i(t) = \mathbf{F}_i(t)/m_i$ following from Eq. (3.16).

Within an iterative treatment, first, the initial velocities and positions of the particles in the system at time $t$ are used to compute the new positions at $t + \Delta t$. The forces, and thus the accelerations, are then evaluated for the updated positions, from which the new velocities follow. Repeating these steps propagates the system through time.
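To make the update rule concrete, the following is a minimal NumPy sketch of a single velocity Verlet step; the `forces` callable, the array shapes, and the unit conventions are illustrative assumptions rather than part of any specific MD code.

```python
import numpy as np

def velocity_verlet_step(r, v, m, forces, dt):
    """One velocity Verlet step following Eqs. (3.17)-(3.18).

    r, v   : (N, 3) arrays of positions and velocities
    m      : (N,) array of particle masses
    forces : callable returning the (N, 3) force array for given positions
    dt     : time step
    """
    a = forces(r) / m[:, None]              # accelerations at time t
    r_new = r + v * dt + 0.5 * a * dt**2    # position update, Eq. (3.17)
    a_new = forces(r_new) / m[:, None]      # accelerations at t + dt
    v_new = v + 0.5 * (a + a_new) * dt      # velocity update, Eq. (3.18)
    return r_new, v_new
```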
3.2.2 Controlling the temperature
During the MD simulations, various thermodynamic properties of the atomic system can be kept constant to simulate realistic experimental conditions. The respective constrained properties define which statistical ensemble is sampled during the MD run [193]. For instance, in the canonical ensemble (NVT), the number of particles $N$, the volume $V$, and the temperature $T$ are held constant, with the temperature being controlled by a so-called thermostat.
In this thesis, MD calculations are used to generate amorphous structures by simulating a melt-and-quench procedure [156, 53, 54, 55, 194]. During this process, the system is first heated above its melting point, where it is equilibrated until the positions of the atoms are fully randomized in the liquid phase.
Subsequently, the melt is slowly cooled down to room temperature, where it transitions into an amorphous solid. The temperature of the system during these simulations is controlled by a thermostat algorithm.
Several thermostat algorithms have been proposed. The simplest method rescales the velocities of the particles to match the desired kinetic energy. The Berendsen thermostat couples the system to a thermal bath, which allows for temperature fluctuations [195]. The Bussi-Donadio-Parrinello thermostat stochastically rescales the velocities by enforcing a kinetic energy that follows a Maxwell-Boltzmann distribution at canonical equilibrium [193]. The thermostat used for the melt-and-quench MD simulations in this thesis is the Langevin thermostat, which is based on the fundamental Langevin equation of motion. This equation was originally formulated in 1908 by Paul Langevin to model the random motion of particles in a fluid or a gas [196], but can also be used to model the coupling of an atomic system to a heat bath. For this purpose, Eq. (3.16) is extended by a friction term and a stochastic force,

$$ m_i \, \ddot{\mathbf{r}}_i(t) = \mathbf{F}_i(t) - m_i \gamma \, \dot{\mathbf{r}}_i(t) + \boldsymbol{\eta}_i(t) \qquad (3.19) $$

where $\gamma$ is a friction coefficient that determines the strength of the coupling to the heat bath and $\boldsymbol{\eta}_i(t)$ is a random force with zero mean, whose variance is linked to the target temperature through the fluctuation-dissipation theorem.
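As an illustration of how Eq. (3.19) can be propagated in practice, the sketch below implements a simple Euler-type discretization of the Langevin equation; this scheme, the unit conventions, and the `forces` callable are assumptions made for demonstration and do not correspond to the integrator used in this work.

```python
import numpy as np

def langevin_step(r, v, m, forces, dt, gamma, T, kB):
    """One simple Euler-type integration step of the Langevin equation (3.19).

    gamma : friction coefficient coupling the system to the heat bath
    T     : target temperature
    kB    : Boltzmann constant in units consistent with m, v and T
    """
    rng = np.random.default_rng()
    # random force with zero mean; its variance follows from the
    # fluctuation-dissipation theorem for a discrete time step dt
    sigma = np.sqrt(2.0 * m[:, None] * gamma * kB * T / dt)
    eta = rng.normal(0.0, sigma, size=v.shape)
    a = (forces(r) - m[:, None] * gamma * v + eta) / m[:, None]
    v_new = v + a * dt
    r_new = r + v_new * dt
    return r_new, v_new
```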
During the quenching phase of the melt-and-quench simulations, the desired temperature at each time step is linearly interpolated between the initial and final temperature. Although additionally applying a barostat would improve the description of realistic volume-temperature trends in the liquid phase, it would also significantly increase the computational cost [198]. Instead, to obtain realistic amorphous structures, the initial mass densities before the simulations were adapted to agree with experimental densities of amorphous thin films, and the remaining stress of the quenched structures was released with DFT cell optimizations after the MD simulations. This procedure was used in this thesis to create amorphous silicon nitride (Si$_3$N$_4$) structures.
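A minimal sketch of such a linear quench schedule is given below; the temperatures, step count, and function name are purely illustrative and are not the parameters used in this work.

```python
def quench_schedule(T_melt, T_final, n_steps):
    """Target temperature for each MD step, linearly interpolated
    between the melt temperature and the final temperature."""
    return [T_melt + (T_final - T_melt) * step / (n_steps - 1)
            for step in range(n_steps)]

# illustrative example: quench from 3000 K down to 300 K over 100000 steps
targets = quench_schedule(3000.0, 300.0, 100000)
```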
3.2.3 Interatomic potentials
The remaining term in Eq. (3.16) and Eq. (3.19), the force $\mathbf{F}_i$, is obtained from the interatomic potential (IP), which describes how the atoms in the system interact with each other. The choice of the IP determines both the accuracy and the computational cost of an MD simulation. The two types of IPs used in this thesis are described in the following.
Classical interatomic potential
Classical IPs use non-quantum-mechanical descriptions to model the interactions between atoms. These potentials are often referred to as empirical potentials because they do not capture quantum-mechanical interactions explicitly, but are instead parametrized to describe and predict chemical reactions or physical properties based on experiments and higher-level calculations. Classical IPs often include various terms to model different physical phenomena. Their simplified description of atomic interactions makes them computationally efficient and suitable for the simulation of large system sizes.
The classical IP that was used for several MD calculations in this thesis is ReaxFF [199]. This IP divides the system’s energy into various two-body and many-body contributions, including terms related to the bond order, over- and undercoordination, valence angle, torsion angle, conjugation effects, nonbonded van der Waals interactions and Coulomb interactions.
Since the first ReaxFF IP was developed, multiple ReaxFF force fields have been published for different atomic compositions. This is necessary because the parametrization of the respective terms has to be done individually for each atomic system being investigated. A major strength of ReaxFF is its ability to model both bond formation and breaking during the MD run, which is crucial for generating realistic amorphous structures with MD using the melt-and-quench technique [15, 4].
Machine-learning interatomic potential (MLIP)
An alternative approach for defining an IP, which aims to combine the efficiency of classical MD with the accuracy of DFT, is the MLIP. In this method, an ML algorithm is used to train an IP on accurately calculated properties of various atomic configurations. These properties include energies, forces and stress tensors, which are typically obtained at the DFT level. Several approaches have been proposed for the underlying ML model. In this thesis, the Gaussian approximation potential (GAP) [200] method is used to generate the MLIP. Alternative options to train MLIPs include artificial neural networks [201], message-passing networks [202] and moment tensor potentials [203].
The GAP method learns the PES (see Section 2.1.2) of a system as a function of the atomic positions. To this end, different mathematical representations of atomic environments, known as descriptors, are used to link the training dataset, composed of various atomic structures, with the ML model. Descriptors relate the positions of atoms to energies on the PES and must be invariant under rotation and translation of the entire atomic configuration and under permutations of atoms of the same type. Local descriptors depend on the neighboring atomic environment within a cutoff radius. GAP employs a set of local descriptors for the training of the PES, neglecting long-range interactions beyond the cutoff. In addition to simple geometrical 2-body and 3-body descriptors, such as bond lengths or angles, GAP also uses the more sophisticated smooth overlap of atomic positions (SOAP) descriptor [204]. SOAP captures the local environment around a single atom by considering the positions of neighboring atoms and approximates the atomic neighbor density with Gaussian functions. It is designed to provide a smooth and differentiable description of the PES.
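To illustrate the idea of an invariant local descriptor (not the actual SOAP construction), the sketch below builds a Gaussian-smeared histogram of neighbor distances around one atom; the function name and all parameter values are chosen for demonstration only.

```python
import numpy as np

def two_body_descriptor(positions, center, r_cut=5.0, n_bins=50, sigma=0.3):
    """Gaussian-smeared histogram of neighbor distances around one atom.

    The result depends only on interatomic distances within r_cut, so it is
    invariant under rotation and translation of the structure and under
    permutation of identical neighbors.
    """
    grid = np.linspace(0.0, r_cut, n_bins)
    d = np.linalg.norm(positions - center, axis=1)
    d = d[(d > 1e-8) & (d < r_cut)]          # neighbors inside the cutoff
    # sum of Gaussians centered at each neighbor distance
    return np.exp(-((grid[None, :] - d[:, None]) ** 2) / (2 * sigma**2)).sum(axis=0)
```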
The total energy of a structure is expressed as a sum of local atomic energies, each written as a linear combination of kernel functions,

$$ E_{\mathrm{tot}} = \sum_i \varepsilon(\mathbf{q}_i) = \sum_i \sum_s \alpha_s \, K(\mathbf{q}_i, \mathbf{q}_s) \qquad (3.22) $$

Here, $\mathbf{q}_i$ denotes the descriptor of the local environment of atom $i$, $\mathbf{q}_s$ are the descriptors of a set of representative environments taken from the training dataset, $\alpha_s$ are the fitting coefficients determined during training, and $K$ is the kernel function.
The kernel compares the input structure with the structures in the training dataset: the more similar the local environment of an atom is to a representative environment from the training data, the larger the kernel value and thus its contribution to the predicted local energy.
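The following sketch evaluates Eq. (3.22) for a hypothetical set of sparse environments and fitting coefficients; a squared-exponential kernel is used purely as an illustrative choice and is not the kernel employed by GAP for SOAP descriptors.

```python
import numpy as np

def gap_local_energy(q_atom, q_sparse, alpha, theta=0.5):
    """Local energy of one atom according to Eq. (3.22).

    q_atom   : descriptor vector of the atom's local environment
    q_sparse : (M, D) descriptors of the representative (sparse) environments
    alpha    : (M,) fitting coefficients obtained from training
    theta    : kernel length scale (illustrative squared-exponential kernel)
    """
    k = np.exp(-np.sum((q_sparse - q_atom) ** 2, axis=1) / (2 * theta**2))
    return np.dot(alpha, k)

def gap_total_energy(descriptors, q_sparse, alpha):
    """Total energy as the sum of the local atomic energies."""
    return sum(gap_local_energy(q, q_sparse, alpha) for q in descriptors)
```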
Training of an MLIP
In addition to the total energies and atomic positions of the structures, the atomic forces and stress tensors, which are the derivatives of the total energy with respect to the atomic positions and the lattice deformation [206], can also be incorporated into the GAP training process. Including the stress tensor in the training set is important to make the MLIP applicable to structures with different mass densities. The ability of the MLIP to reproduce DFT properties improves systematically with the size of the training dataset.
The training process of the MLIP is carried out in an iterative manner, as schematically shown for the creation of amorphous structures in Fig. 3.2 [194].
Initial amorphous structures are generated by simulating a melt-and-quench procedure with a classical force field, and their forces, energies and stress tensors are subsequently recalculated with DFT. The resulting properties, along with the corresponding atomic positions of the respective structures, serve as the training dataset for the first generation of the GAP. During the training process, the fitting coefficients of Eq. (3.22) are optimized to minimize the deviation of the GAP predictions from the input data. Subsequently, MD simulations are performed with this GAP to create new amorphous structures. The forces, energies and stress tensors of these structures are again recalculated with DFT and added to the training dataset. These steps are repeated iteratively until sufficient agreement with DFT results and experimental data is achieved. MLIPs can also be trained for dynamic processes such as the thermal oxidation of Si [207].
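The iterative workflow can be summarized by the schematic loop below; every function in it (`melt_and_quench`, `dft_singlepoint`, `fit_gap`, `converged`) is a placeholder standing in for the actual MD, DFT, and fitting codes.

```python
# Schematic of the iterative GAP training loop (cf. Fig. 3.2); all helper
# functions are placeholders, not real APIs.
training_set = []

# generation 0: amorphous structures from a classical force field
structures = melt_and_quench(potential="classical")

while True:
    # recalculate energies, forces and stress tensors with DFT
    training_set += [dft_singlepoint(s) for s in structures]

    # fit the GAP coefficients of Eq. (3.22) to the enlarged training set
    gap = fit_gap(training_set)

    if converged(gap):   # e.g. errors vs. DFT and experiment small enough
        break

    # next generation: new amorphous structures created with the current GAP
    structures = melt_and_quench(potential=gap)
```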
When early generations of the GAP are employed for MD melt-and-quench simulations, the generated structures may exhibit unphysical features such as clusters and chains of atoms or structural inhomogeneities. After each training iteration, the GAP learns more about the atomic interactions from the enlarged training dataset, improving its physical description towards that of DFT. The similarity between the final GAP and DFT is shown in a correlation plot in Fig. 3.3, where the energies (a) and forces (b) of 1000 different liquid and amorphous Si$_3$N$_4$ structures predicted by the GAP are compared with DFT reference calculations.
The sample structures in this analysis were not part of the training data of the GAP and were randomly selected from an MD run. The mean absolute errors (MAE) are 8 meV/atom for the total energies and 0.26 eV/Å for the forces, which is in the range of values reported for other GAP MLIPs in the literature. While these errors might seem significant, the goal of the GAP is not to reproduce the absolute values of DFT total energies, since these are arbitrary. More importantly, because the GAP description of forces and energies correlates linearly with DFT and shows no systematic error, it provides a qualitatively accurate representation of full quantum-mechanical calculations of structural properties.
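For completeness, a minimal sketch of how such MAE values can be computed from test-set predictions is given below; the arrays `gap_energies`, `dft_energies`, `gap_forces`, and `dft_forces` are assumed placeholders for the per-atom energies and force components of the test structures.

```python
import numpy as np

def mean_absolute_error(predicted, reference):
    """MAE between GAP predictions and DFT reference values."""
    return np.mean(np.abs(np.asarray(predicted) - np.asarray(reference)))

# placeholder arrays: energies per atom (eV/atom) and force components (eV/Angstrom)
# of test structures that were not part of the training data
mae_energy = mean_absolute_error(gap_energies, dft_energies)
mae_forces = mean_absolute_error(gap_forces, dft_forces)
```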