1. Introduction

In the early 1970s, at the beginning of the age of microelectronics, the integration density was approximately two transistors per square millimeter [2]. This density is very low compared to the integration densities of today's microelectronic devices [3,2]. With the continuous miniaturization of semiconductor devices and a nearly constant power dissipation per transistor, the power densities on microelectronic chips have increased in much the same way as the devices have shrunk. Thermal issues have therefore become increasingly important for the design of state-of-the-art devices, which is determined by the maximum thermal budget for both the device fabrication process and the device operation.
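The scaling argument above can be illustrated with a back-of-envelope calculation: if the power per transistor stays nearly constant, the areal power density grows in proportion to the integration density. In the sketch below, only the 1970s density of about two transistors per square millimeter comes from the text; the modern density and the per-transistor power are illustrative assumptions.

```python
# Back-of-envelope illustration of power-density scaling.
# Assumption: power per transistor is roughly constant across generations,
# so areal power density scales with integration density.

def power_density(transistors_per_mm2, watts_per_transistor):
    """Areal power density in W/mm^2."""
    return transistors_per_mm2 * watts_per_transistor

P_TRANSISTOR = 1e-6  # W per transistor (illustrative assumption)

early = power_density(2.0, P_TRANSISTOR)    # ~2 transistors/mm^2 (from the text)
modern = power_density(1e5, P_TRANSISTOR)   # hypothetical modern density

print(f"early:  {early:.3g} W/mm^2")
print(f"modern: {modern:.3g} W/mm^2")
print(f"ratio:  {modern / early:.3g}")      # equals the density ratio
```

With constant per-transistor power the ratio of the power densities equals the ratio of the integration densities, which is exactly the statement in the text.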

The thermal budget can be influenced by the choice of the materials and chemical reactions that are used to form the microelectronic device structure. An increase in temperature changes chemical reactions and growth rates, increases the electrical resistance, and broadens doping profiles. Finding the appropriate temperature is therefore crucial for every step of device fabrication and device operation. Hence, it is important to investigate and predict the influence of temperature in order to determine the process window for fabrication and the operation window for running the device.

The main goal of today's and future microelectronic designs is to determine the generated heat and the heat flow in the device, as well as to estimate the temperature, its impact on the device characteristics, and its consequences for the surrounding materials. In addition to the thermal requirements for fabrication and operation, self-heating can become critical if the heat flow from the heat source through the device to the heat sink cannot be controlled appropriately. This phenomenon is often observed in devices with very high power densities.

Hence, thermal effects are becoming the dominant factor that determines the maximum performance of integrated circuits, due to the limited heat transport to the heat sink. The temperature dependence was neglected for a long time, but today the simplified models that formerly neglected self-heating effects have to be adapted for certain characteristic parameters of microelectronic devices and technologies. These parameters thus become functions of temperature and of the maximum heat transport capabilities. For example, the maximum clock frequency is mostly determined by the temperature increase induced by the increased switching currents and the enhanced integration density, both of which produce a higher power dissipation. Hence, the surrounding devices are also heated by this additional power density. Due to the temperature increase, the thermal expansion of the materials also has to be considered, which results in changes of the crystal structure and therefore of the band structure of the semiconductor material. This effect can increase or decrease the carrier mobility, depending on the direction of the mechanical stress. Thus, the complexity and the maximum integration density of integrated circuits are limited by electrical, thermal, and mechanical constraints of the material properties.
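One common way such a temperature dependence enters device models is a power-law fit for the lattice-limited carrier mobility, mu(T) = mu_300 * (T/300 K)^(-alpha), which is widely used in device simulation. The sketch below is illustrative only: the values of mu_300 and alpha are assumptions, not calibrated data from the text.

```python
# Sketch of a power-law model for lattice-limited carrier mobility:
#   mu(T) = mu_300 * (T / 300 K)**(-alpha)
# mu_300 and alpha are illustrative assumptions, not calibrated values.

def mobility(T, mu_300=1400.0, alpha=2.2):
    """Carrier mobility in cm^2/(V s) at lattice temperature T (in K)."""
    return mu_300 * (T / 300.0) ** (-alpha)

print(f"mu(300 K) = {mobility(300.0):7.1f} cm^2/Vs")
print(f"mu(400 K) = {mobility(400.0):7.1f} cm^2/Vs")  # mobility drops as T rises
```

The monotonic decrease of the mobility with temperature is one concrete example of a device parameter that, as stated above, becomes a function of temperature once self-heating is taken into account.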

To achieve the present and future goals proposed by the International Technology Roadmap for Semiconductors (ITRS) [3], faster devices have to be created that need less chip area and less power to operate. Therefore, imminent thermal effects [4,5] have to be considered carefully in order to stay on track. Since the physics behind these effects is very complex and so far only rudimentarily implemented, more rigorous physics-based models have to be applied, which increases the computational effort tremendously. Rigorous investigations of problems of industrial interest are therefore limited by the finite resources in both time and CPU power.

However, despite these limitations, the time-to-market has to be reduced in order to position new microelectronic products early enough for an advantageous market share. Therefore, experiments have to be carried out, either as measurements or as simulations, to achieve the required quality criteria. The faster the developments can be achieved and verified, the earlier the market entry and, therefore, the higher the earnings in the first development stage. Hence, rapid results can be obtained with a combination of simulations and experiments, where the simulations give an overview of the intrinsic material parameters and quantity distributions, while the measurements ensure that the overall functionality remains within the specifications.

Measurements of material parameters on real devices cost a great deal of money and time, because a complete fabrication process has to be finished before the measurements can be performed. With simulation tools, the same experiments can be performed within a very small fraction of the time a new device fabrication process would take. Moreover, the simulation results can be calibrated in order to improve the predictions obtained from the simulation. These new and better results can in turn be used to develop better simulation models, which have to be calibrated as well for each new technology. The main benefit of this procedure is that, once the simulation has been calibrated to the technology, the predictions gained from the simulation results support the engineers during the design and the fabrication process. Therefore, physics-based models have to be developed that allow the calculation of sufficiently accurate simulation results within a reasonable time for the evaluation.

For such purposes, and especially for thermal problems, new approaches or enhancements of existing models are needed in order to detect hot spots due to heat accumulation, heat conduction paths, and heat fluxes, as well as other phenomena that coincide with an increase of the temperature. In order to obtain accurate simulation results within reasonable time, improved numerical methods are required as well.
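As a minimal sketch of the kind of numerical method involved in locating hot spots, the following pure-Python example solves the steady-state one-dimensional heat conduction equation with a uniform heat source by finite differences, using the Thomas algorithm for the resulting tridiagonal system. The geometry and material data (a silicon-like thermal conductivity, a 1 um layer, and the heating power) are illustrative assumptions, not values from the text.

```python
# Steady-state 1-D heat conduction with a uniform volumetric heat source:
#   -k * T''(x) = q  on (0, L),  T(0) = T_left,  T(L) = T_right.
# Discretized by central finite differences; the tridiagonal system is
# solved with the Thomas algorithm. All parameter values are illustrative.

def solve_heat_1d(n, L, k, q, T_left, T_right):
    """Return temperatures at the n interior grid points."""
    h = L / (n + 1)
    # Each row: -T[i-1] + 2*T[i] - T[i+1] = h^2 * q / k
    a = [-1.0] * n           # sub-diagonal
    b = [2.0] * n            # main diagonal
    c = [-1.0] * n           # super-diagonal
    d = [h * h * q / k] * n  # right-hand side
    d[0] += T_left           # fold boundary values into the RHS
    d[-1] += T_right
    # Forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    T = [0.0] * n
    T[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        T[i] = (d[i] - c[i] * T[i + 1]) / b[i]
    return T

# Silicon-like example: k ~ 150 W/(m K), 1 um layer, strong uniform heating.
T = solve_heat_1d(n=99, L=1e-6, k=150.0, q=1e15, T_left=300.0, T_right=300.0)
print(f"peak temperature rise: {max(T) - 300.0:.4f} K")
```

For this symmetric case the analytic peak temperature rise is q*L^2/(8*k), so the hot spot sits at the center of the layer; the numerical solution reproduces this, since central differences are exact for the quadratic temperature profile.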

Stefan Holzer 2007-11-19