
3.2.3 Optimization Performance

Since many simulation jobs have to be performed during an optimization run, it is essential to exploit all available strategies for speeding up the procedure. These include the following:

The job farming method implemented within SIESTA uses a load balancing strategy which estimates the current load of the machines in a pool and selects the one that is likely to need the least computation time for a given job. In this way, the total CPU power of the pool is used most efficiently.
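As an illustration, a least-expected-runtime heuristic of this kind could look like the following Python sketch. This is not SIESTA's actual implementation; the Host class, the speed rating, and the load metric are all hypothetical.

    # Minimal sketch of a least-expected-runtime host selection heuristic.
    # All names (Host, speed_rating, current_load) are illustrative, not SIESTA's API.
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        speed_rating: float   # relative CPU speed, higher is faster
        current_load: float   # number of jobs currently running

    def expected_runtime(host, job_cost):
        # A loaded machine shares its CPU among (current_load + 1) jobs.
        return job_cost * (host.current_load + 1.0) / host.speed_rating

    def pick_host(hosts, job_cost):
        # Choose the machine that is likely to finish the job soonest.
        return min(hosts, key=lambda h: expected_runtime(h, job_cost))

    hosts = [Host("ws1", 1.0, 2.0), Host("ws2", 1.5, 3.0), Host("ws3", 0.8, 0.0)]
    print(pick_host(hosts, job_cost=100.0).name)   # -> ws3, the idle machine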

The gradient steps of the optimization sequence can be run in parallel because the parameter sets for all of these steps are already known at the beginning of the gradient calculation. The time needed for the optimization can therefore be reduced drastically if many computers are available in the host pool.
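Assuming a finite-difference gradient, the following sketch shows why these steps parallelize so well: all perturbed parameter sets can be generated up front and submitted to the pool at once. The simulate function is a placeholder for a complete simulation job, not part of any real interface.

    # Sketch: parallel evaluation of a finite-difference gradient.
    # 'simulate' is a stand-in for an expensive device simulation.
    from concurrent.futures import ProcessPoolExecutor

    def simulate(params):
        # Placeholder target function; a real job would run a full simulation.
        return sum(p * p for p in params)

    def gradient(params, h=1e-4):
        # All perturbed parameter sets are known before any job starts,
        # so they can be farmed out to the host pool simultaneously.
        perturbed = []
        for i in range(len(params)):
            p = list(params)
            p[i] += h
            perturbed.append(p)
        f0 = simulate(params)
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(simulate, perturbed))
        return [(fi - f0) / h for fi in results]

    if __name__ == "__main__":
        print(gradient([1.0, 2.0, 3.0]))   # approximately [2, 4, 6]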

To find the fastest simulation mode, the physical effects that might influence the optimization result have to be examined carefully. The device simulation speed, for example, can be increased for a MOS transistor if the current density of the bulk majority carriers (holes in the case of an NMOS) is neglected. Of course, this would lead to wrong results if the device were simulated in the accumulation region or if a transient simulation were performed: in these cases a majority carrier current is necessary to charge or discharge the depletion regions which change during a transient event.
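Schematically, such a mode decision could be encoded as follows. The mode names and the settings dictionary are invented for this sketch and do not correspond to an actual simulator interface.

    # Sketch: selecting the carrier transport equations for an NMOS simulation.
    # Mode names and settings are illustrative only.
    def carrier_settings(analysis):
        if analysis in ("transient", "accumulation"):
            # Majority carriers (holes in an NMOS) must be solved for, e.g.
            # to charge and discharge the depletion regions during a transient.
            return {"solve_electrons": True, "solve_holes": True}
        # In a quasi-static on-state analysis the hole current density of an
        # NMOS can be neglected, which speeds up the simulation considerably.
        return {"solve_electrons": True, "solve_holes": False}

    print(carrier_settings("dc_on_state"))
    # -> {'solve_electrons': True, 'solve_holes': False}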

It is well known that a proper initialization of the simulator can speed up the simulations. The results from a previous optimization step are usually a very good initialization for the current step, especially if the changes in the optimization parameters are small. This is true for the gradient calculations, where only one parameter is changed at a time and the change is very small; with such an initialization the gradient calculations can be performed very quickly. The method also speeds up the main optimization steps, although the parameter changes there are much larger and the initial solution is therefore not as close to the new one as in the gradient calculations.
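The effect of such a warm start can be pictured with a toy Newton iteration, which stands in here for the simulator's nonlinear solver; all names are invented.

    # Sketch: a good initial guess reduces the number of Newton iterations.
    def newton_sqrt(a, x0=1.0, tol=1e-12):
        # Newton iteration for sqrt(a); returns the result and the iteration count.
        x, n = x0, 0
        while abs(x * x - a) > tol:
            x = 0.5 * (x + a / x)
            n += 1
        return x, n

    cold, n_cold = newton_sqrt(2.0001)           # default initialization
    prev, _ = newton_sqrt(2.0)                   # solution of the previous step
    warm, n_warm = newton_sqrt(2.0001, x0=prev)  # warm start from previous result
    print(n_cold, n_warm)                        # the warm start needs fewer iterations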

Another important issue besides speed is the stability of the optimization procedure. Considering that the failure of a single simulation task can abort the whole optimization, the framework has to be very robust. Typical failures are, for example, network or operating system outages. Parallel simulation within a pool of different workstations aggravates the situation since the probability of a failure increases with the number of machines involved.

To overcome these problems SIESTA provides task repetition for failed simulation jobs. Whenever a job fails, it is re-queued for execution, and an auxiliary parameter repeatlevel is incremented each time the job is restarted. This parameter can be accessed via the command line or the input deck of the simulator. If a simulation suffers from convergence problems, for example, the repeat level can be used to switch the simulation mode and thus make the restarted job succeed.
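The mechanism can be pictured as the following retry loop. This is a generic sketch, not SIESTA's implementation; run_job and its repeat_level argument merely mimic the role of the repeatlevel parameter described above.

    # Sketch of job repetition with an incrementing repeat level.
    # 'run_job' and 'JobFailed' are illustrative stand-ins, not SIESTA's API.
    class JobFailed(Exception):
        pass

    def run_job(params, repeat_level):
        # The simulator can inspect repeat_level (e.g. via its command line
        # or input deck) and switch to a more conservative mode on retries.
        if repeat_level == 0 and params.get("hard_to_converge"):
            raise JobFailed("Newton iteration did not converge")
        return "converged (repeat_level=%d)" % repeat_level

    def execute_with_repetition(params, max_repeats=3):
        repeat_level = 0
        while True:
            try:
                return run_job(params, repeat_level)
            except JobFailed:
                repeat_level += 1          # incremented on every restart
                if repeat_level > max_repeats:
                    raise                  # give up eventually

    print(execute_with_repetition({"hard_to_converge": True}))
    # -> converged (repeat_level=1)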

