5.3 Performance Considerations

Benchmarking a number of simulators to obtain reliable performance figures
is a difficult task, because the simulators use different implementations
of gridding and discretization schemes. In most situations it is impossible
to force two different simulators to compute their results with exactly the
same models on exactly the same simulation grids. The most interesting
performance figures for the *Algorithm Library*, however, can be obtained
somewhat more easily.

Since the *Algorithm Library* was put on the *PROMIS-NT* simulator kernel after the initial
release of the first version of the *PROMIS-NT* simulator, benchmarks could
easily be carried out with diffusion models which were

- implemented by ``hard coded'' *C++* functions in the original *PROMIS-NT* application,
- implemented by using the *Algorithm Library* interfaces and manually written *C++* classes,
- implemented by using *MDL* definitions in the input deck of the simulator,
- and implemented by utilizing the *MDL* just in time compiler mechanism (Section C).

All four variants delivered comparable results with the following characteristics:

No measurable differences were found between model implementations with
``hard coded'' *C++* functions and ``manually written'' *C++* classes
managed by the *Algorithm Library*. The additional indirection required for
the evaluation of *Algorithm Library* based models and the initialization
of the *Algorithm Library* structures caused only minimal overhead, which
was covered by the inaccuracy of the system timers.

The usage of *MDL* `Model` definitions caused runtime overheads in the
range of … to … percent. Using just in time compiled …

Similar results were obtained by benchmarking the *MINIMOS-NT* device simulator
(Section 6.1). Likewise, no significant differences between
different *CPU*s and compilers could be measured.

1999-11-14