

5.3 Performance Considerations

Benchmarking a number of simulators to obtain reliable performance numbers is a difficult task because of their different implementations of gridding and discretization schemes. In most situations it is impossible to force two different simulators to compute their results with exactly the same models on exactly the same simulation grids. The performance numbers of most interest for the Algorithm Library can be obtained somewhat more easily.

Since the Algorithm Library was added to the PROMIS-NT simulator kernel after the initial release of the first version of the PROMIS-NT simulator, benchmarks with diffusion models implemented as hard-coded C++ functions, as manually written C++ classes managed by the Algorithm Library, and as MDL Model definitions could easily be set up. They delivered comparable results with the following characteristics:

No measurable differences were found between model implementations with ``hard-coded'' C++ functions and ``manually written'' C++ classes managed by the Algorithm Library. The additional level of indirection required for the evaluation of Algorithm Library based models and the initialization of the Algorithm Library structures caused only minimal overhead, which was within the measurement uncertainty of the system timers.
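To illustrate how small this kind of indirection overhead typically is, the following minimal micro-benchmark sketch compares a hard-coded model function with the same evaluation routed through an abstract base class pointer. All class and function names are hypothetical stand-ins and do not reflect the actual Algorithm Library interface.

// Minimal sketch (hypothetical names, not the Algorithm Library API):
// compares a hard-coded model function with the same evaluation routed
// through an abstract base class, i.e. one extra level of indirection.
#include <chrono>
#include <cmath>
#include <cstdio>
#include <memory>

// Hard-coded model: a plain C++ function (Arrhenius-type diffusivity).
static double diffusivity_direct(double temperature)
{
    return 1.0e-3 * std::exp(-3.5 / (8.617e-5 * temperature));
}

// The same model behind an abstract interface, as a library-managed
// model evaluation would require.
struct ModelBase {
    virtual ~ModelBase() = default;
    virtual double evaluate(double temperature) const = 0;
};

struct DiffusivityModel : ModelBase {
    double evaluate(double temperature) const override
    {
        return 1.0e-3 * std::exp(-3.5 / (8.617e-5 * temperature));
    }
};

// Times ten million evaluations of the given callable.
template <class F>
static double time_loop(F&& f)
{
    using clock = std::chrono::steady_clock;
    volatile double sink = 0.0;          // keeps the loop from being optimized away
    const auto start = clock::now();
    for (int i = 0; i < 10000000; ++i)
        sink = sink + f(1000.0 + (i % 100));
    const std::chrono::duration<double> elapsed = clock::now() - start;
    return elapsed.count();
}

int main()
{
    std::unique_ptr<ModelBase> model = std::make_unique<DiffusivityModel>();

    const double t_direct   = time_loop([](double T) { return diffusivity_direct(T); });
    const double t_indirect = time_loop([&](double T) { return model->evaluate(T); });

    std::printf("direct:   %.3f s\nindirect: %.3f s\noverhead: %.1f %%\n",
                t_direct, t_indirect, 100.0 * (t_indirect / t_direct - 1.0));
    return 0;
}

On typical hardware the cost of the model evaluation itself (the exponential) dominates the virtual call, which is consistent with the overhead disappearing in the timer noise.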

The usage of MDL Model definitions caused runtime overheads in the range of $50$ to $200$ percent. Using just-in-time compiled MDL Models normally decreased this overhead to $5$ to $10$ percent, depending on the amount of optimization effort spent on the hand-written Models.
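The source of this overhead can be made plausible with a toy sketch that contrasts a tree-walking evaluation of a simple Arrhenius-type expression with the same expression as straight-line C++ code. The expression tree below is purely illustrative and has nothing to do with the actual MDL implementation; it only mimics the per-evaluation work an interpreted Model definition has to perform, which just-in-time compilation largely removes.

// Toy comparison (illustrative only, not the MDL implementation):
// an expression tree evaluated by tree walking versus the same
// expression compiled to plain C++ code.
#include <chrono>
#include <cmath>
#include <cstdio>
#include <memory>

struct Node {                                   // one node of the expression tree
    virtual ~Node() = default;
    virtual double eval(double T) const = 0;
};
struct Const : Node {
    double v;
    explicit Const(double v) : v(v) {}
    double eval(double) const override { return v; }
};
struct Temp : Node {                            // the free variable T
    double eval(double T) const override { return T; }
};
struct Mul : Node {
    std::unique_ptr<Node> a, b;
    Mul(std::unique_ptr<Node> a, std::unique_ptr<Node> b) : a(std::move(a)), b(std::move(b)) {}
    double eval(double T) const override { return a->eval(T) * b->eval(T); }
};
struct Div : Node {
    std::unique_ptr<Node> a, b;
    Div(std::unique_ptr<Node> a, std::unique_ptr<Node> b) : a(std::move(a)), b(std::move(b)) {}
    double eval(double T) const override { return a->eval(T) / b->eval(T); }
};
struct Exp : Node {
    std::unique_ptr<Node> a;
    explicit Exp(std::unique_ptr<Node> a) : a(std::move(a)) {}
    double eval(double T) const override { return std::exp(a->eval(T)); }
};

int main()
{
    using std::make_unique;
    // D(T) = D0 * exp(-Ea / (k * T)) as an expression tree ("interpreted").
    auto tree = make_unique<Mul>(
        make_unique<Const>(1.0e-3),
        make_unique<Exp>(make_unique<Div>(
            make_unique<Const>(-3.5),
            make_unique<Mul>(make_unique<Const>(8.617e-5), make_unique<Temp>()))));

    // The same model as straight-line ("compiled") code.
    auto compiled = [](double T) { return 1.0e-3 * std::exp(-3.5 / (8.617e-5 * T)); };

    using clock = std::chrono::steady_clock;
    volatile double sink = 0.0;                 // keeps the loops from being optimized away
    auto run = [&](auto&& f) {
        const auto start = clock::now();
        for (int i = 0; i < 5000000; ++i)
            sink = sink + f(1000.0 + (i % 100));
        return std::chrono::duration<double>(clock::now() - start).count();
    };

    const double t_tree     = run([&](double T) { return tree->eval(T); });
    const double t_compiled = run(compiled);
    std::printf("tree-walking: %.3f s, compiled: %.3f s, overhead: %.0f %%\n",
                t_tree, t_compiled, 100.0 * (t_tree / t_compiled - 1.0));
    return 0;
}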

Similar results were obtained by benchmarking the MINIMOS-NT device simulator (Section 6.1). Likewise, no significant differences between different CPUs and compilers were measured.

