3.1 Paradigms

Higher-level programming languages permit higher levels of abstraction, thus easing the expression of algorithms. The programming paradigms which have emerged over the years have shaped and influenced the development and evolution of programming languages to a great degree. The paradigms examined more closely here are the following.

3.1.1 Imperative Programming

Imperative programming may be viewed as the very bones on which all other abstractions depend. This paradigm realizes algorithms as a sequence of instructions which act on a program state; it thus always specifies in detail what to execute next and how. The modification of the program state, while convenient, is also an Achilles’ heel: as the size of a program grows, unintended modifications of the state become an increasing problem. To address this issue, the primitive imperative programming method has been refined into the procedural and structured programming paradigms, which attempt to provide more control over modifications of the program state.

Even in these refined forms the incurred overhead can be kept to a bare minimum, as the level of abstraction is relatively low. This was well suited to a situation of scarce computing resources and a lack of mature and powerful tools. Under these circumstances the overall performance, in terms of execution speed or memory consumption, depends solely on the skill and ingenuity of the programmer, which has resulted in the almost mythical “hand optimized” code.

However, to achieve the desired specifications in such a fashion, the clarity and readability, and thereby the maintainability, of the code were sacrificed. Furthermore, the low level of abstraction also hinders portability, as different architectures favour different assumptions to produce efficient execution. To compensate, implementations were duplicated in order to optimize for different architectures and platforms, which of course makes a mockery of goals such as code reduction or even reuse. The desire for code reuse led to a further evolution of the idea of structured programming.

3.1.2 Object-Oriented Programming

The object-oriented paradigm [14] may be viewed as an evolution from the structured imperative paradigm. On the one hand, it addresses the issue of unchecked modification of state by enforcing data encapsulation, thus forcing changes to pass through defined interfaces. On the other hand, it tries to address the issue of code reusability by providing the mechanism of inheritance.

Both of these notions are attached to an entity called an object. An object therefore serves as a self-contained unit which interacts with its environment via messages. In this fashion it decouples the internal implementation within the object from the interaction with the surrounding environment. By this construction object-oriented programming enforces interfaces, which is essential for modular programming. Algorithms are expressed much more in terms of what is to be done, as an interaction and modification of objects, while the details of how are to a great extent encapsulated within the objects themselves.

As already mentioned, the concept of inheritance is added to address issues of code reuse. Inheritance is deployed with the aim of reducing implementation effort by allowing the refinement of already existing objects. Inheritance and the connected subtyping also make polymorphic programming available at run time. While the concepts of object-orientation have proved invaluable to the development of modular software, their limits also became apparent, as the goal of general reusability suffers from the stringent limitations of the required subtyping [15]. Problems in modelling as a hierarchy of inheritance arise because objects are not necessarily fit to accommodate the required abstractions; algorithms, for instance, operate on the data structures represented by the objects but fail to be classified as objects themselves. Furthermore, the extension of existing code is often only possible by intrusive means, such as changing already existing implementations, and thus does not yield the high degree of effort reduction that was hoped for.

Compared to the run time environment or compiler required to realize the simple imperative programming paradigm, the object-oriented paradigm demands more sophistication, as the tools need to be able to handle run time dispatches using virtual functions, for instance. Additionally, seemingly simple statements may hide the true complexity encapsulated within the objects. Thus not only is the demand on the tools higher, but the programmer also needs to be aware of the implications of seemingly simple statements in order to achieve desirable levels of performance.

3.1.3 Functional Programming

In contrast to the imperative and object-oriented paradigms, which explicitly formulate algorithms and programs as a sequence of instructions acting on a program state, the functional paradigm [16] uses mathematical functions for this task and forgoes the use of a state altogether [17]. Therefore, there are no mutable variables and no side effects in purely functional programming. As such it is declarative in nature and relies on the language’s environment to produce an imperative representation which can be run on a physical machine. Among the greatest strengths of the functional paradigm is the availability of a strong theoretical framework of lambda calculus [18] for the different implementations.

Since a purely functional description is free of side effects, it is a favourable choice for parallelization, as the description does not contain a state, which would require synchronization. Data related dependence, however, must still be considered in order to ensure correct operation.

As the declarative style connected to the functional paradigm distances itself from the traditional imperative paradigm and its connection to state, input and output operations pose a hurdle, which is often addressed in a manner that is not purely functional. Functional interdependences may thus be specified trivially, while the details of how these are to be met remain opaque and are left as a choice to the specific implementation.

Polymorphism, which has to be specially provided in the imperative world, comes naturally to the functional paradigm, as no specific assumptions about data types are required; only conceptual requirements must be met.

3.1.4 Generic Programming

Generic programming [19, 20, 21, 22] may be viewed as having been developed to further the goals of code reduction and reusability. While these are among the goals which led to the development of object-oriented programming, generic programming may differ quite profoundly in the realization. A major distinction from object-oriented programming, which is focused on data structures, is that generic programming especially allows for a very abstract and orthogonal description of algorithms. It thus requires a separation of the data structures and the algorithms acting on them, mediated by concepts such as iterators.

Furthermore, algorithms should be specified for as broad a class of use cases as possible. To this end it is necessary to abstract the requirements of algorithms in order to isolate the minimum collection of conceptual requirements. Various specializations can then be provided for various use cases, and new use cases can be accommodated by adding further specializations. A mechanism to select a particular specialization from among the various possibilities in a non-invasive manner is also required.

This can be accomplished by employing the art of writing programs which generate or modify either themselves or other programs, known as metaprogramming. Tools belonging to this discipline are ubiquitously applied in programming in the guise of compilers, whose sole purpose is to transform a description in a programming language into an executable representation. While this is an essential task in itself, metaprogramming also applies to the discipline of generic programming, where it can supply the selection of a fitting specialization. Metaprogramming may be made available at run time, where it is also referred to as reflection [23], or at compile time, where it is fittingly called compile time programming.

When generic programming makes use of metaprogramming at compile time, through facilities such as static polymorphism, not only is the implementation effort reduced but the resulting run time performance is also optimized.


Figure 3.3: Level of abstraction of different paradigms.

When considering the level of abstraction of the programming paradigms, a view as in Figure 3.3 presents itself. The imperative paradigm offers, by the nature of the machines finally employed, the lowest level of abstraction. The functional as well as the object-oriented paradigms bring an increase in the level of abstraction. The generic approach is free to combine all of the other approaches in order to meet predetermined goals such as reusability or efficiency. The actual choice of which paradigm to use for which task in generic programming is driven by setting additional goals, rather than by an inherent prescription of procedure.


Figure 3.4: Features offered by programming paradigms from a programmer’s perspective.

The view changes from a programmer’s perspective, when considering which of the programming paradigms offers a given set of features, as sketched in Figure 3.4.

3.1.5 Programming Language for Scientific Computing

A plethora of programming languages has been developed during the history of computers. A language may restrict its facilities to support only a single programming paradigm, such as the purely functional programming languages like Haskell [24] or purely object-oriented languages such as the initial version of Java [25]. It is, however, not uncommon that several paradigms are directly supported in a single programming language, as is the case in Lisp [26] or Python [27].

It is also not uncommon for a programming language to be retrofitted with features which had not been developed at the time of its initial conception, as is the case with Fortran [10], which was extended from its purely imperative beginnings to also include object-oriented features. A programming language supporting the use of several programming paradigms is called a multi-paradigmatic language. As the run time performance of the resulting executables is of essential importance in the field of scientific computing, the choice of a programming language to implement algorithms cannot remain uninfluenced by this characteristic. The combination of the desire for multi-paradigmatic facilities, an open specification, and the availability of mature tools considerably restricts the options to choose from.

The field of scientific computing is especially demanding, since it requires flexibility and adaptability in terms of specification while at the same time meeting demands for high speed computations. Since different programming paradigms have different affinities for different problems, the simultaneous combination of several paradigms is a natural choice to meet all of these conflicting demands at once. A target library or program should thus incorporate support for as many paradigms as possible.

On the other hand, the different areas of application and their differing requirements together with a desire for high level specification has resulted in the development of domain specific languages (DSLs) with their own grammar and vocabulary. Such a DSL provides a means to abstract from hardware and even the currently available programming languages. This comes at the price of increased difficulty of interoperability with existing libraries and programs.

One way to deal with this issue of interoperability is to realize the DSL not in a stand-alone fashion, but by embedding it directly in a host language. The resulting construct, called a domain specific embedded language (DSEL), provides the benefits of a DSL, namely the high semantic level, while also incorporating interoperable access to all the libraries already available to the host language [28].

Table 3.1 shows a comparison of different features in several programming languages. As can be seen from this brief comparison, only a few select languages allow an efficient modelling of the operators which are indispensable for the realization of DSELs. Languages such as Haskell were not developed with a focus on high performance, but they still offer the definition of new operators which extend the built-in language syntax.

Language   operator overloading   parametric polymorphism   functors   time of evaluation

C          no                     partial^1                 no         compile/run time
D          partial^2              yes                       no         compile/run time
Java       no                     partial^2                 no         run time
C#         partial^2              partial^2                 no         run time
Haskell    yes                    partial^2                 yes        run time
C++        yes                    yes                       yes        compile/run time

Table 3.1: Language comparison.

Performance aspects of application development should be handled orthogonally to the main development of applications. With a multi-language approach, performance aspects cannot easily be considered orthogonally, because the use of compiled modules requires an interface layer in order to build applications [29]. The tight integration provided by the use of a DSEL addresses this problem, as whole applications may be compiled in a single step. This, of course, puts extensive stress on the tool chain and requires a minimum level of maturity from it.