Josef Weinbub
Dipl.-Ing.
weinbub(!at)iue.tuwien.ac.at
Biography:
Josef Weinbub studied electrical engineering and microelectronics at the Technische Universität Wien, where he received the degree of Diplomingenieur in 2009. He is currently working on his doctoral degree; his scientific interests are in the field of scientific computing, with a special focus on algorithms and data structures, modern programming techniques, and high-performance computing.

ViennaX: Task and Data Parallelism for Scientific Computing

The field of scientific computing is based on modeling various physical phenomena. A promising approach to improving the quality of this modeling is to combine highly specialized simulation tools, which is already common practice in fields like Computational Fluid Dynamics (CFD). In short, an important task is to couple simulations which model the relevant phenomena on different physical levels, thus performing multiphysics computations. Although several multiphysics tools are publicly available, their implementations are typically based on assumptions specific to the field of application.
The available frameworks in the field of distributed high-performance scientific computing usually focus on the data-parallel approach based on the Message Passing Interface (MPI). Typically, a mesh data structure representing the simulation domain is distributed, and the solution is evaluated locally on the individual subdomains. This approach is referred to as domain decomposition and corresponds to a data-parallel execution model. As such, the tasks to be executed by the framework are typically processed in sequence, whereas each plugin itself uses MPI to distribute the work among the compute units, for example, by employing an MPI-based linear solver.
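
To illustrate the data-parallel, domain decomposition idea sketched above, the following minimal C++/MPI example splits a one-dimensional domain among the MPI processes, evaluates a quantity locally on each subdomain, and combines the partial results with a reduction. It is a generic sketch of the concept only and does not use the ViennaX plugin interface.

// Minimal illustration of the data-parallel domain decomposition idea:
// each MPI process owns a subdomain of a 1D grid, evaluates its part of a
// global quantity, and the partial results are combined via MPI_Allreduce.
// Generic sketch only; the ViennaX plugin/scheduler interface is not used here.
#include <mpi.h>
#include <cmath>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const double pi            = 3.14159265358979323846;
    const long   global_points = 1000000;               // global 1D grid
    const long   local_points  = global_points / size;  // remainder handling omitted for brevity
    const long   offset        = rank * local_points;
    const double h             = 1.0 / global_points;

    // Each process evaluates f(x) = sin(pi*x) only on its own subdomain.
    double local_integral = 0.0;
    for (long i = 0; i < local_points; ++i)
    {
        double x = (offset + i + 0.5) * h;
        local_integral += std::sin(pi * x) * h;
    }

    // Combine the local contributions into the global result.
    double global_integral = 0.0;
    MPI_Allreduce(&local_integral, &global_integral, 1,
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        std::printf("integral of sin(pi*x) over [0,1]: %f\n", global_integral);

    MPI_Finalize();
    return 0;
}

In a data-parallel setting as described above, such MPI-based work distribution is carried out inside an individual plugin, while the framework itself processes the plugins in sequence.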
Our free open source framework, ViennaX, does not restrict itself to a specific execution behavior; instead, its focus is on providing an extensible set of different schedulers which support not only data-parallel approaches, but also serial and task-parallel implementations. In this context, serial execution refers to executing the tasks sequentially on a shared-memory machine; the plugins themselves, however, can have parallelized implementations based on, for example, OpenMP or CUDA. Such an approach becomes more and more important due to the broad availability of multi-core CPUs combined with stagnating clock frequencies. In contrast, task-parallel approaches can be used to parallelize data flow applications, for instance, wavefront or digital logic simulations.
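
For the task-parallel case, the central idea is that the execution order is derived from a data-flow (dependency) graph, so that plugins without mutual dependencies can run concurrently. The sketch below illustrates this with a deliberately simple level-by-level scheduler built on std::async; the Plugin structure, its fields, and the function names are hypothetical and are not the ViennaX API.

// Generic sketch of task-parallel data-flow execution: plugins declare
// dependencies, and the scheduler launches every plugin whose inputs are
// ready, so independent branches of the task graph run concurrently.
// The names (Plugin, run_task_parallel) are hypothetical, not the ViennaX API.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <future>
#include <map>
#include <string>
#include <vector>

struct Plugin
{
    std::string              name;
    std::vector<std::string> depends_on;   // names of upstream plugins
    std::function<void()>    execute;      // the actual work
};

// Small level-by-level scheduler: repeatedly runs all plugins whose
// dependencies have already finished, using one async task per plugin.
void run_task_parallel(std::vector<Plugin> const& plugins)
{
    std::map<std::string, bool> done;
    for (auto const& p : plugins) done[p.name] = false;

    std::size_t finished = 0;
    while (finished < plugins.size())
    {
        std::vector<std::future<void>> batch;
        std::vector<std::string>       launched;
        for (auto const& p : plugins)
        {
            if (done[p.name]) continue;
            bool ready = true;
            for (auto const& d : p.depends_on) ready = ready && done[d];
            if (!ready) continue;
            batch.push_back(std::async(std::launch::async, p.execute));
            launched.push_back(p.name);
        }
        for (auto& f : batch) f.get();                       // wait for this level
        for (auto const& n : launched) { done[n] = true; ++finished; }
    }
}

int main()
{
    auto work = [](const char* what) { std::printf("running %s\n", what); };

    std::vector<Plugin> graph = {
        {"mesh",     {},                     [&]{ work("mesh generation"); }},
        {"doping",   {"mesh"},               [&]{ work("doping profile");  }},
        {"contacts", {"mesh"},               [&]{ work("contact setup");   }},  // runs alongside "doping"
        {"solve",    {"doping", "contacts"}, [&]{ work("device solve");    }},
    };
    run_task_parallel(graph);
    return 0;
}

Compared to the purely sequential processing of tasks described for the data-parallel case, such a scheduler exploits parallelism between plugins that have no mutual data dependencies, which is the situation in the wavefront and digital logic simulation examples mentioned above.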
Our goal is not only to provide the general scientific community with a domain-independent plugin execution framework, but also to apply ViennaX to the field of semiconductor device simulation. Utilizing ViennaX not only allows us to implement flexible device simulations, due to the reusable component approach, but also enables high-performance implementations, because of the data and task parallelism support.


Comparison of the strong scaling behavior of a data-parallel finite element method application implemented with ViennaX and with a reference implementation. The minor run-time overhead of ViennaX is justified by the significantly increased flexibility.

