Description

In order to unleash the full computational power of GPUs for sparse matrix-vector products, suitable storage schemes have to be used. The optimum storage scheme, however, depends on the structure of the underlying matrix. A number of different storage formats have been proposed, each with very different pros and cons. The difference in execution times for sparse matrix-vector products can easily be one order of magnitude if the wrong storage format is chosen. Support for the most commonly used formats such as ELL and HYB (see for instance this paper on sparse matrix formats) should be implemented.
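To illustrate the kind of storage scheme involved, the following is a minimal C++ sketch of the ELL layout, not ViennaCL's actual data structures or API: every row is padded to the length of the longest row, and entries are stored so that consecutive rows are adjacent in memory, which allows coalesced accesses when one GPU thread handles one row. All names here are illustrative.

    #include <cstddef>
    #include <vector>

    // Hypothetical ELL container: every row is padded to 'max_nnz_per_row' entries.
    // Entries are stored column-major (entry k of all rows is contiguous) so that
    // consecutive GPU threads, each handling one row, read consecutive memory.
    struct ell_matrix
    {
      std::size_t rows;
      std::size_t max_nnz_per_row;
      std::vector<double>      values;   // size: rows * max_nnz_per_row, zero-padded
      std::vector<std::size_t> columns;  // same layout; padded entries point to column 0
    };

    // CPU reference of y = A * x in ELL format; a GPU kernel would assign one
    // work-item per row and execute only the inner loop.
    void ell_spmv(ell_matrix const & A, std::vector<double> const & x, std::vector<double> & y)
    {
      for (std::size_t row = 0; row < A.rows; ++row)
      {
        double sum = 0.0;
        for (std::size_t k = 0; k < A.max_nnz_per_row; ++k)
        {
          std::size_t idx = k * A.rows + row;        // column-major padded layout
          sum += A.values[idx] * x[A.columns[idx]];  // padded values are zero, so they contribute nothing
        }
        y[row] = sum;
      }
    }

The padding is also the weakness of ELL: if a few rows are much longer than the rest, most of the stored entries are zeros. This is why the HYB format stores the regular part of the matrix in ELL and the remaining overflow entries of long rows in a separate COO portion.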

Benefit for the Student

The student will dive deep into the architecture of GPUs in order to understand the pros and cons of different matrix storage schemes, and will gain a lot of experience with the latest computing technology.

Benefit for the Project

ViennaCL will experience significant performance gains with the new sparse matrix storage formats. Almost every solver for partial differential equations using discretizations such as the finite element method will benefit.

Requirements

The student should be familiar with basic linear algebra, i.e. matrices and vectors. Moderate C and C++ knowledge is sufficient. Some background in OpenCL is an advantage, but not necessary as long as general programming experience is available. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.

Mentors

Karl Rupp, Josef Weinbub