GSoC 2011 Ideas: Computational Science and Engineering at TU Wien

Netgen: Constructive Solid Geometry in 2D

Description

For the modeling of solids, the constructive solid geometry (CSG) technique is often used. It allows models with rather complex surfaces to be created through Boolean operations, such as intersection or union, applied to simple objects like balls or cubes.

Netgen already provides CSG support for three-dimensional objects. The task is to add support for the simpler two-dimensional case. We have a genuine scientific demand for such a two-dimensional add-on in order to run flexible two-dimensional simulations of lasers and electronic devices.
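
A minimal sketch of the underlying idea, using hypothetical class names rather than the actual Netgen interface: primitives classify points as inside or outside, and Boolean operations simply combine these classifications.

    // 2D CSG sketch: primitives classify points, Boolean nodes combine them
    // (hypothetical class names, not the Netgen API)
    struct Solid2d {
      virtual ~Solid2d() {}
      virtual bool inside(double x, double y) const = 0;
    };

    struct Circle : Solid2d {
      double cx, cy, r;
      Circle(double cx_, double cy_, double r_) : cx(cx_), cy(cy_), r(r_) {}
      bool inside(double x, double y) const {
        return (x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r;
      }
    };

    struct Rectangle : Solid2d {
      double x0, y0, x1, y1;
      Rectangle(double a, double b, double c, double d) : x0(a), y0(b), x1(c), y1(d) {}
      bool inside(double x, double y) const {
        return x >= x0 && x <= x1 && y >= y0 && y <= y1;
      }
    };

    struct Union : Solid2d {           // point is inside A or inside B
      const Solid2d &a, &b;
      Union(const Solid2d& a_, const Solid2d& b_) : a(a_), b(b_) {}
      bool inside(double x, double y) const { return a.inside(x, y) || b.inside(x, y); }
    };

    struct Intersection : Solid2d {    // point is inside A and inside B
      const Solid2d &a, &b;
      Intersection(const Solid2d& a_, const Solid2d& b_) : a(a_), b(b_) {}
      bool inside(double x, double y) const { return a.inside(x, y) && b.inside(x, y); }
    };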

Benefit for the Student

The student will acquire insight into one of the most popular freely available mesh generators. Conceiving complex geometric structures as constructions of simple objects and operations will be practiced.

Benefit for the Project

Many two-dimensional geometries can be specified much faster and more easily in Netgen than is currently possible. This will reduce the setup time in cases where a simulation along a two-dimensional cross-section is sufficient.

Requirements

Moderate skills in object-oriented C++ and file I/O are required.

Mentors

Stefan Rotter, Joachim Schöberl

NGSolve: GPU Acceleration of Flow Solvers

Description

Computational fluid dynamics (CFD) is a computationally intensive challenge and requires sophisticated algorithms as well as efficient implementations. NGSolve contains several methods for compressible and incompressible flow simulation on complex domains based on modern discontinuous Galerkin methods. We want to explore the benefit of graphics processing units (GPUs) for explicit time stepping methods in order to further reduce execution times of our solvers. The ViennaCL linear algebra library should be used for that purpose in order to support GPUs and multi-core CPUs from different vendors.
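
As a rough illustration of the intended offloading, the following sketch advances a semi-discrete system du/dt = A u by explicit forward Euler steps with ViennaCL handling the device-side linear algebra. The matrix A, the time step dt, and the number of steps are placeholders, and the ViennaCL 1.x interface (compressed_matrix, vector, linalg::prod) is assumed.

    #include <cstddef>
    #include <map>
    #include <vector>

    #include "viennacl/compressed_matrix.hpp"
    #include "viennacl/vector.hpp"
    #include "viennacl/linalg/prod.hpp"

    // advance du/dt = A u with forward Euler steps; the sparse matrix-vector
    // product and the vector update run on the GPU (or multi-core CPU) device
    std::vector<double> explicit_steps(const std::vector<std::map<unsigned int, double> >& cpu_A,
                                       const std::vector<double>& cpu_u0,
                                       double dt, unsigned int steps)
    {
      std::size_t n = cpu_u0.size();

      viennacl::compressed_matrix<double> A(n, n);
      viennacl::vector<double> u(n), Au(n);
      viennacl::copy(cpu_A, A);    // transfer the system matrix to the device
      viennacl::copy(cpu_u0, u);   // transfer the initial state

      for (unsigned int s = 0; s < steps; ++s)
      {
        Au = viennacl::linalg::prod(A, u);   // sparse matrix-vector product
        u += dt * Au;                        // explicit update
      }

      std::vector<double> cpu_u(n);
      viennacl::copy(u, cpu_u);    // transfer the result back to the host
      return cpu_u;
    }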

Benefit for the Student

The student will acquire insight into modern numerical methods in CFD. Moreover, C++ skills will be sharpened and additional GPU computing experience will be gained.

Benefit for the Project

We hope to improve the performance of the flow simulator significantly in order to provide a flexible and efficient simulator to other researchers and engineers all over the world.

Requirements

Background in numerical methods in CFD, good C++ skills, and experience in GPU programming.

Mentors

Joachim Schöberl, Karl Rupp

NGSolve: GPU Acceleration of Maxwell Solvers

Description

NGSolve contains a discontinuous Galerkin solver for the time-domain Maxwell equations. The explicit time stepping methods are inherently parallel and are thus well suited for GPU computing. The ViennaCL linear algebra library should be used for accessing the vast computational resources of GPUs.

Benefit for the Student

The student will learn modern numerical methods in computational electromagnetics.

Benefit for the Project

We hope to improve the performance of the EM simulator significantly.

Requirements

Background in computational electromagnetics, good C++ skills, experience in GPU programming.

Mentors

Joachim Schöberl, Karl Rupp

ViennaCL, NGSolve: Black-Box Linear Algebra using ViennaCL within NGSolve

Description

The linear algebra kernel in NGSolve is so far restricted to computations on the CPU. ViennaCL, on the other hand, provides linear algebra on graphics processing units (GPUs) as a standalone library. The student should add appropriate switches within NGSolve so that general (black-box) linear algebra tasks can be dispatched either to the built-in NGSolve linear algebra or to ViennaCL.
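
One possible shape for such a switch is a small runtime-selectable backend interface. The sketch below is purely illustrative: the interface and the CPU path are invented for illustration, the ViennaCL 1.x vector interface is assumed, and only a single inner-product operation is shown.

    #include <numeric>
    #include <vector>

    #include "viennacl/vector.hpp"
    #include "viennacl/linalg/inner_prod.hpp"

    // hypothetical backend interface: solver code would call into this
    // abstraction instead of a fixed linear algebra implementation
    struct LinAlgBackend {
      virtual ~LinAlgBackend() {}
      virtual double dot(const std::vector<double>& x, const std::vector<double>& y) const = 0;
    };

    // plain CPU path (stands in for the existing built-in kernels)
    struct CpuBackend : LinAlgBackend {
      double dot(const std::vector<double>& x, const std::vector<double>& y) const {
        return std::inner_product(x.begin(), x.end(), y.begin(), 0.0);
      }
    };

    // ViennaCL path: copy the operands to the compute device and evaluate there
    struct ViennaCLBackend : LinAlgBackend {
      double dot(const std::vector<double>& x, const std::vector<double>& y) const {
        viennacl::vector<double> vx(x.size()), vy(y.size());
        viennacl::copy(x, vx);
        viennacl::copy(y, vy);
        double result = viennacl::linalg::inner_prod(vx, vy);
        return result;
      }
    };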

Benefit for the Student

The student will gain experience with the pros and cons of general-purpose computing on GPUs. A deeper understanding of the algorithms involved will be obtained.

Benefit for the Projects

On the one hand, simulation times using NGSolve will be reduced considerably for a large class of problems. On the other hand, ViennaCL will gain further feedback from day-to-day use within NGSolve.

Requirements

A solid knowledge of C++ and the build process is required, but no sophisticated language concepts are necessary. The student should be able to handle include files and be familiar with basic object-oriented concepts such as inheritance.

Mentors

Joachim Schöberl, Karl Rupp
ViennaCL: Additional Sparse Matrix Formats

Description

In order to unleash the full computational power of GPUs for sparse matrix-vector products, suitable storage schemes have to be used. The optimal storage scheme, however, depends on the structure of the underlying matrix. A number of different storage formats have been proposed, with very different pros and cons: the difference in execution times for sparse matrix-vector products can easily be one order of magnitude if the wrong storage format is chosen. Support for the most commonly used formats such as ELL and HYB (see for instance this paper on sparse matrix formats) should be implemented.
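
For orientation, a CPU reference sketch of the ELL layout and its matrix-vector product is given below; the struct layout is illustrative, not ViennaCL's internal representation. Every row is padded to a common length, and the entries are stored column-major so that the corresponding GPU kernel reads memory in a coalesced fashion.

    #include <cstddef>
    #include <vector>

    // ELL storage: every row is padded to max_nnz_per_row entries;
    // values/cols are stored column-major (entry k of all rows is contiguous),
    // which yields coalesced memory access in the corresponding GPU kernel
    struct EllMatrix {
      std::size_t rows;
      std::size_t max_nnz_per_row;
      std::vector<double> values;   // size rows * max_nnz_per_row
      std::vector<int>    cols;     // same size, -1 marks a padding entry
    };

    // reference sparse matrix-vector product y = A * x in ELL format
    void ell_spmv(const EllMatrix& A, const std::vector<double>& x, std::vector<double>& y)
    {
      y.assign(A.rows, 0.0);
      for (std::size_t k = 0; k < A.max_nnz_per_row; ++k)
        for (std::size_t i = 0; i < A.rows; ++i)
        {
          int col = A.cols[k * A.rows + i];
          if (col >= 0)
            y[i] += A.values[k * A.rows + i] * x[col];
        }
    }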

Benefit for the Student

The student will dive deep into the architecture of GPUs in order to understand the pros and cons of different matrix storage schemes. A lot of experience with the latest computing technology will be gathered.

Benefit for the Project

ViennaCL will experience significant performance gains with the new sparse matrix storage formats. Almost every solver for partial differential equations using discretizations such as the finite element method will benefit.

Requirements

The student should be familiar with basic linear algebra, i.e. matrices and vectors. Moderate C and C++ knowledge is sufficient. Some background in OpenCL is an advantage, but not necessary as long as programming experience is available. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.

Mentors

Karl Rupp, Josef Weinbub
ViennaCL: Dense Gauss Solver with Pivoting (new!)

Description

For the solution of a dense system of equations, LU factorizations (i.e., Gauss solvers) are typically employed. However, a naive implementation of a Gauss solver may fail for a large class of matrices and is rather sensitive to numerical noise.

A substantial improvement can be achieved by so-called pivoting, which is essentially nothing but a reordering of the equations. It ensures, however, that the Gauss solver succeeds for all regular matrices and reduces the sensitivity with respect to numerical noise considerably. The challenge within this project is to implement a dense Gauss solver with pivoting for massively parallel architectures such as GPUs using OpenCL.
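
As a serial reference, the classic in-place LU factorization with partial (row) pivoting looks as follows; the project would map its building blocks (the pivot search is a reduction, the row updates are data-parallel) onto OpenCL kernels.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // in-place LU factorization with partial (row) pivoting of a dense n x n
    // matrix stored row-major; perm records the row interchanges
    bool lu_factor(std::vector<double>& A, std::vector<int>& perm, std::size_t n)
    {
      perm.resize(n);
      for (std::size_t i = 0; i < n; ++i) perm[i] = static_cast<int>(i);

      for (std::size_t k = 0; k < n; ++k)
      {
        // pivot search: pick the row with the largest entry in column k
        std::size_t p = k;
        for (std::size_t i = k + 1; i < n; ++i)
          if (std::fabs(A[i * n + k]) > std::fabs(A[p * n + k])) p = i;
        if (A[p * n + k] == 0.0) return false;   // matrix is singular

        if (p != k)   // swap rows k and p
        {
          for (std::size_t j = 0; j < n; ++j) std::swap(A[k * n + j], A[p * n + j]);
          std::swap(perm[k], perm[p]);
        }

        // eliminate below the pivot
        for (std::size_t i = k + 1; i < n; ++i)
        {
          double l = A[i * n + k] / A[k * n + k];
          A[i * n + k] = l;   // store the multiplier (the L factor)
          for (std::size_t j = k + 1; j < n; ++j)
            A[i * n + j] -= l * A[k * n + j];
        }
      }
      return true;
    }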

Benefit for the Student

While LU factorizations are covered theoretically in basic numerics classes, the student will get practical experience within this project. A deep understanding of parallel algorithms will be obtained.

Benefit for the Project

The existing Gauss-solver without pivoting in ViennaCL will be replaced by a numerically much more robust implementation.

Requirements

The student should be familiar with basic linear algebra and LU factorizations. Background in OpenCL (or CUDA) is an advantage. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.

Mentors

Karl Rupp, Josef Weinbub
ViennaCL: Eigenvalue Computations on GPUs and multi-core CPUs

Description

ViennaCL provides a couple of solvers for systems of equations. For many applications it is additionally desirable to compute the eigenvalues of the system matrix. As usual, the method of choice depends on the structure of the system matrix.

Subproject 1: For huge, sparse, symmetric matrices the largest eigenvalues can be obtained by the iterative Lanczos method, which should be implemented by the student. For non-symmetric systems, the Arnoldi method should be implemented. If only the largest eigenvalue is of interest, power iteration can be used. Similarly, the smallest eigenvalue can be obtained by inverse power iteration.
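
As an illustration of the simplest of these methods, here is a minimal serial sketch of power iteration for a dense, row-major matrix; the actual project would express the matrix-vector products and reductions through ViennaCL so that they run on the GPU.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // power iteration for the dominant eigenvalue of a dense n x n matrix
    // (row-major); returns the Rayleigh quotient after 'iters' iterations
    double power_iteration(const std::vector<double>& A, std::size_t n, unsigned int iters)
    {
      std::vector<double> x(n, 1.0 / std::sqrt(static_cast<double>(n)));  // unit start vector
      std::vector<double> y(n);
      double lambda = 0.0;

      for (unsigned int it = 0; it < iters; ++it)
      {
        // y = A * x
        for (std::size_t i = 0; i < n; ++i)
        {
          y[i] = 0.0;
          for (std::size_t j = 0; j < n; ++j) y[i] += A[i * n + j] * x[j];
        }

        // eigenvalue estimate via the Rayleigh quotient (x has unit norm)
        double norm = 0.0, rq = 0.0;
        for (std::size_t i = 0; i < n; ++i) { norm += y[i] * y[i]; rq += x[i] * y[i]; }
        norm = std::sqrt(norm);
        lambda = rq;

        // normalize for the next iteration
        for (std::size_t i = 0; i < n; ++i) x[i] = y[i] / norm;
      }
      return lambda;
    }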

Subproject 2: For dense matrices, the standard eigenvalue decomposition using the QR-method should be implemented. In contrast to the iterative methods outlined above for sparse matrices, the parallelization of the QR-algorithm is not as straightforward, but students who have mastered numerics classes will be able to cope with it.

Benefit for the Student

The student will learn how to identify and make use of parallel branches of eigenvalue computations. Besides familiarity with modern computing architectures, a better understanding of several basic linear algebra operations will be obtained.

Benefit for the Project

The computation of eigenvalues is one of the most important requirements on a linear algebra package, but it is not straightforward to parallelize and is thus missing in many GPU libraries. The student will fill this blind spot in ViennaCL.

Requirements

As for the required programming skills, basic C and C++ knowledge is sufficient. Some background in OpenCL is an advantage, but not necessary as long as programming experience is available.

Since the debugging of numerical algorithms can be very tedious, we are looking for a student who really enjoys twiddling with numbers in order to implement and verify the numerical algorithms. Basic linear algebra is required; the student should in particular be familiar with eigenvalues and eigenvectors. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.

Mentors

Karl Rupp, Josef Weinbub
ViennaCL: MPI Layer for Linear Algebra with Large Matrices (new!)

Description

The memory on a single graphics adapter is typically limited to dense matrices of at most 10,000 by 10,000 entries. However, for many applications much larger matrices need to be handled, which can be achieved by distributing the matrices over multiple computing nodes. On the API level, library users wish to have the distributed data handled automatically, as if the matrix were located on a single GPU.

The student should implement such a distributed matrix type and provide basic linear algebra operations such as matrix additions, matrix-matrix and matrix-vector multiplications for this type. Internally, Boost.MPI should be used.
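
A minimal sketch of the distributed-memory idea, here for a dense row-block distribution and a matrix-vector product, using Boost.MPI for communication. The data layout is illustrative (the real project would hide it behind a distributed matrix type), and the global dimension is assumed to be divisible by the number of ranks.

    #include <cstddef>
    #include <vector>

    #include <boost/mpi.hpp>

    namespace mpi = boost::mpi;

    // distributed dense matrix-vector product y_local = A_local * x:
    // each rank owns a contiguous block of rows (row-major) and the matching
    // slice of x; all slices are assumed to have equal length
    std::vector<double> dist_matvec(const mpi::communicator& world,
                                    const std::vector<double>& A_local,   // local_rows x n
                                    const std::vector<double>& x_local,   // n / world.size() entries
                                    std::size_t n)
    {
      // gather the full vector x on every rank (equal-sized contributions)
      std::vector<double> x_full;
      mpi::all_gather(world, &x_local[0], static_cast<int>(x_local.size()), x_full);

      // multiply the locally owned rows with the full vector
      std::size_t local_rows = A_local.size() / n;
      std::vector<double> y_local(local_rows, 0.0);
      for (std::size_t i = 0; i < local_rows; ++i)
        for (std::size_t j = 0; j < n; ++j)
          y_local[i] += A_local[i * n + j] * x_full[j];
      return y_local;
    }

    int main(int argc, char** argv)
    {
      mpi::environment env(argc, argv);
      mpi::communicator world;
      // ... fill A_local and x_local for this rank, then call dist_matvec(...)
      return 0;
    }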

Benefit for the Student

The student will get hands-on experience in high-performance computing. The challenges of distributed computing will be tackled.

Benefit for the Project

Many linear algebra libraries offering GPU acceleration are limited to shared-memory systems and require that all data fit onto the GPUs. ViennaCL will be one of the first open source libraries to support GPU acceleration for large-scale problems.

Requirements

The student should be familiar with basic linear algebra, i.e. matrices and vectors. Moderate C and C++ knowledge is sufficient. Familiarity with MPI is desired. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.

Mentors

Josef Weinbub, Karl Rupp
ViennaCL: Sparse Approximate Inverse Preconditioner (new!)

Description

When solving a sparse system of linear equations Ax = b for x by means of iterative methods, the convergence can be improved considerably by formally multiplying the system with a matrix B that is a good approximation to the inverse of A. One possibility is to construct an approximate sparse inverse B by minimizing ||AB - Id|| for a certain sparsity pattern of B, where Id denotes the identity matrix.
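
The standard observation that makes this construction attractive for parallel hardware is that the Frobenius norm decouples column by column:

    \min_B \|AB - \mathrm{Id}\|_F^2 \;=\; \sum_{k=1}^{n} \min_{b_k} \|A b_k - e_k\|_2^2,

where b_k denotes the k-th column of B and e_k the k-th unit vector. Since each b_k is restricted to its prescribed sparsity pattern, these n small least-squares problems are independent and can be solved concurrently, for example via small QR decompositions (hence the QR decompositions mentioned in the requirements below).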

The task is to implement a sparse approximate inverse preconditioner in ViennaCL using OpenCL. Since such a preconditioner can be computed in parallel, it is an ideal candidate for GPUs and multi-core CPUs.

Benefit for the Student

The student will appreciate the importance of efficient parallel preconditioners. Moreover, (s)he will learn how to use current multi-core hardware efficiently for linear algebra computations.

Benefit for the Project

In contrast to dense linear algebra, for which a number of GPU implementations are available, sparse linear algebra is often ignored in other libraries. ViennaCL will be the first general-purpose linear algebra library to support a non-trivial parallel preconditioner.

Requirements

The student should be familiar with basic linear algebra, i.e. matrices and vectors in general and QR decompositions in particular. Moderate C and C++ knowledge is sufficient. Background in OpenCL (or CUDA) is an advantage. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.

Mentors

Karl Rupp, Josef Weinbub
ViennaMesh: Aggressive Tetrahedral Mesh Improvement

Description

To further improve the quality of the generated tetrahedral volume meshes, the open source tool Stellar should be evaluated. The student should study the source package and develop an interface for ViennaMesh.

Benefit for the Student

The student will get hands-on experience with a state-of-the-art mesh improvement tool and recognize the importance of code modularity in scientific software.

Benefit for the Project

The unified access to different meshing kernels in ViennaMesh will be enriched by the ability to improve mesh quality as an optional post-processing step.

Requirements

The student should have skills in generic programming in C++, such as traits and tag dispatching. Additionally, a basic understanding of interfacing with external C libraries is required.

Mentors

Josef Weinbub, Karl Rupp
ViennaMesh: Distributed Parallel Meshing of Segments

Description

Large mesh generation and adaptation tasks for real-world applications, such as the simulation of a thigh bone (femur), naturally introduce the need for parallelization. For a hull mesh consisting of separate pieces with well-defined interfaces, a volume mesh for the full structure can be obtained by meshing the individual pieces in parallel with serial meshing kernels such as Netgen. The student should accomplish such a parallel volume meshing step based on a small-scale shared-memory approach using Boost.Thread, and on a large-scale distributed-memory approach using Boost.MPI.
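
For the shared-memory part, the basic structure could look like the following sketch; the segment and mesh types as well as the mesh_segment() routine are placeholders (the latter would wrap a serial Netgen call), and Boost.Thread is used as suggested above.

    #include <cstddef>
    #include <vector>

    #include <boost/bind.hpp>
    #include <boost/ref.hpp>
    #include <boost/thread.hpp>

    // hypothetical placeholders: a hull segment and the volume mesh produced from it
    struct HullSegment {};
    struct VolumeMesh  {};

    // placeholder: would invoke a serial volume mesher (e.g. Netgen) on 'segment'
    void mesh_segment(const HullSegment& segment, VolumeMesh& result)
    {
      // ... run the serial meshing kernel and store the volume mesh in 'result'
    }

    // mesh all segments concurrently: one worker thread per segment
    void mesh_all_segments(const std::vector<HullSegment>& segments,
                           std::vector<VolumeMesh>& volumes)
    {
      volumes.resize(segments.size());
      boost::thread_group workers;
      for (std::size_t i = 0; i < segments.size(); ++i)
        workers.create_thread(boost::bind(&mesh_segment,
                                          boost::cref(segments[i]),
                                          boost::ref(volumes[i])));
      workers.join_all();   // wait until all segments are meshed
    }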

Benefit for the Student

The student will gain valuable experience with the parallelization of the mesh generation process for large scale simulations. Obvious and not-so-obvious differences between distributed and shared memory architectures are explored in a hands-on manner.

Benefit for the Project

ViennaMesh can access the full power of multi-core CPUs, reducing total meshing times to a bare minimum, since the interiors of different segments are a priori independent.

Requirements

Good C++ skills are required, accompanied by some basic experience in generic programming such as tag dispatching. Basic skills in multi-threaded programming or MPI programming are also required.

Mentors

Josef Weinbub, Dieter Pahr
ViennaMesh: Hull Mesh Decomposition

Description

For many applications, such as the simulation of human bones, a volume mesh is constructed out of a hull mesh. Due to the high complexity of such geometries, it is advantageous to decompose the hull mesh into smaller segments; the volume meshes are then created in a parallel, distributed manner. The student should provide a C++ interface for a hull mesh decomposition based on simple geometric rules. The decomposed hull mesh then serves as input for a distributed parallel meshing engine as described in the idea "ViennaMesh: Distributed Parallel Meshing of Segments".
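
A deliberately simple example of such a geometric rule, with a minimal hypothetical hull-mesh representation: triangles are assigned to one of two segments according to the x-coordinate of their centroid. A real decomposition would additionally have to construct well-defined interface surfaces between the segments.

    #include <cstddef>
    #include <vector>

    // minimal hypothetical hull-mesh representation
    struct Point    { double x, y, z; };
    struct Triangle { int v[3]; };     // indices into the point list
    struct HullMesh { std::vector<Point> points; std::vector<Triangle> triangles; };

    // assign each hull triangle to one of two segments by the x-coordinate
    // of its centroid relative to the cutting plane x = x_cut
    void split_by_centroid_x(const HullMesh& hull, double x_cut,
                             std::vector<Triangle>& left, std::vector<Triangle>& right)
    {
      for (std::size_t i = 0; i < hull.triangles.size(); ++i)
      {
        const Triangle& t = hull.triangles[i];
        double cx = (hull.points[t.v[0]].x
                   + hull.points[t.v[1]].x
                   + hull.points[t.v[2]].x) / 3.0;
        (cx < x_cut ? left : right).push_back(t);
      }
    }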

Benefit for the Student

The student will gain profound knowledge about hull meshes and learn to design a unified interface. Insight into the key aspects of distributed meshing will be gained.

Benefit for the Project

While a single hull mesh allows for only a single volume mesher instance, properly decomposed hull meshes can be processed on large computing clusters by hundreds of volume mesher instances in parallel. ViennaMesh will become a valuable tool for large-scale simulations with millions to billions of unknowns.

Requirements

Good C++ skills are required, accompanied by some basic experience in generic programming such as tag dispatching. Familiarity with the basic principles of mesh generation is an advantage.

Mentors

Josef Weinbub, Dieter Pahr

ViennaMesh: Optimized Tool Chain

Description

ViennaMesh is based on a set of mesh-related tools and algorithms for generation, adaptation, and classification. Due to the diversity of the input geometries, there is no globally optimal tool chain that yields high-quality meshes for all inputs. The student should investigate a generic algorithm to find an optimum for the mesh generation tool chain with regard to arbitrary coupled properties, for example reducing the mesh element size while simultaneously increasing the mesh quality.
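
At its simplest, such an optimization could be a search over candidate tool-chain configurations scored by a coupled objective. The sketch below is purely illustrative: the configuration, the metrics, and the run_tool_chain() placeholder are invented, and the actual project would aim at a more generic and more efficient strategy.

    #include <cstddef>
    #include <limits>
    #include <vector>

    // hypothetical placeholders for a tool-chain configuration and the
    // quality metrics extracted from the resulting mesh
    struct ToolChainConfig { double max_element_size; int smoothing_steps; };
    struct MeshMetrics     { double avg_element_size; double min_quality; };

    // placeholder: would run the actual ViennaMesh tool chain with 'cfg'
    MeshMetrics run_tool_chain(const ToolChainConfig& cfg)
    {
      MeshMetrics m = { cfg.max_element_size, 0.5 };   // dummy values
      return m;
    }

    // coupled objective: small elements and high quality are both rewarded
    double objective(const MeshMetrics& m)
    {
      return m.avg_element_size - 2.0 * m.min_quality;   // weights are arbitrary
    }

    // exhaustive search over a set of candidate configurations
    ToolChainConfig find_best(const std::vector<ToolChainConfig>& candidates)
    {
      ToolChainConfig best = candidates.front();
      double best_score = std::numeric_limits<double>::max();
      for (std::size_t i = 0; i < candidates.size(); ++i)
      {
        double score = objective(run_tool_chain(candidates[i]));
        if (score < best_score) { best_score = score; best = candidates[i]; }
      }
      return best;
    }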

Benefit for the Student

The student will get a deep understanding of the different interfaces of meshing kernels and gain experience in developing a common, unified interface for these. Moreover, C++ skills will be polished considerably during the project.

Benefit for the Project

ViennaMesh will become simpler and more convenient to use for library users. In particular, a manual search for optimal parameters by trial and error will be replaced by automatic methods.

Requirements

Aside from skills in generic programming in C++, the student should have some basic experience with certain Boost libraries, such as Fusion and MPL. Techniques like traits and tag dispatching, as well as basic metaprogramming skills, are required.

Mentors

Josef Weinbub, Karl Rupp