<h1>GSoC 2011 Ideas</h1>
<p>Computational Science and Engineering at TU Wien (cse@iue.tuwien.ac.at)</p>

<h2>Netgen: Constructive Solid Geometry in 2D</h2>
<h3><img src="http://localhost/images/static_content/ideas/2011/csg4netgen.png" alt="" align="right" />Description</h3>
<p>For the modeling of solids, the <a href="http://en.wikipedia.org/wiki/Constructive_solid_geometry">constructive solid geometry (CSG)</a> technique is often used. It allows models with rather complex surfaces to be built from Boolean operations such as intersection or union of simple objects like balls or cubes.</p>
<p>Netgen already provides CSG support for three-dimensional objects. The task is to add support for the simpler two-dimensional case. There is an inherent scientific demand for such a two-dimensional add-on in order to run flexible two-dimensional simulations of lasers or electronic devices.</p>
<h3>Benefit for the Student</h3>
<p>The student will acquire insight into one of the most popular freely available mesh generators. The conception of complex geometric structures as a construction of simple objects and operations will be trained.</p>
<h3>Benefit for the Project</h3>
<p>Many two-dimensional geometries can be specified much faster and more easily in Netgen than is currently the case. This will reduce the setup time in cases where a simulation on a two-dimensional cross-section is sufficient.</p>
<h3>Requirements</h3>
<p>Moderate skills in object-oriented C++ and file I/O are required.</p>
<h3>Mentors</h3>
<p><a href="http://www.iue.tuwien.ac.at/cse/index.html#rotter">Stefan Rotter</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#schoeberl">Joachim Schöberl</a></p>
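<p>To make the 2D CSG concept concrete, the following is a minimal illustrative sketch of how primitives and Boolean operations could be composed into a tree with a point-membership test. All class and method names are assumptions chosen for illustration and do not reflect Netgen's actual CSG interface.</p>
<pre>
// Hypothetical 2D CSG tree: primitives are combined by Boolean operations,
// and a point-membership test drives downstream processing (e.g. meshing).
// Illustration only, not Netgen code.
struct Solid2D {                                    // base class for 2D solids
  virtual ~Solid2D() {}
  virtual bool contains(double x, double y) const = 0;
};

struct Disk : Solid2D {                             // primitive: disk with center (cx, cy), radius r
  double cx, cy, r;
  Disk(double cx_, double cy_, double r_) : cx(cx_), cy(cy_), r(r_) {}
  bool contains(double x, double y) const {
    double dx = x - cx, dy = y - cy;
    return dx*dx + dy*dy <= r*r;
  }
};

struct Intersection : Solid2D {                     // Boolean AND of two solids
  const Solid2D *a, *b;                             // non-owning pointers, for brevity
  Intersection(const Solid2D * a_, const Solid2D * b_) : a(a_), b(b_) {}
  bool contains(double x, double y) const {
    return a->contains(x, y) && b->contains(x, y);
  }
};
// Union and Difference follow the same pattern using || and "and not", respectively.
</pre>
<p>A point-classification routine like this already suffices to drive a simple background-grid 2D mesher; a full implementation would additionally expose the boundary curves of the resulting region.</p>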
<h2>NGSolve: GPU Acceleration of Flow Solvers</h2>
<h3><img src="http://localhost/images/static_content/ideas/2011/cylinder3d.jpg" alt="" align="right" />Description</h3>
<p>Computational fluid dynamics (CFD) is computationally demanding and requires sophisticated algorithms as well as efficient implementations. NGSolve contains several methods for compressible and incompressible flow simulation on complex domains, based on modern discontinuous Galerkin methods. We want to explore the benefit of graphics processing units (GPUs) for explicit time stepping methods in order to further reduce the execution times of our solvers. The ViennaCL linear algebra library should be used for that purpose in order to support GPUs and multi-core CPUs from different vendors.</p>
<h3>Benefit for the Student</h3>
<p>The student will acquire insight into modern numerical methods in CFD. Moreover, C++ skills will be sharpened and additional GPU computing experience gained.</p>
<h3>Benefit for the Project</h3>
<p>We hope to improve the performance of the flow simulator significantly in order to provide a flexible and efficient simulator to other researchers and engineers all over the world.</p>
<h3>Requirements</h3>
<p>Background in numerical methods for CFD, good C++ skills, and experience in GPU programming.</p>
<h3>Mentors</h3>
<p><a href="http://www.iue.tuwien.ac.at/cse/index.html#schoeberl">Joachim Schöberl</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a></p>
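<p>As a rough illustration of how an explicit time stepping loop maps onto ViennaCL, consider an explicit Euler update u = u + dt * A u for an already assembled sparse operator A. This is a generic sketch under the assumption that the matrix is assembled on the host; it is not NGSolve code, and mass-matrix treatment, boundary conditions and stability control are omitted.</p>
<pre>
// Sketch: explicit Euler time stepping on the GPU via ViennaCL. Illustration only.
#include <map>
#include <vector>
#include <viennacl/compressed_matrix.hpp>
#include <viennacl/vector.hpp>
#include <viennacl/linalg/prod.hpp>

void explicit_euler(std::vector<std::map<unsigned int, double> > const & host_A,  // sparse operator in STL format
                    std::vector<double> & host_u,                                  // state vector (in/out)
                    double dt, unsigned int steps)
{
  std::size_t n = host_u.size();
  viennacl::compressed_matrix<double> A(n, n);
  viennacl::vector<double>            u(n), Au(n);

  viennacl::copy(host_A, A);                 // transfer matrix to the compute device
  viennacl::copy(host_u, u);                 // transfer initial state

  for (unsigned int s = 0; s < steps; ++s)
  {
    Au = viennacl::linalg::prod(A, u);       // sparse matrix-vector product on the device
    u += dt * Au;                            // vector update stays on the device as well
  }

  viennacl::copy(u, host_u);                 // copy the result back to the host
}
</pre>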
<h2>NGSolve: GPU Acceleration of Maxwell Solvers</h2>
<h3>Description</h3>
<p>NGSolve contains a discontinuous Galerkin solver for the time-domain Maxwell equations. The explicit time stepping methods are inherently parallel and thus well suited for GPU computing. The ViennaCL linear algebra library should be used for accessing the vast computational resources of GPUs.</p>
<h3>Benefit for the Student</h3>
<p>The student will learn modern numerical methods in computational electromagnetics.</p>
<h3>Benefit for the Project</h3>
<p>We hope to improve the performance of the EM simulator significantly.</p>
<h3>Requirements</h3>
<p>Background in computational electromagnetics, good C++ skills, and experience in GPU programming.</p>
<h3>Mentors</h3>
<p><a href="http://www.iue.tuwien.ac.at/cse/index.html#schoeberl">Joachim Schöberl</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a></p>
<h2>ViennaCL, NGSolve: Black-Box Linear Algebra using ViennaCL within NGSolve</h2>
<div class="idea-content">
<h3>Description</h3>
<p>The linear algebra kernel in NGSolve is so far restricted to computations on the CPU. On the other hand, ViennaCL provides linear algebra on graphics processing units (GPUs) as a standalone library. The student should add appropriate switches within NGSolve in order to switch between the built-in NGSolve linear algebra and ViennaCL for general (black-box) linear algebra tasks.</p>
<h3>Benefit for the Student</h3>
<p>The student will gain experience with the pros and cons of general-purpose computing on GPUs. A deeper understanding of the algorithms involved will be obtained.</p>
<h3>Benefit for the Projects</h3>
<p>On the one hand, simulation times using NGSolve will be reduced considerably for a large class of problems. On the other hand, ViennaCL will gain further feedback from day-to-day use within NGSolve.</p>
<h3>Requirements</h3>
<p>A solid knowledge of C++ and the build process is required, but no sophisticated language concepts are necessary. The student should be able to handle include files and be familiar with basic object-oriented concepts such as inheritance.</p>
<h3>Mentors</h3>
<a href="http://www.iue.tuwien.ac.at/cse/index.html#schoeberl">Joachim Schöberl</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a></div>
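<p>A minimal sketch of how such a switch could look from the calling side is given below. The backend enum and the CPU fallback branch are placeholders and do not correspond to actual NGSolve classes; only the ViennaCL calls (types, copy, and the conjugate gradient solver tag) reflect the existing ViennaCL API.</p>
<pre>
// Sketch of a black-box backend switch: the same solve call is served either by
// the existing CPU linear algebra or by ViennaCL's CG solver on the GPU.
#include <map>
#include <vector>
#include <viennacl/compressed_matrix.hpp>
#include <viennacl/vector.hpp>
#include <viennacl/linalg/cg.hpp>

enum LinAlgBackend { BACKEND_BUILTIN_CPU, BACKEND_VIENNACL };

std::vector<double> solve_blackbox(std::vector<std::map<unsigned int, double> > const & A_host,
                                   std::vector<double> const & rhs_host,
                                   LinAlgBackend backend)
{
  if (backend == BACKEND_VIENNACL)
  {
    std::size_t n = rhs_host.size();
    viennacl::compressed_matrix<double> A(n, n);
    viennacl::vector<double>            rhs(n);
    viennacl::copy(A_host, A);
    viennacl::copy(rhs_host, rhs);

    // iterative conjugate gradient solve on the OpenCL device:
    viennacl::vector<double> x = viennacl::linalg::solve(A, rhs, viennacl::linalg::cg_tag(1e-8, 500));

    std::vector<double> result(n);
    viennacl::copy(x, result);
    return result;
  }

  // ... otherwise delegate to the built-in CPU linear algebra (placeholder) ...
  return std::vector<double>(rhs_host.size(), 0.0);
}
</pre>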
<h2>ViennaCL: Additional Sparse Matrix Formats</h2>
<div class="idea-content">
<h3>Description</h3>
<p>In order to unleash the full computational power of GPUs for sparse matrix-vector products, suitable storage schemes have to be used. The optimum storage scheme, however, depends on the structure of the underlying matrix. A number of different storage formats have been proposed, with very different pros and cons. The difference in execution times for sparse matrix-vector products can easily be one order of magnitude if the wrong storage format is chosen. Support for the most commonly used formats such as ELL and HYB (see for instance <a href="http://www.nvidia.com/object/nvidia_research_pub_001.html">this paper on sparse matrix formats</a>) should be implemented.</p>
<h3>Benefit for the Student</h3>
<p>The student will dive deep into the architecture of GPUs in order to understand the pros and cons of different matrix storage schemes. A lot of experience with the latest computing technology will be gathered.</p>
<h3>Benefit for the Project</h3>
<p>ViennaCL will experience significant performance gains with the new sparse matrix storage formats. Almost every solver for partial differential equations using discretizations such as the finite element method will benefit.</p>
<h3>Requirements</h3>
<p>The student should be familiar with basic linear algebra, i.e. matrices and vectors. Moderate C and C++ knowledge is sufficient. Some background in OpenCL is an advantage, but not necessary as long as general programming experience is available. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.</p>
<h3>Mentors</h3>
<a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#weinbub">Josef Weinbub</a></div>
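<p>To make the role of the storage layout concrete, the following CPU reference sketch shows the ELL (ELLPACK) format and its matrix-vector product: each row is padded to a fixed width, and values and column indices are stored column-major so that, on a GPU, threads assigned to consecutive rows access contiguous memory. This is an illustration only and not the ViennaCL implementation; HYB simply combines an ELL part with a COO part for the few rows exceeding the ELL width.</p>
<pre>
// CPU reference of the ELLPACK (ELL) sparse format and its matrix-vector product.
#include <vector>

struct EllMatrix {
  std::size_t rows, max_nnz_per_row;
  std::vector<double>       values;   // size rows * max_nnz_per_row, column-major
  std::vector<unsigned int> cols;     // same layout; padding entries use column 0 with value 0
};

void ell_spmv(EllMatrix const & A, std::vector<double> const & x, std::vector<double> & y)
{
  for (std::size_t i = 0; i < A.rows; ++i) {        // on a GPU: one thread per row
    double sum = 0.0;
    for (std::size_t k = 0; k < A.max_nnz_per_row; ++k) {
      std::size_t idx = k * A.rows + i;             // column-major access: coalesced reads across threads
      sum += A.values[idx] * x[A.cols[idx]];
    }
    y[i] = sum;
  }
}
</pre>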
<h2>ViennaCL: Dense Gauss-Solver with Pivoting (new!)</h2>
<div class="idea-content">
<h3>Description</h3>
<p>For the solution of a dense system of equations, LU factorizations (also known as Gauss solvers) are typically employed. However, a naive implementation of a Gauss solver may fail for a large class of matrices and is rather sensitive to numerical noise.</p>
<p>A substantial improvement can be achieved by so-called pivoting. This is essentially nothing but a reordering of the equations. However, it ensures that the Gauss solver succeeds for all regular matrices and considerably reduces the sensitivity with respect to numerical noise. The challenge within this project is to implement a dense Gauss solver with pivoting for massively parallel architectures such as GPUs using OpenCL.</p>
<h3>Benefit for the Student</h3>
<p>While LU factorizations are covered theoretically in basic numerics classes, the student will get practical experience within this project. A deep understanding of parallel algorithms will be obtained.</p>
<h3>Benefit for the Project</h3>
<p>The existing Gauss solver without pivoting in ViennaCL will be replaced by a numerically much more robust implementation.</p>
<h3>Requirements</h3>
<p>The student should be familiar with basic linear algebra and LU factorizations. Background in OpenCL (or CUDA) is an advantage. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.</p>
<h3>Mentors</h3>
<a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#weinbub">Josef Weinbub</a></div>
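<p>The following serial reference sketch shows LU factorization with partial (row) pivoting, i.e. exactly the reordering of equations described above; a GPU implementation would parallelize the pivot search and the update of the trailing submatrix. Illustration only, not ViennaCL code.</p>
<pre>
// In-place LU factorization with partial pivoting of a dense row-major n x n matrix.
// L is stored below the diagonal (unit diagonal implied), U on and above it;
// 'perm' records the row permutation applied.
#include <vector>
#include <cmath>
#include <algorithm>
#include <stdexcept>

void lu_partial_pivoting(std::vector<double> & A, std::vector<std::size_t> & perm, std::size_t n)
{
  perm.resize(n);
  for (std::size_t i = 0; i < n; ++i) perm[i] = i;

  for (std::size_t k = 0; k < n; ++k) {
    std::size_t p = k;                               // pick the row with the largest |A(i,k)|, i >= k
    for (std::size_t i = k + 1; i < n; ++i)
      if (std::fabs(A[i*n + k]) > std::fabs(A[p*n + k])) p = i;
    if (A[p*n + k] == 0.0) throw std::runtime_error("matrix is singular");

    if (p != k) {                                    // swap rows k and p (the "reordering of equations")
      for (std::size_t j = 0; j < n; ++j) std::swap(A[k*n + j], A[p*n + j]);
      std::swap(perm[k], perm[p]);
    }

    for (std::size_t i = k + 1; i < n; ++i) {        // eliminate below the pivot
      A[i*n + k] /= A[k*n + k];                      // multiplier, stored as L(i,k)
      for (std::size_t j = k + 1; j < n; ++j)        // trailing update: the part to parallelize on a GPU
        A[i*n + j] -= A[i*n + k] * A[k*n + j];
    }
  }
}
</pre>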
<h2>ViennaCL: Eigenvalue Computations on GPUs and multi-core CPUs</h2>
<div class="idea-content">
<h3>Description</h3>
<p>ViennaCL provides a couple of solvers for systems of equations. For many applications it is additionally desirable to compute the eigenvalues of the system matrix. As usual, the method of choice depends on the structure of the system matrix.</p>
<p><b>Subproject 1</b>: For huge, sparse, symmetric matrices the largest eigenvalues can be obtained by the iterative Lanczos method, which should be implemented by the student. For non-symmetric systems, Arnoldi's method should be implemented. If only the largest eigenvalue is of interest, power iteration can be used. Similarly, the smallest eigenvalue can be obtained by inverse power iteration.</p>
<p><b>Subproject 2</b>: For dense matrices, the standard eigenvalue decomposition using the QR method should be implemented. In contrast to the iterative methods outlined above for sparse matrices, the parallelization of the QR algorithm is not as straightforward, but students who have mastered numerics classes will be able to cope with it.</p>
<h3>Benefit for the Student</h3>
<p>The student will learn how to identify and make use of parallel branches of eigenvalue computations. Besides familiarity with modern computing architectures, a better understanding of several basic linear algebra operations will be obtained.</p>
<h3>Benefit for the Project</h3>
<p>The computation of eigenvalues is one of the most important requirements on a linear algebra package, but it is not straightforward to parallelize and thus missing in many GPU libraries. The student will fill this blind spot in ViennaCL.</p>
<h3>Requirements</h3>
<p>As for the required programming skills, basic C and C++ knowledge is sufficient. Some background in OpenCL is an advantage, but not necessary as long as general programming experience is available.</p>
<p>Since the debugging of numerical algorithms can be very tedious, we are looking for a student who really enjoys twiddling with numbers in order to implement and verify the numerical algorithms. Basic linear algebra is required. The student should in particular be familiar with eigenvalues and eigenvectors. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.</p>
<h3>Mentors</h3>
<a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#weinbub">Josef Weinbub</a></div>
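<p>As a small illustration of the iterative flavour of Subproject 1, a power iteration for the dominant eigenvalue can be written almost entirely in terms of operations ViennaCL already provides (sparse matrix-vector product, inner product, norm). The sketch below is illustrative only; convergence checks, restarts and the Lanczos/Arnoldi machinery are omitted.</p>
<pre>
// Sketch: power iteration for the eigenvalue of largest magnitude, expressed with
// ViennaCL types so that every operation runs on the compute device. Illustration only.
#include <viennacl/compressed_matrix.hpp>
#include <viennacl/vector.hpp>
#include <viennacl/linalg/prod.hpp>
#include <viennacl/linalg/inner_prod.hpp>
#include <viennacl/linalg/norm_2.hpp>

double power_iteration(viennacl::compressed_matrix<double> const & A,
                       viennacl::vector<double> & x,    // nonzero start vector, overwritten by the eigenvector estimate
                       unsigned int iterations)
{
  double nrm = viennacl::linalg::norm_2(x);             // normalize the start vector
  x /= nrm;

  double lambda = 0.0;
  viennacl::vector<double> Ax(x.size());
  for (unsigned int k = 0; k < iterations; ++k)
  {
    Ax = viennacl::linalg::prod(A, x);                  // sparse matrix-vector product
    lambda = viennacl::linalg::inner_prod(x, Ax);       // Rayleigh quotient (x has unit norm)
    nrm = viennacl::linalg::norm_2(Ax);
    Ax /= nrm;                                          // normalize the new iterate
    x = Ax;
  }
  return lambda;
}
</pre>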
<h2>ViennaCL: MPI Layer for Linear Algebra with Large Matrices (new!)</h2>
<div class="idea-content">
<h3>Description</h3>
<p>The memory on a single graphics adapter typically limits dense matrices to roughly 10,000 by 10,000 entries. However, many applications require much larger matrices, which can be handled by distributing the matrices across multiple computing nodes. On the API level, library users wish to have the distributed data handled automatically, as if the matrix were located on a single GPU.</p>
<p>The student should implement such a distributed matrix type and provide basic linear algebra operations such as matrix additions, matrix-matrix and matrix-vector multiplications for this type. Internally, Boost.MPI should be used.</p>
<h3>Benefit for the Student</h3>
<p>The student will get hands-on experience in high-performance computing. The challenges of distributed computing will be tackled.</p>
<h3>Benefit for the Project</h3>
<p>Many linear algebra libraries offering GPU acceleration are limited to shared-memory systems and require that all data fits onto the GPUs. ViennaCL will be one of the first open source libraries to support GPU acceleration for large-scale problems.</p>
<h3>Requirements</h3>
<p>The student should be familiar with basic linear algebra, i.e. matrices and vectors. Moderate C and C++ knowledge is sufficient. Familiarity with MPI is desired. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.</p>
<h3>Mentors</h3>
<a href="http://www.iue.tuwien.ac.at/cse/index.html#weinbub">Josef Weinbub</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a></div>
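<p>The sketch below illustrates one possible starting point for such a distributed type: a dense matrix stored in block-row fashion, with a matrix-vector product whose local part could later be delegated to ViennaCL on each node. The function and parameter names are illustrative assumptions; only the Boost.MPI calls (communicator, all_gather) are existing library API, and the dimension is assumed to be divisible by the number of ranks.</p>
<pre>
// Sketch: block-row distributed dense matrix-vector product with Boost.MPI.
// Each rank owns a contiguous block of rows; after the local product the partial
// results are combined with all_gather so that every rank holds the full result.
#include <vector>
#include <boost/mpi.hpp>

std::vector<double> distributed_matvec(std::vector<double> const & local_rows,  // (n/ranks) x n block, row-major
                                       std::vector<double> const & x,           // full vector, replicated on every rank
                                       boost::mpi::communicator const & comm)
{
  std::size_t n             = x.size();
  std::size_t rows_per_rank = n / comm.size();

  std::vector<double> local_y(rows_per_rank, 0.0);
  for (std::size_t i = 0; i < rows_per_rank; ++i)       // local block times full vector
    for (std::size_t j = 0; j < n; ++j)                 // (this loop is the candidate for GPU offloading)
      local_y[i] += local_rows[i * n + j] * x[j];

  std::vector<double> y;                                // collect all blocks on every rank
  boost::mpi::all_gather(comm, &local_y[0], static_cast<int>(rows_per_rank), y);
  return y;
}
</pre>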
<h2>ViennaCL: Sparse Approximate Inverse Preconditioner (new!)</h2>
<div class="idea-content">
<h3>Description</h3>
<p>When solving a sparse system of linear equations Ax = b for x by means of iterative methods, the convergence can be improved considerably by formally multiplying the system with a matrix B that is a good approximation to the inverse of A. One possibility is to construct a sparse approximate inverse B by minimizing ||AB - Id|| for a certain sparsity pattern of B, where Id denotes the identity matrix.</p>
<p>The task is to implement a sparse approximate inverse preconditioner in ViennaCL using OpenCL. Since such a preconditioner can be computed in parallel, it is an ideal candidate for GPUs and multi-core CPUs.</p>
<h3>Benefit for the Student</h3>
<p>The student will appreciate the importance of efficient parallel preconditioners. Moreover, (s)he will learn how to use current multi-core hardware efficiently for linear algebra computations.</p>
<h3>Benefit for the Project</h3>
<p>In contrast to dense linear algebra, for which a number of GPU implementations are available, sparse linear algebra is often ignored in other libraries. ViennaCL will be the first general-purpose linear algebra library to support a non-trivial parallel preconditioner.</p>
<h3>Requirements</h3>
<p>The student should be familiar with basic linear algebra, i.e. matrices and vectors in general and QR decompositions in particular. Moderate C and C++ knowledge is sufficient. Background in OpenCL (or CUDA) is an advantage. Access to a machine with a mid- to high-range graphics adapter is beneficial, but not mandatory.</p>
<h3>Mentors</h3>
<a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#weinbub">Josef Weinbub</a></div>
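<p>The reason this preconditioner parallelizes so well is that minimizing ||AB - Id|| in the Frobenius norm decouples into one small, independent least-squares problem per column of B. In the standard notation sketched below (chosen here for illustration, not taken from the ViennaCL documentation), b_j denotes the j-th column of B, e_j the j-th unit vector, and J_j the prescribed sparsity pattern of that column:</p>
<pre>
\min_{B} \|A B - I\|_F^2
  \;=\; \sum_{j=1}^{n} \min_{b_j} \|A b_j - e_j\|_2^2
  \qquad \text{subject to } (b_j)_k = 0 \ \text{for } k \notin \mathcal{J}_j .
</pre>
<p>Restricting A to the rows touched by the columns in J_j turns each of these constrained problems into a small dense least-squares problem that can be solved independently, and hence in parallel, via a QR decomposition, which is why QR decompositions appear in the requirements above.</p>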
<h2>ViennaMesh: Aggressive Tetrahedral Mesh Improvement</h2>
<div class="idea-content">
<h3>Description</h3>
<p>To further improve the quality of the generated tetrahedral volume meshes, the open source tool <a href="http://www.cs.berkeley.edu/%7Ejrs/stellar/">Stellar</a> should be investigated. The student should study the source package and develop an interface for ViennaMesh.</p>
<h3>Benefit for the Student</h3>
<p>The student will get hands-on experience with a state-of-the-art mesh improvement tool and recognize the importance of code modularity in scientific software.</p>
<h3>Benefit for the Project</h3>
<p>The unified access to different meshing kernels in ViennaMesh will be enriched by the ability to improve mesh quality as an optional post-processing step.</p>
<h3>Requirements</h3>
<p>The student should have skills in generic programming in C++, such as traits and tag dispatching. Additionally, a basic understanding of interfacing with external C libraries is required.</p>
<h3>Mentors</h3>
<a href="http://www.iue.tuwien.ac.at/cse/index.html#weinbub">Josef Weinbub</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a></div>
<h2>ViennaMesh: Distributed Parallel Meshing of Segments</h2>
<div class="idea-content">
<h3><img src="http://localhost/images/static_content/ideas/2011/femur.png" alt="" align="right" />Description</h3>
<p>Large mesh generation and adaptation tasks for real-world applications, such as the simulation of a thigh bone (<i>femur</i>, see picture), naturally introduce the need for parallelization. For a hull mesh consisting of separate pieces with well-defined interfaces, a volume mesh for the full structure can be obtained by meshing the individual pieces in parallel with serial meshing kernels such as Netgen. The student should accomplish such a parallel volume meshing step based on a small-scale shared-memory approach using Boost.Thread, and on a large-scale distributed-memory approach using Boost.MPI.</p>
<h3>Benefit for the Student</h3>
<p>The student will gain valuable experience with the parallelization of the mesh generation process for large-scale simulations. Obvious and not-so-obvious differences between distributed and shared memory architectures are explored in a hands-on manner.</p>
<h3>Benefit for the Project</h3>
<p>ViennaMesh can access the full power of multi-core CPUs, reducing total meshing times to a bare minimum for the a priori independent interiors of the different segments.</p>
<h3>Requirements</h3>
<p>Good C++ skills are required, accompanied by some basic experience in generic programming, like tag dispatching. Basic skills in multi-threaded programming or in MPI programming are required.</p>
<h3>Mentors</h3>
<a href="http://www.iue.tuwien.ac.at/cse/index.html#weinbub">Josef Weinbub</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#pahr">Dieter Pahr</a></div>
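<p>For the shared-memory variant, the basic structure could look like the sketch below: one thread per hull segment, each invoking a serial meshing kernel. HullSegment, VolumeMesh and mesh_segment are placeholders for the corresponding ViennaMesh/Netgen entities; only the Boost.Thread and Boost.Bind calls are existing library API.</p>
<pre>
// Sketch: mesh all hull segments concurrently, one boost::thread per segment.
// A real implementation would use a thread pool sized to the number of cores.
#include <cstddef>
#include <vector>
#include <boost/thread.hpp>
#include <boost/bind.hpp>

struct HullSegment { int id; /* hull triangles of one segment (placeholder) */ };
struct VolumeMesh  { int num_tets; /* resulting tetrahedra (placeholder) */ };

VolumeMesh mesh_segment(HullSegment const & seg)
{
  VolumeMesh m = { 0 };
  // ... invoke a serial meshing kernel such as Netgen on 'seg' here ...
  return m;
}

void mesh_one(std::vector<HullSegment> const * segments,
              std::vector<VolumeMesh> * volumes, std::size_t i)
{
  (*volumes)[i] = mesh_segment((*segments)[i]);   // each thread writes only its own slot
}

void mesh_all_segments(std::vector<HullSegment> const & segments,
                       std::vector<VolumeMesh>        & volumes)
{
  volumes.resize(segments.size());
  boost::thread_group workers;
  for (std::size_t i = 0; i < segments.size(); ++i)
    workers.create_thread(boost::bind(&mesh_one, &segments, &volumes, i));
  workers.join_all();
}
</pre>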
<h2>ViennaMesh: Hull Mesh Decomposition</h2>
<h3><img src="http://localhost/images/static_content/ideas/2011/distributed-trabecular-bone-biopsy.png" alt="" align="right" />Description</h3>
<p>For many applications, such as the simulation of human bones, a volume mesh is constructed out of a hull mesh. Due to the high complexity of the geometry, as shown for example in the picture to the right, it is advantageous to decompose the hull mesh into smaller segments. Such a decomposition is indicated by the coloring in the figure. Volume meshes are then created in a parallel, distributed manner. The student should provide a C++ interface for a hull mesh decomposition based on simple geometric rules. The decomposed hull mesh then serves as input for a distributed parallel meshing engine as described in the idea "ViennaMesh: Distributed Parallel Meshing of Segments".</p>
<h3>Benefit for the Student</h3>
<p>The student will gain profound knowledge about hull meshes and learn to design a unified interface. Insight into the key aspects of distributed meshing will be gained.</p>
<h3>Benefit for the Project</h3>
<p>While a single hull mesh allows for only a single volume mesher instance, properly decomposed hull meshes can be processed on large computing clusters with hundreds of volume mesher instances in parallel. ViennaMesh will become a valuable tool for large-scale simulations with millions to billions of unknowns.</p>
<h3>Requirements</h3>
<p>Good C++ skills are required, accompanied by some basic experience in generic programming, like tag dispatching. Familiarity with the basic principles of mesh generation is an advantage.</p>
<h3>Mentors</h3>
<p><a href="http://www.iue.tuwien.ac.at/cse/index.html#weinbub">Josef Weinbub</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#pahr">Dieter Pahr</a></p>
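<p>As one example of a "simple geometric rule", the sketch below assigns hull triangles to two segments depending on which side of the bounding-box midplane along the x-axis their centroid lies. The types are placeholders chosen for illustration; a real decomposition would recurse on the resulting pieces and also insert interface facets so that each segment remains a closed hull.</p>
<pre>
// Sketch: split a hull (triangle soup) into two segments by a bounding-box midplane.
#include <vector>

struct Point    { double x, y, z; };
struct Triangle { Point a, b, c; };

void split_hull(std::vector<Triangle> const & hull,
                std::vector<Triangle> & left, std::vector<Triangle> & right)
{
  if (hull.empty()) return;

  double xmin = hull[0].a.x, xmax = hull[0].a.x;
  for (std::size_t i = 0; i < hull.size(); ++i) {          // bounding range along x
    Point const pts[3] = { hull[i].a, hull[i].b, hull[i].c };
    for (int k = 0; k < 3; ++k) {
      if (pts[k].x < xmin) xmin = pts[k].x;
      if (pts[k].x > xmax) xmax = pts[k].x;
    }
  }

  double mid = 0.5 * (xmin + xmax);
  for (std::size_t i = 0; i < hull.size(); ++i) {
    double cx = (hull[i].a.x + hull[i].b.x + hull[i].c.x) / 3.0;   // centroid test
    (cx < mid ? left : right).push_back(hull[i]);
  }
}
</pre>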
<h2>ViennaMesh: Optimized Tool Chain</h2>
<div class="idea-content">
<h3>Description</h3>
<p>ViennaMesh is based on a set of mesh-related tools and algorithms for generation, adaptation, and classification. Due to the diversity of the input geometries, there is no globally optimal tool chain that yields high-quality meshes for all different input meshes. The student should investigate a generic algorithm to find an optimum for the mesh generation tool chain with regard to arbitrary coupled properties, for example reducing the mesh element size while simultaneously increasing the mesh quality.</p>
<h3>Benefit for the Student</h3>
<p>The student will get a deep understanding of the different interfaces of meshing kernels and gain experience in developing a common, unified interface for these. Moreover, C++ skills will be polished considerably during the project.</p>
<h3>Benefit for the Project</h3>
<p>ViennaMesh will become simpler and more convenient to use for library users. In particular, a manual search for optimal parameters by trial and error will be replaced by automatic methods.</p>
<h3>Requirements</h3>
<p>Aside from skills in generic programming in C++, the student should have some basic experience with certain Boost libraries, like Fusion and MPL. Techniques like traits and tag dispatching, as well as basic metaprogramming skills, are required.</p>
<h3>Mentors</h3>
<a href="http://www.iue.tuwien.ac.at/cse/index.html#weinbub">Josef Weinbub</a>, <a href="http://www.iue.tuwien.ac.at/cse/index.html#rupp">Karl Rupp</a></div>