Chair on Application Specific Computing

 

Currently open positions: PhD


Our research focuses on significantly improving the performance and accuracy of computing through global optimization across the entire spectrum of numerical methods, algorithm design, software implementation, and hardware acceleration. The potential is enormous: typical scientific applications utilize only about 0.1% of the peak performance of today's computers.

[Figures: displacements in an object under load; time skewing algorithm; parallel access in AoS and SoA]

The layers of this spectrum typically have contradictory requirements, and integrating them poses many challenges. For example, numerically superior methods expose too little parallelism; bandwidth-efficient algorithms convolve the processing of space and time into unmanageable software patterns; high-level language abstractions create data-layout and composition barriers; and high performance on today's hardware imposes strict requirements on parallel execution and data access. High performance and accuracy for the entire application can only be achieved by balancing these requirements across all layers.

Many of these problems are unsolved, and there is great potential for important discoveries for those able to invent new numerical methods and algorithms suited to today's parallel hardware and, vice versa, to adapt the hardware to the needs of the computation. We therefore devote much attention to parallel algorithms on HPC hardware (GPUs, many-core CPUs, FPGAs, custom accelerators) in relation to

  • Data representation (mixed-precision, compression, redundancy)
  • Data access (layout, spatial and temporal locality, coalescing)
  • Data structure (AMR, unstructured grids, graphs, adaptivity)
  • Numerical methods (GMG, AMG, Krylov, preconditioners, FMM, multilevel methods, SVD)
  • Graph algorithms (partitioning, coloring, decomposition, BFS, MST)
  • Programming abstractions (CUDA, Thrust, PSTL, C++2x, TBB, UPC++)

The research overview has more details on finished and ongoing (+) projects.