Together with the Computational Engineering Research Center of TU Darmstadt, a joint seminar with talks from the field of CE is organized every semester. If you are interested in these seminars and would like to receive invitations, please subscribe to the corresponding mailing list.
Multigrid methods for structured grids on large-scale supercomputers
Prof. Dr. Matthias Bolten, Bergische Universität Wuppertal
2 Dec 2019, 16:15–17:45; Location: S2|17-103
In many applications in computational science and engineering, the solution of a partial differential equation is sought. These applications often demand a huge amount of compute power or memory, thus requiring the use of supercomputers.
For many problems, multigrid methods are optimal solvers. By optimality we mean that the convergence rate is bounded from above independently of the system size and that the number of arithmetic operations grows linearly with the system size. Multigrid methods have been developed especially for the solution of linear systems that arise when partial differential equations are discretized. They rely on a grid hierarchy that is naturally available when structured grids are used. If this is not the case, other, more expensive techniques like algebraic multigrid have to be used. Besides allowing for the use of computationally cheaper geometric multigrid methods, the presence of structure also enables more efficient implementations on modern computer architectures, including GPUs or vector units in general.
We work on highly scalable multigrid methods for high-performance computers and accelerators. This includes the design and analysis of coarse-grid and grid-transfer operators, as well as the development of smoothers with a special emphasis on scalability, e.g., block smoothers that possess a higher arithmetic complexity and better smoothing properties, resulting in a shorter time to solution. Additionally, for time-dependent problems, parallelization in time is employed.
In the talk our work on block smoothers and analysis techniques will be presented. Further, results on different parallel architectures will be shown, including the solution of parabolic PDEs using parallelization in time.
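To make the grid-hierarchy idea concrete, the following minimal Python sketch implements a two-grid correction scheme for the 1D Poisson equation -u'' = f with homogeneous Dirichlet boundary conditions, using damped Jacobi smoothing and an exact coarse solve. It is an illustrative toy, not the block smoothers or parallel techniques of the talk:

```python
def residual(u, f, h):
    # r = f - A u for the 1D Laplacian with zero Dirichlet boundary values
    n = len(u)
    r = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r[i] = f[i] - (2.0 * u[i] - left - right) / (h * h)
    return r

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    # damped Jacobi smoothing: damps the high-frequency error components
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            new[i] = (1.0 - omega) * u[i] + omega * 0.5 * (left + right + h * h * f[i])
        u = new
    return u

def restrict(r):
    # full weighting, fine -> coarse (coarse point j sits at fine point 2j+1)
    return [0.25 * r[2 * j] + 0.5 * r[2 * j + 1] + 0.25 * r[2 * j + 2]
            for j in range((len(r) - 1) // 2)]

def prolong(e, n_fine):
    # linear interpolation, coarse -> fine
    ef = [0.0] * n_fine
    for j, v in enumerate(e):
        ef[2 * j + 1] += v
        ef[2 * j] += 0.5 * v
        ef[2 * j + 2] += 0.5 * v
    return ef

def solve_tridiag(f, h):
    # direct coarse solve of (1/h^2) tridiag(-1, 2, -1) u = f (Thomas algorithm)
    n = len(f)
    a, b = -1.0 / (h * h), 2.0 / (h * h)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = a / b, f[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = a / m
        dp[i] = (f[i] - a * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

def two_grid(u, f, h, nu=2):
    u = jacobi(u, f, h, nu)              # pre-smoothing
    rc = restrict(residual(u, f, h))     # restrict the residual
    ec = solve_tridiag(rc, 2.0 * h)      # exact coarse-grid correction
    u = [ui + ei for ui, ei in zip(u, prolong(ec, len(u)))]
    return jacobi(u, f, h, nu)           # post-smoothing
```

Repeating `two_grid` yields a convergence rate independent of the mesh width; replacing the direct coarse solve by a recursive call gives the usual V-cycle.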
Multilevel Monte Carlo Methods for the Robust Optimization of Systems Described by Partial Differential Equations
Prof. Stefan Vandewalle, PhD, KU Leuven, Belgium
18 Oct 2019, 13:30–15:00; Location: S2|02-C110
We consider PDE-constrained optimization problems, where the partial differential equation has uncertain coefficients modelled by means of random variables or random fields. The goal of the optimization is to determine an optimum that is satisfactory in a broad parameter range, and as insensitive as possible to parameter uncertainties. First, an overview is given of different deterministic goal functions which achieve the above aim with a varying degree of robustness. Next, a multilevel Monte Carlo method is presented which allows the efficient calculation of the gradient and the Hessian arising in the optimization method. The convergence and computational complexity of different gradient- and Hessian-based optimization methods are then illustrated for a model elliptic diffusion problem with a lognormal diffusion coefficient. We also explain how the optimization algorithm can benefit from taking optimization steps at different levels of the multilevel hierarchy, in a classical MG/OPT framework.
We demonstrate the efficiency of the algorithm, in particular for a large number of optimization variables and a large number of uncertainties.
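The telescoping idea behind multilevel Monte Carlo can be sketched in a few lines. The toy example below (an illustrative stochastic ODE, not the PDE-constrained setting of the talk; the level count and sample sizes are arbitrary choices) estimates E[S(T)] for a geometric Brownian motion, coupling each fine Euler path with a coarse path driven by the same Brownian increments:

```python
import math, random

def euler_gbm(rng, steps, T=1.0, S0=1.0, r=0.05, sigma=0.2):
    """One Euler path of dS = r S dt + sigma S dW; returns (fine, coarse)
    estimates of S(T) driven by the SAME Brownian increments."""
    dt = T / steps
    Sf = S0                    # fine path: `steps` steps of size dt
    Sc = S0                    # coarse path: steps // 2 steps of size 2*dt
    dW_pair = 0.0
    for n in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        Sf += r * Sf * dt + sigma * Sf * dW
        dW_pair += dW
        if steps > 1 and n % 2 == 1:   # advance coarse path every 2 fine steps
            Sc += r * Sc * (2.0 * dt) + sigma * Sc * dW_pair
            dW_pair = 0.0
    return Sf, Sc

def mlmc_mean(L=4, N=(40000, 10000, 5000, 2500, 1250), seed=0):
    """Telescoping MLMC estimator:
       E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}],
    with most samples on the cheap coarse levels."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(L + 1):
        steps = 2 ** level
        acc = 0.0
        for _ in range(N[level]):
            fine, coarse = euler_gbm(rng, steps)
            acc += fine if level == 0 else fine - coarse
        est += acc / N[level]
    return est
```

Because the coupled differences P_l - P_{l-1} have rapidly decreasing variance, only few samples are needed on the expensive fine levels; the same mechanism carries over to gradients and Hessians of the goal functions discussed in the talk.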
M. B. Giles: Multilevel Monte Carlo Methods. Acta Numerica, 24, pp. 259–328, 2015.
A. Van Barel, S. Vandewalle: Robust Optimization of PDEs with Random Coefficients Using a Multilevel Monte Carlo Method. SIAM/ASA Journal on Uncertainty Quantification, 7 (1), pp. 174–202, 2019.
S. G. Nash: A Multigrid Approach to Discretized Optimization Problems. Optimization Methods and Software, 14 (1–2), pp. 99–116, 2000.
Towards Flexible Antenna Measurements and Field Transformations in Arbitrary Environments
Prof. Dr. Thomas Eibert, Technical University of Munich, Germany
26 Sep 2019, 14:00–15:30; Location: S2|17-103
Due to the continuously increasing use of electromagnetic services for communications and sensor functionalities, the accurate and reliable characterization of antennas by measurements becomes increasingly important. Traditionally, antenna measurements have been performed in very specialized measurement chambers, which are very expensive and not very flexible in use. The antennas must be brought into the chamber and the measurements must be performed with great care. Due to reduced size requirements for the chamber, near-field measurements with subsequent near-field far-field transformations have become standard over the past years. A particular requirement of near-field measurements is the need to measure amplitude and phase at a large number of measurement locations, ideally on a closed surface around the test object, where phase coherence must be maintained among all measurement values. Classical near-field far-field transformation approaches were also designed for very specialized and inflexible measurement configurations, such as spherical measurements with equidistant sampling or planar measurements with equidistant sampling. In recent years, near-field far-field transformation approaches have been established which allow for much more flexibility and which, at the same time, give more insight into the radiation mechanisms of the test antennas. With such novel transformation capabilities, completely new measurement scenarios become conceivable; it seems possible that in a couple of years we will have very flexible and portable measurement solutions which “can come” to the antenna, wherever it is, and not vice versa.
Starting from basic considerations of antenna measurements, the presentation will introduce a very flexible and powerful near-field far-field transformation approach, which is able to transform fields measured at arbitrary locations with more or less arbitrary probes. Based on these considerations, the capabilities of this approach will be demonstrated for a variety of near-field measurements, and far-field results and diagnostic capabilities will be discussed. Due to their increasing importance, measurement scenarios for automobiles will be considered, where the automobile is, e.g., located on a metallic ground plane. Since the measurement of coherent phases can be problematic in many applications, the possibility of phaseless measurements with subsequent near-field far-field transformation will be considered, and approaches towards near-field measurements and transformations in fully reflective environments will also be discussed. The presentation will close by looking into concepts of drone-based near-field measurements and transformations.
Thomas F. Eibert received the Dipl.-Ing. (FH) degree in electrical engineering from Fachhochschule Nürnberg, Nuremberg, Germany, the Dipl.-Ing. degree in electrical engineering from Ruhr-Universität Bochum, Bochum, Germany, and the Dr.-Ing. degree in electrical engineering from Bergische Universität Wuppertal, Wuppertal, Germany, in 1989, 1992, and 1997, respectively. From 1997 to 1998, he was with the Radiation Laboratory, Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, MI, USA. From 1998 to 2002, he was with Deutsche Telekom, Darmstadt, Germany. From 2002 to 2005, he was with the Institute for High-Frequency Physics and Radar Techniques of FGAN e.V., Wachtberg, Germany, where he was the Head of the Department of Antennas and Scattering. From 2005 to 2008, he was a Professor of Radio Frequency Technology with the Universität Stuttgart, Stuttgart, Germany. Since 2008, he has been a Professor of High-Frequency Engineering with the Technical University of Munich, Munich, Germany. His current research interests include numerical electromagnetics, wave propagation, measurement and field transformation techniques for antennas and scattering, and all kinds of antenna and microwave circuit technologies for sensors and communications.
On high Reynolds number flows, pressure-robustness and high-order methods
Dr. Alexander Linke, WIAS, Berlin
28 Aug 2019, 16:15–17:45; Location: S4|10-1
An improved understanding of the divergence-free constraint for the incompressible Navier-Stokes equations leads to the observation that a semi-norm and the corresponding equivalence classes of forces are fundamental for their nonlinear dynamics. The recent concept of pressure-robustness makes it possible to distinguish between space discretisations that discretise these equivalence classes appropriately and those that do not. This contribution compares the accuracy of pressure-robust and non-pressure-robust space discretisations for transient high Reynolds number flows, starting from the observation that in generalised Beltrami flows the nonlinear convection term is balanced by a strong pressure gradient. Pressure-robust methods are then shown to outperform comparable non-pressure-robust space discretisations. Indeed, pressure-robust methods of formal order k are comparably accurate to non-pressure-robust methods of formal order 2k on coarse meshes. Investigating the material derivative of incompressible Euler flows, it is conjectured that strong pressure gradients are typical for non-trivial high Reynolds number flows. Connections to vortex-dominated flows are established. Thus, pressure-robustness appears to be a prerequisite for accurate incompressible flow solvers at high Reynolds numbers. The arguments are supported by numerical analysis and numerical experiments.
Are your scientific applications ready for evolving HPC systems?
Prof. Sunita Chandrasekaran, Ph.D., University of Delaware, Newark, USA
25 Jun 2019, 15:30–17:00; Location: S4|10-1
This talk will present interdisciplinary research on the applicability of computer science tools and techniques to real-world scientific applications spanning nuclear physics, molecular dynamics, next-generation sequencing and magnetohydrodynamics (the study of the Sun). As architectures evolve, it becomes increasingly challenging for such real-world scientific applications to exploit the rich resources of these complex hardware architectures. To that end, this talk will present the above case studies and share solutions, including directive-based programming models as well as compiler and runtime strategies, for migrating these applications to high-performance computing nodes.
Sunita Chandrasekaran is currently an Assistant Professor at the University of Delaware in the Dept. of Computer & Information Sciences. She is also affiliated with the Data Science Institute at UD. Her area of research is quite interdisciplinary and spans high-performance computing, computer architecture, and parallel programming. She received the 2016 IEEE-CS TCHPC Award for Excellence for Early Career Researchers in High-Performance Computing. She hosts parallel computing workshops co-located with SC, IPDPS and ISC, serves on numerous technical program committees and has chaired technical tracks at SC, PASC, IPDPS, CCGrid, and ISC.
Isogeometric Design Optimization of Nonlinear 3D Beam Structures for Multi-material 3D Printing
Prof. Dr. Oliver Weeger, TU Darmstadt
18 Jun 2019, 17:00–18:30; Location: S4|10-1
With the capability to locally control the material composition of a structure, multi-material and multi-method 3D printing technologies provide a new level of design freedom beyond the sole realization of complex topologies. However, the precise design and optimization of spatially varying material compositions within a structure is beyond the capabilities of traditional computer-aided design approaches and tools. In this work, we apply the concept of isogeometric design and analysis to efficiently model, simulate and optimize spatially varying material compositions in the context of multi-material additive manufacturing.
To efficiently model and simulate 3D beam structures, we introduce an isogeometric collocation method for the geometrically exact 3D beam model. In addition to discretizing the kinematic variables using NURBS curves, we also parameterize geometric and material design parameters, which enables design optimization of the shape, cross-section geometry and material composition.
In particular, we apply this concept to nonlinear 3D beam structures with axially and transversally varying geometric and material parameters, including non-homogeneous, functionally graded and laminate cross-sections. We demonstrate the applicability of the approach for design optimization of multi-material 3D printed, active rod structures with axially varying material distributions, direct 4D printing of self-assembling, multi-material laminate structures, and soft, nonlinear lattice structures.
From Amdahl’s Law to Memory-Wall: How do we utilize memory systems in the big data era?
Prof. Dr. Xian-He Sun, Illinois Institute of Technology, Chicago, USA
14 Jun 2019, 15:30–17:00; Location: S4|10-1
Amdahl’s law states that the parallel processing gain will diminish quickly if the problem size does not increase with the computing power. The memory-wall problem claims that data access is the performance bottleneck for data-intensive applications; since big data applications are all data-centric, data access has become THE performance concern of computing. In this talk, we first give a short review of the concepts of Amdahl’s law and the memory wall, and then introduce a new approach to memory system design based on concurrent data access. We present the Concurrent-AMAT (C-AMAT) data access model to quantify the unified impact of data locality, concurrency and overlapping (latency hiding), and introduce the pace-matching data-transfer design methodology to utilize memory system performance. A global management system, named Layered Performance Matching (LPM), is then developed to optimize the overall performance of memory systems. C-AMAT shows that data access concurrency is as important as data access locality, but that its main contribution is latency hiding, not bandwidth increase, which is harmfully underutilized in current system design. Experimental testing confirms our theoretical findings, with a 150x reduction of memory stall time.
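For reference, Amdahl's law and the classic (sequential) average memory access time can each be stated in one line; C-AMAT refines AMAT by accounting for concurrent, overlapped accesses, which is beyond this small sketch:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: speedup of a program whose fraction p is parallelizable
    on n processors; the serial fraction (1 - p) bounds the speedup by 1/(1-p)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

def amat(hit_time, miss_rate, miss_penalty):
    """Classic average memory access time, in cycles."""
    return hit_time + miss_rate * miss_penalty
```

Even with a billion processors, a 95% parallel program cannot be sped up beyond a factor of 20; this is the diminishing gain the talk starts from.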
Dr. Xian-He Sun is a University Distinguished Professor of Computer Science at the Department of Computer Science in the Illinois Institute of Technology (IIT). He is the director of the Scalable Computing Software laboratory at IIT and a guest faculty in the Mathematics and Computer Science Division at the Argonne National Laboratory. Before joining IIT, he worked at DoE Ames National Laboratory, at ICASE, NASA Langley Research Center, at Louisiana State University, Baton Rouge, and was an ASEE fellow at Navy Research Laboratories. Dr. Sun is an IEEE fellow and is known for his memory-bounded speedup model, also called Sun-Ni’s Law, for scalable computing. His research interests include data-intensive high-performance computing, memory and I/O systems, software system for big data applications, and performance evaluation and optimization. He has over 250 publications and 6 patents in these areas. He is the Associate Editor-in-Chief of the IEEE Transactions on Parallel and Distributed Systems, a Golden Core member of the IEEE CS society, a former vice chair of the IEEE Technical Committee on Scalable Computing, the past chair of the Computer Science Department at IIT, and is serving and served on the editorial board of leading professional journals in the field of parallel processing. More information about Dr. Sun can be found at his web site www.cs.iit.edu/~sun/.
Similar Size of Slum – An Eigenvalue independent of City, Country and Culture
Prof. Dr.-Ing. Peter Pelz, TU Darmstadt
6 Jun 2019, 17:00–18:30; Location: S4|10-1
In a joint project with DLR, we recently demonstrated that the size of a slum is independent of city, country and culture. The question arises as to why this is so. Natural borders such as coastlines, mountains, rivers or motorways do not serve as explanations. The size must be inherent in the migration dynamics.
We model migration as a reaction-diffusion system by distinguishing between “diffusive” short-distance migration of the poor and the rich and “reactive” long-distance migration. This model of migration dynamics can become Turing unstable. The similar size of slums is an eigenvalue, i.e. an intrinsic scale resulting from the migration behaviour of poor and rich people.
If we continue to think about infrastructure systems for the global South, we must think in the natural scales of the migration system. The first and smallest scale is a human being. The second scale is a family; normally this scale is on the order of Miller's number seven. The third scale is the aforementioned intrinsic scale of the migration dynamics.
If we plan infrastructure politically, technically and economically, we must keep an eye on all three scales.
This research is joint work with John Friesen, Lea Rausch and Jakob Hartig.
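The Turing mechanism invoked here can be checked numerically: a homogeneous steady state that is linearly stable without diffusion becomes unstable for some wavenumber k once the two species diffuse at sufficiently different rates. The sketch below is a generic two-species linear stability check (illustrative Jacobian and diffusivities, not the published slum migration model); it evaluates the dispersion relation via the eigenvalues of J - k² diag(Du, Dv):

```python
import math

def max_growth_rate(J, Du, Dv, k2):
    """Largest real part of the eigenvalues of J - k^2 * diag(Du, Dv),
    i.e. the linear growth rate of a perturbation with squared wavenumber k2."""
    a = J[0][0] - k2 * Du
    d = J[1][1] - k2 * Dv
    b, c = J[0][1], J[1][0]
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc < 0.0:                      # complex pair: real part is tr/2
        return tr / 2.0
    return (tr + math.sqrt(disc)) / 2.0

def is_turing_unstable(J, Du, Dv, k2_grid):
    """Diffusion-driven instability: stable at k = 0, unstable for some k > 0."""
    stable_without_diffusion = max_growth_rate(J, Du, Dv, 0.0) < 0.0
    unstable_with_diffusion = any(max_growth_rate(J, Du, Dv, k2) > 0.0
                                  for k2 in k2_grid)
    return stable_without_diffusion and unstable_with_diffusion
```

For an activator-inhibitor Jacobian such as J = [[1, -1], [2, -1.5]], equal diffusivities never destabilize the steady state, while a sufficiently large diffusivity ratio does; the wavenumber of the fastest-growing mode then sets an intrinsic length scale, which is the role the slum size plays in the talk.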
Peter F. Pelz, John Friesen, and Jakob Hartig: Similar size of slums caused by a Turing instability of migration behavior. Phys. Rev. E, 99, 022302, 2019.
J. Friesen, H. Taubenböck, M. Wurm, and P. F. Pelz: Size distributions of slums across the globe using different data and classification methods. European Journal of Remote Sensing, 2019.
J. Friesen, H. Taubenböck, M. Wurm, and P. F. Pelz: The similar size of slums. Habitat International, 2018.
Linearization of the nonlinear eigenvalue problem
Prof. Dr. Karl Meerbergen, KU Leuven
27 May 2019, 16:15–17:45; Location: S2|17-103
Everybody is familiar with the concept of eigenvalues of an n-by-n matrix. In this talk, we consider the nonlinear eigenvalue problem: problems in which the eigenvalue parameter appears in a nonlinear way in the equation. Over the last decade, the number of applications has been increasing. In physics, the Schroedinger equation for determining the bound states in a semiconductor device introduces terms with square roots of different shifts of the eigenvalue. In mechanical and civil engineering, new materials often have nonlinear damping properties; for the vibration analysis of such materials, this leads to nasty functions of the eigenvalue in the system matrix.
One particular example is the sandwich beam problem, where a layer of damping material is sandwiched between two layers of steel. Another example is the stability analysis of the noise produced by burners in a combustion chamber: the burners lead to a boundary condition with delay terms (exponentials of the eigenvalue).
We often receive the question: “How can we solve a nonlinear eigenvalue problem?” This talk explains the different steps to be taken when using Krylov methods. The general approach works as follows: 1) approximate the nonlinearity by a rational function; 2) rewrite this rational eigenvalue problem as a linear eigenvalue problem; and 3) solve the linear problem by a Krylov method. We explain each of the three steps in detail.
For step 1, we explain the tools from approximation theory that can be used: spectral approximation, potential theory, Adaptive Antoulas–Anderson (AAA), and Padé approximation. For steps 2 and 3, we explain which bases of rational polynomials should be used for the efficient application of Krylov methods. Numerical examples illustrate the different choices.
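Step 2 can be illustrated for the simplest non-trivial case, a quadratic matrix polynomial P(λ) = λ²M + λC + K: the textbook first companion form yields a linear pencil A − λB with the same determinant, hence the same eigenvalues. The sketch below is this generic construction (not the rational Krylov bases of the talk) together with a numerical check of the determinant identity:

```python
def det(M):
    """Determinant by Laplace expansion (fine for the tiny matrices used here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def companion_linearization(M, C, K):
    """First companion form of P(lam) = lam^2*M + lam*C + K:
         A = [[0, I], [-K, -C]],   B = [[I, 0], [0, M]],
    so that det(A - lam*B) = det(P(lam)) for all lam."""
    n = len(M)
    A = [[0.0] * n + [1.0 if i == j else 0.0 for j in range(n)]
         for i in range(n)]
    A += [[-K[i][j] for j in range(n)] + [-C[i][j] for j in range(n)]
          for i in range(n)]
    B = [[1.0 if i == j else 0.0 for j in range(n)] + [0.0] * n
         for i in range(n)]
    B += [[0.0] * n + [M[i][j] for j in range(n)] for i in range(n)]
    return A, B
```

The doubled size is the price of linearity: the generalized eigenvalue problem A x = λ B x can then be fed to a standard (rational) Krylov solver.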
MSO for steel production and manufacturing
Prof. Dr. Dietmar Hömberg, TU Berlin
16 May 2019, 16:15–17:45; Location: S2|17-103
In my presentation I will discuss some results from the European Industrial Doctorate project "MIMESIS – Mathematics and Materials Science for Steel Production and Manufacturing”. The last fifteen years have seen the development of ever more refined high-strength and multiphase steels with purpose-designed chemical compositions allowing for significant weight reduction, e.g., in the automotive industry. The production of these modern steel grades needs precise process control, since there is only a narrow process window in which the desired physical properties are obtained. In combination with component walls getting thinner and thinner, these new steels also place new demands on more precise process control in metal manufacturing processes such as welding and hardening.
In my presentation I will focus on three case studies highlighting MSO strategies related to induction heating applications:
- multi-frequency induction hardening
- high-frequency induction tube welding
- flame cutting of steel plates.
Unconventional numerical frameworks for the simulation of coupled bulk-interface problems
Prof. Dr. Luca Heltai, SISSA – International School for Advanced Studies, Trieste
6 May 2019, 16:15–17:45; Location: S2|17-103
Fluid-structure interaction problems, interface problems, and partial differential equations with interfaces and/or defects often require the solution of coupled bulk-interface problems. In this talk, I will discuss and analyse some of the techniques that can be used to tackle this class of problems, combining Finite Element Methods, Boundary Element Methods, Isogeometric Analysis, and Immersed Boundary Methods.
Large-Scale Sparse Inverse Covariance Matrix Estimation
Prof. Dr. Matthias Bollhöfer, TU Braunschweig
3 Apr 2019, 10:00–11:30; Location: S4|10-1
The estimation of large sparse inverse covariance matrices is a ubiquitous statistical problem in many application areas, such as mathematical finance or geology. Numerical approaches typically rely on maximum likelihood estimation or its negative log-likelihood function. When the underlying Gaussian random field is expected to be sparse, regularization techniques which add a sparsity prior have become popular to address this issue. Recently, a quadratic approximate inverse covariance method (QUIC) has been proposed. The hallmark of this method is its superlinear to quadratic convergence, which makes it one of the most competitive methods. In this talk we present a sparse version (SQUIC) of this method and demonstrate that, using advanced sparse matrix technology, the sparse version of QUIC can easily deal with problems of size one million within a few minutes on modern multicore computers.
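For orientation, the objective minimized by QUIC-type methods is the l1-regularized negative log-likelihood of the precision (inverse covariance) matrix. The sketch below evaluates it for a 2x2 example; the off-diagonal-only penalty is one common convention and an illustrative assumption here, not the SQUIC implementation:

```python
import math

def neg_log_likelihood(Theta, S, lam):
    """Regularized negative log-likelihood for a 2x2 precision matrix Theta
    and sample covariance S:
        -log det(Theta) + tr(S * Theta) + lam * sum_{i != j} |Theta_ij|."""
    det_theta = Theta[0][0] * Theta[1][1] - Theta[0][1] * Theta[1][0]
    assert det_theta > 0.0, "Theta must be positive definite"
    trace_term = sum(S[i][j] * Theta[j][i] for i in range(2) for j in range(2))
    l1 = abs(Theta[0][1]) + abs(Theta[1][0])
    return -math.log(det_theta) + trace_term + lam * l1
```

QUIC minimizes this non-smooth convex objective by a second-order (quadratic approximation) scheme; SQUIC additionally exploits sparse matrix factorizations to make each step scale to very large dimensions.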
C. J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. K. Ravikumar: Sparse inverse covariance matrix estimation using quadratic approximation. In Advances in Neural Information Processing Systems, J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, eds., vol. 24, Neural Information Processing Systems Foundation, 2011, pp. 2330–2338.
M. Bollhöfer, A. Eftekhari, S. Scheidegger, and O. Schenk: Large-Scale Sparse Inverse Covariance Matrix Estimation. SIAM J. Sci. Comput., 41 (1), pp. A380–A401, 2019.
High Performance Block Incomplete LU Factorization
Prof. Dr. Matthias Bollhöfer, TU Braunschweig
1 Apr 2019, 16:00–17:30; Location: S4|10-1
Many application problems that lead to solving linear systems make use of preconditioned Krylov subspace solvers to compute their solution. Among the most popular preconditioning approaches are incomplete factorization methods, either as single-level approaches or within a multilevel framework. We will present a block incomplete triangular factorization that is based on skilfully blocking the system initially and throughout the factorization. This approach allows for the use of cache-optimized dense matrix kernels such as level-3 BLAS or LAPACK. We will demonstrate how this block approach can significantly outperform the scalar method on modern architectures, paving the way for its prospective use inside various multilevel incomplete factorization approaches or other applications where the core part relies on an incomplete factorization.
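The scalar method that the block variant generalizes can be sketched compactly: ILU(0) performs ordinary Gaussian elimination but discards all fill-in outside the sparsity pattern of A. The following sketch illustrates the idea on small dense-stored matrices (illustrative only; a production code would use sparse storage and, as in the talk, blocking for level-3 BLAS kernels):

```python
def ilu0(A):
    """Incomplete LU with zero fill (ILU(0)): Gaussian elimination restricted
    to the nonzero pattern of A, so L + U keeps A's sparsity. Returns both
    factors packed into one matrix: the strict lower triangle holds L (with
    an implied unit diagonal), the upper triangle holds U."""
    n = len(A)
    pattern = [[A[i][j] != 0.0 for j in range(n)] for i in range(n)]
    F = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            if not pattern[i][k]:
                continue
            F[i][k] /= F[k][k]                 # multiplier, stored in L
            for j in range(k + 1, n):
                if pattern[i][j]:              # update only existing entries
                    F[i][j] -= F[i][k] * F[k][j]
    return F
```

For a tridiagonal matrix no fill-in occurs, so ILU(0) coincides with the exact LU factorization; for general sparse matrices the product L·U only approximates A, which is exactly what makes it a cheap preconditioner.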
Continuous and Discontinuous Galerkin methods in fluid dynamics with moving geometries
Prof. Dr. Sabine Roller, University of Siegen
5 Feb 2019, 17:00–18:30; Location: S4|10-1 (lecture room Dolivostraße)
Interaction between structures and the flow around them requires a correct treatment of both. If the computational requirements are high, the methods need to be efficient on modern supercomputers that offer a high number of compute nodes and cores. In structural mechanics, often Finite Element (FE) methods are used, while fluid dynamics (especially for compressible flows) often applies Finite Volume (FV) methods. The best of both worlds is obtained with Discontinuous Galerkin (DG) methods, which are highly appropriate in regions with discontinuous solutions, but also highly accurate in regions with smooth solutions. The variation of h and p (mesh size and order of the polynomial) gives additional freedom to adapt to modern supercomputers (h- and p-adaptation). Nevertheless, thinking about the equations, an additional parameter for adaptation is available (e-adaptation). Adaptation in time (t-adaptation) is also relevant, which needs to go beyond classical time step adaptation. Thinking of the next generation of HPC, the efficient usage of those extremely large systems requires even parallelization in time. At this point, Continuous Galerkin (CG) methods come back into play. This presentation will introduce the interplay of application and numerical method (quality of the solution) as well as the interplay of numerical method and suitability for highly scalable compute systems (co-design), and show some examples of flow around moving geometries (represented as an immersed boundary).
Particle Methods in Bounded Domains
Dr. Matthias Kirchhart, RWTH Aachen
17 Jan 2019, 17:00–18:30; Location: S4|10-314 (seminar room Dolivostraße)
Particle methods like vortex methods or smoothed particle hydrodynamics are numerical schemes that are ideally suited for convection-dominated flow problems. Unlike mesh-based flow solvers, these methods do not suffer from numerical diffusion and have excellent conservation properties. One of the reasons why particle methods are so rarely used in practice today is their difficulty in accurately handling boundary conditions. In this talk we will first give a brief introduction to particle methods and try to illustrate their benefits. We then discuss some of the problems they are facing, with a focus on boundaries. We introduce and describe a new approach to the solution of one of these problems: particle regularisation.
Uncertainty quantification for partial differential equations on random domains
Prof. Dr. Michael Multerer, ICS Institute of Computational Science, Lugano
14 Jan 2019, 16:15–17:45; Location: S2|17-103 (TEMF, Schlossgartenstraße)
The numerical simulation of physical phenomena is very well understood, provided that the input data are known exactly. In practice, however, the collection of these data is usually subject to measurement errors. The goal of uncertainty quantification is to assess those errors and their possible impact on simulation results. In this talk, we address different numerical aspects of uncertainty quantification in elliptic partial differential equations on random domains. Starting from the modeling of random domains via random vector fields, we discuss how the corresponding Karhunen-Loève expansion can be computed efficiently. Moreover, we provide a means to rigorously control the approximation error. Considering Electrical Impedance Tomography as an example, we show how measurement data can be incorporated into the model by means of Bayesian inversion. We provide numerical results to illustrate the presented approach.
A systems guy's view of quantum computing
Prof. Dr. Torsten Hoefler, ETH Zürich
8 Jan 2019, 17:00–18:30; Location: S2|02-C110 (Piloty building)
I will provide an introduction to the general concepts of quantum computation and a brief discussion of its strengths and weaknesses from a high-performance computing perspective. The talk is tailored to a computer science audience with basic (popular-science) or no background in quantum mechanics and will focus on the computational aspects. I will also discuss systems aspects of quantum computers and how to map quantum algorithms to their high-level architecture. I will close with principles of practical implementation of quantum computers.
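From the systems perspective of the talk, the basic computational model is small: a qubit is a normalized state vector, gates are unitary matrices, and measurement probabilities follow the Born rule. A minimal single-qubit sketch (real amplitudes suffice for the Hadamard example; this is generic background, not material from the talk):

```python
import math

def apply_gate(gate, state):
    """Apply a 2x2 single-qubit gate to a 1-qubit statevector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>)/sqrt(2)
H = [[1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0)],
     [1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0)]]

state = apply_gate(H, [1.0, 0.0])   # start in |0>, apply H
probs = [a * a for a in state]      # Born rule: |amplitude|^2
```

An n-qubit state needs 2^n amplitudes, which is why classically simulating such circuits, and mapping algorithms onto the layered architecture of real devices, is itself a high-performance computing problem.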
Torsten Hoefler is an Associate Professor of Computer Science at ETH Zürich, Switzerland. Before joining ETH, he led the performance modeling and simulation efforts of parallel petascale applications for the NSF-funded Blue Waters project at NCSA/UIUC. He is also a key member of the Message Passing Interface (MPI) Forum, where he chairs the “Collective Operations and Topologies” working group. Torsten won best paper awards at the ACM/IEEE Supercomputing Conference SC10, SC13, SC14, EuroMPI'13, HPDC'15, HPDC'16, IPDPS'15, and other conferences. He published numerous peer-reviewed scientific conference and journal articles and authored chapters of the MPI-2.2 and MPI-3.0 standards. He received the Latsis prize of ETH Zurich as well as an ERC starting grant in 2015. His research interests revolve around the central topic of “Performance centric System Design” and include scalable networks, parallel programming techniques, and performance modeling. For additional information, please visit Torsten's homepage at htor.inf.ethz.ch.