# CE Seminar

Together with the Computational Engineering Research Center of TU Darmstadt, a joint seminar with interesting talks in the field of CE is organized every semester. If you are interested in these seminars and would like to receive invitations, please subscribe to the corresponding mailing list.

## 2014

## Towards Non-Smooth Efficient Modeling of Large Data

### Prof. Hamid Krim, North Carolina State University, Raleigh (U.S.A.)

**2 Dec 2014, 14:30–15:30; Location: S3|06-257**

High dimensional data exhibit distinct properties compared to their low dimensional counterparts; this commonly causes a performance decrease and a formidable increase in the computational cost of traditional approaches. Novel methodologies are therefore needed to characterize data in high dimensional spaces.

Considering the parsimonious degrees of freedom of high dimensional data compared to its dimensionality, we study the union-of-subspaces (UoS) model as a generalization of the linear subspace model. The UoS model preserves the simplicity of the linear subspace model and enjoys the additional ability to address nonlinear data. We show a sufficient condition for using l1 minimization to reveal the underlying UoS structure, and further propose a bi-sparsity model (RoSure) as an effective algorithm to recover data characterized by the UoS model from errors/corruptions. This framework shows superior performance for a wide range of problems, such as face clustering and video segmentation.
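The core l1 idea can be illustrated with a toy sparse subspace clustering experiment (an illustrative sketch, not the RoSure algorithm itself; the data, the lasso parameter, and the ISTA solver are all assumptions for the example): points drawn from two lines in R^3 are each expressed as a sparse combination of the other points, and the nonzero coefficients concentrate on points from the same subspace, revealing the UoS structure.

```python
import numpy as np

def ista(A, b, lam=0.01, iters=500):
    """Minimize 0.5*||A c - b||^2 + lam*||c||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        g = c - step * (A.T @ (A @ c - b))                        # gradient step
        c = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft threshold
    return c

rng = np.random.default_rng(0)
u1 = np.array([1.0, 0.0, 0.0])               # direction of subspace 1
u2 = np.array([0.6, 0.8, 0.0])               # direction of subspace 2 (not orthogonal)
X = np.column_stack([s * u for u in (u1, u2) for s in rng.uniform(0.5, 2.0, 10)])
labels = np.repeat([0, 1], 10)

correct = 0
for j in range(X.shape[1]):
    c = ista(np.delete(X, j, axis=1), X[:, j])
    lab = np.delete(labels, j)
    # l1 mass should concentrate on points from the same subspace
    if np.abs(c)[lab == labels[j]].sum() > np.abs(c)[lab != labels[j]].sum():
        correct += 1

print(f"{correct}/20 points explained by their own subspace")
```

Because each point is an exact multiple of any other point on its own line, the l1-minimal representation has no reason to use the other subspace, which is what makes the affinity pattern block-diagonal.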

## The Use of Nonlinear Structures for Augmented Performance in Smart Material Systems

### Andres F. Arrieta, PhD, ETH Zurich

**2 Dec 2014, 14:00–15:30; Location: S4|10-414**

Physical phenomena in most engineering fields and applications are inherently nonlinear. Yet most established engineering design tools have been developed for idealised linear systems. In particular, structural nonlinearity in engineering design has traditionally been associated with problems and loss of functionality, often at the expense of overdesign and suboptimal performance. The positive exploitation of geometrical nonlinearity makes it possible to obtain extreme mechanical properties in structural systems. Such novel properties are used for designing structural systems with unconventional capacities. This concept is demonstrated through the design of multi-stable elements embedded within larger systems to augment and create novel behaviour, ultimately leading to enhanced performance. In particular, multi-stability is utilised for designing high-performance smart material systems. Applications are presented in the fields of morphing structures and energy harvesting for the autonomous powering of microelectronic devices.

## Electromagnetic fields, Lorentz force effects and fast current surges in microelectronic protection devices

### Dr. Wim Schoenmaker, Magwel NV (Leuven, Belgium)

**1 Dec 2014, 16:15–17:45; Location: S2|17-103**

Electrostatic discharge (ESD) is a serious design concern in microelectronic device fabrication and usage. Current spikes on the order of amperes, active over a time span of a nanosecond, need to be screened off from the core circuits by ESD protection devices. On-chip silicon controlled rectifiers are used for this purpose. These fast transient effects lead to appreciable induced magnetic fields, and a detailed understanding of the temporal response requires that the induced magnetic fields be included. In this talk we will discuss the transient electromagnetic approach to semiconductor device engineering and, for completeness, also discuss a numerical implementation of the self-induced Lorentz force effects. Although the latter have a small impact on the final results, the computation of these effects is very demanding. In particular, the Newton-Raphson scheme shows peculiar convergence behavior which is not fully understood so far.

## Equivalent polynomial quadrature for XFEM-like enriched formulations

### Prof. Giulio Ventura, Politecnico di Torino, Italy

**30 Oct 2014, 17:00–18:30; Location: S4|10-1**

A powerful property of formulations enriched with discontinuous or singular functions, like XFEM, GFEM and SDA, is the ability to represent features independently of the mesh structure. However, a problematic point in their practical implementation is the quadrature needed to compute the element stiffness. In fact, especially in 3D, the elements are cut by the discontinuities along arbitrary planes, with the consequence that standard quadrature rules cannot be applied.

The method of Equivalent Polynomials maps enrichment functions onto polynomials having the property of reproducing the exact element stiffness. This method and its recent findings will be illustrated for Heaviside function enriched elements in 1D, 2D and 3D.
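The 1D idea can be sketched in a few lines (an illustrative reconstruction by moment matching, not Prof. Ventura's implementation; the discontinuity location, degree, and test polynomial are assumed): the Heaviside function H(x - x0) on [-1, 1] is replaced by a polynomial q with the same moments up to degree d, after which a standard Gauss rule applied to q·p reproduces the exact integral of H·p for any polynomial p of degree at most d, with no sub-cell integration.

```python
import numpy as np

x0, d = 0.3, 2                                   # discontinuity location, max degree

# Moment matching on [-1, 1]:  int q(x) x^k dx = int_{x0}^{1} x^k dx,  k = 0..d
k = np.arange(d + 1)
M = np.array([[(1.0 - (-1.0) ** (i + j + 1)) / (i + j + 1) for j in k] for i in k])
rhs = (1.0 - x0 ** (k + 1)) / (k + 1)
coef = np.linalg.solve(M, rhs)                   # q(x) = sum_j coef[j] * x^j

# For any p with deg p <= d, a standard Gauss rule on q*p gives int H(x-x0)*p(x) dx
p = lambda t: 1 + 2 * t + 3 * t ** 2
xg, wg = np.polynomial.legendre.leggauss(3)      # exact for polynomials of degree <= 5
quad = np.sum(wg * np.polyval(coef[::-1], xg) * p(xg))
exact = 3.0 - (x0 + x0 ** 2 + x0 ** 3)           # int_{x0}^{1} p(x) dx in closed form
print(quad, exact)
```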

## Exploiting Sparsity in Derivative Computations

### Prof. Dr. H. Martin Bücker, Friedrich Schiller University Jena

**28 Oct 2014, 17:00–18:30; Location: S4|10-1**

Derivatives of mathematical functions are needed in various areas of computational engineering. Examples include the solution of nonlinear systems of equations and inverse problems. When the function is given in the form of a computer program, automatic differentiation (AD) enters the picture. In this set of powerful techniques, derivatives are evaluated accurately rather than approximately by divided differences. A common misconception is that AD is not capable of exploiting sparsity of Jacobian or Hessian matrices. However, there is a rich set of AD techniques based on modeling derivative computations by means of coloring various types of graphs. This talk will give an introduction to these techniques and will also present some recent results.
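A minimal sketch of how coloring exploits Jacobian sparsity in forward-mode AD (an assumed toy example, not taken from the talk): the function F below has a tridiagonal Jacobian, so the column groups {0,3}, {1,4}, {2,5} are structurally orthogonal, and three dual-number sweeps recover all nonzeros instead of six.

```python
import numpy as np

class Dual:
    """Forward-mode AD scalar carrying p directional derivatives at once."""
    def __init__(self, val, dot):
        self.val, self.dot = val, np.asarray(dot, dtype=float)
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o, np.zeros_like(self.dot))
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.val * o.dot + o.val * self.dot)
    __rmul__ = __mul__

def F(x):
    """Tridiagonally coupled test function: F_i = 3*x_i^2 - x_{i-1} - x_{i+1}."""
    n = len(x)
    return [3 * x[i] * x[i]
            - (x[i - 1] if i > 0 else 0)
            - (x[i + 1] if i < n - 1 else 0) for i in range(n)]

n, p = 6, 3
color = [i % 3 for i in range(n)]      # columns of equal color never share a row
xval = np.linspace(1.0, 2.0, n)
x = [Dual(xval[i], np.eye(p)[color[i]]) for i in range(n)]

B = np.array([fi.dot for fi in F(x)])  # compressed Jacobian J @ S, shape (6, 3)

J = np.zeros((n, n))                   # de-compress via the known sparsity pattern
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            J[i, j] = B[i, color[j]]

print(np.round(J, 3))                  # diagonal 6*x_i, off-diagonals -1, exact
```

The coloring here is the seed matrix S with S[i, color[i]] = 1; the derivatives are exact, not divided differences, which is the point the abstract emphasizes.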

## Block preconditioners for problems in magma dynamics

### Dr. Garth Wells, Cambridge University

**25 Sep 2014, 16:30–18:00; Location: S4|10-1**

Simulations of realistic Earth science problems are characterised by massive scale and coupled sets of equations. In the case of problems such as subduction zones, geometric effects are central. These features demand numerical methods and software libraries that are flexible and high performance, and the deployment of scalable linear solvers. In working towards the simulation of realistic subduction zones, I will present work on the development of scalable block preconditioners for coupled mantle/magma flow. The preconditioners are proven, through analysis, to be optimal with respect to problem size. Devising methods that are uniform with respect to model parameters and are effective across the necessarily wide range of physical regimes is challenging. A number of numerical examples will be presented, together with an overview of the software tools used to construct preconditioners at a high level. Finally some open issues will be discussed, a number of which are common to preconditioners for H(div) and H(curl) problems.

This is joint work with Sander Rhebergen, Richard Katz and Andy Wathen.

## On Stability of Stationary Points in MPCCs

### Michal Červinka, Ph.D., Charles University, Prague

**1 Sep 2014, 16:15–18:00; Location: S4|10-1**

We consider parameterized mathematical programs with constraints governed by the generalized nonlinear complementarity problem and with additional non-equilibrial constraints. We study the local behavior of stationarity maps that assign to the parameter the respective C- or M-stationarity points of the problem. We provide criteria for the isolated calmness and the Aubin properties of the stationarity maps considered. To this end, we derive formulas for some particular objects of third-order variational analysis, e.g. the coderivative of the coderivative of the normal cone mapping to the nonnegative reals.

Coauthors: Jiri Outrata and Miroslav Pistek

## Reduced Basis Methods for Parametric Problems

### Jun.-Prof. Bernard Haasdonk, University of Stuttgart

**28 Jul 2014, 16:15–17:45; Location: S2|17-103**

In this presentation we will address different aspects of Reduced Basis (RB) methods for parametric partial differential equations (PDEs). This class of model reduction techniques enables the rapid solution of parametric problems in the real-time or many-query context and has gained considerable interest and seen wide development over the last decade. After presenting the fundamentals for stationary elliptic problems (affine parametric assumption, offline-online decomposition, greedy sampling, rigorous certification by efficient a-posteriori error bounds), we will put an emphasis on basis generation, including also instationary problems (Greedy and POD-Greedy procedures). Despite their sampling-based nature, these methods can be proven to be quasi-optimal in a rigorous approximation-theoretic sense. For complex problems, adaptive extensions can also be devised. Sample applications include nonlinear transport problems, multiscale settings and parameter optimization scenarios.

## Accelerated Observers in Electrodynamics – What can they do for you?

### Prof. Stefan Kurz, Robert Bosch GmbH and Tampere University of Technology

**21 Jul 2014, 16:15–17:45; Location: S2|17-103**

We introduce a relativistic splitting structure as a means to map fields and equations of electromagnetism from curved four-dimensional spacetime to three-dimensional observer's space. We focus on a minimal set of mathematical structures that are directly motivated by the language of the physical theory. Space-time, world-lines, time translation, space platforms, and time synchronization all find their mathematical counterparts. The splitting structure is defined without recourse to coordinates or frames. This is noteworthy since, in much of the prevalent literature, observers are identified with adapted coordinates and frames. Among the benefits of the approach is a concise and insightful classification of splitting structures that is juxtaposed to a classification of observers. The application of the framework to the Ehrenfest paradox and Schiff's “Question in General Relativity” further illustrates the advantages of the framework, enabling a compact, yet profound analysis of the problems at hand.

## Exploitation of Structure and Parallelism in ADOL-C

### Prof. Andrea Walther, University of Paderborn

**16 Jul 2014, 17:00–18:30; Location: S4|10-314**

For numerous applications the exploitation of structure and/or parallelism is indispensable for the efficient computation of derivatives and also for the design of new solution approaches. We discuss recent developments integrated in the software package ADOL-C for the algorithmic differentiation of C and C++ codes to utilize such additional information about the function to be differentiated. One aspect is the detection of nondifferentiabilities, used to build an adapted optimizer for nonsmooth problems. Another aspect is the differentiation of already parallelized code. Numerical examples illustrate the benefits resulting from such structure exploitation, both with respect to extended applicability and with respect to computing time.

## Domain decomposition techniques for high-frequency harmonic wave problems

### Prof. Christophe Geuzaine, University of Liège

**14 Jul 2014, 16:15–17:45; Location: S2|17-103**

## Fast Moment Method for Beam Line Simulations

### Dr. Toon Roggen, CERN

**7 Jul 2014, 16:15–17:45; Location: S2|17-103**

Particle accelerators are among the most complicated contemporary devices and consist of a large number of components, each with a specific function. All components need to be tuned with respect to each other to achieve the prescribed particle beam characteristics. Seemingly small deviations from an individual component's design specifications may induce irreversible aberrations of the particle beam characteristics in subsequent accelerator components. Therefore, both fast and accurate beam dynamics simulations of the accelerator as a whole are indispensable during design and operation of a particle accelerator. V-Code is a beam dynamics simulation code based on the moment method and the Vlasov equation, and has the ability to take into account electromagnetic field distributions obtained from Finite Element (FE) and Finite Difference Time Domain (FDTD) simulations. These surrogate field models improve the beam dynamics model set-up significantly. For the extraction of accurate and reproducible surrogate field models from the 3D electromagnetic field simulation results, standardised procedures are developed for radio frequency (RF) cavities, solenoids, steerer magnets, Wien filters, dipole magnets, quadrupole magnets, sextupole magnets and octupole magnets.

Furthermore, V-Code, initially developed for electrons, is extended to the more general case of particles with arbitrary charge and mass. From 3D electrostatic field simulation results of Radio Frequency Quadrupoles (RFQs), surrogate field models dedicated to the RFQ's radial matcher cells, transition cells and cells for particle bunching, focussing and acceleration are derived. Their accuracy and robustness are validated thoroughly, both with theoretical and realistic RFQ models. The four-rod RFQ design for the 600 MeV proton accelerator, a segment of the MYRRHA research reactor planned at SCKCEN, was employed as a realistic RFQ validation model. The surrogate field models are implemented into the Vlasov solver. The extended Vlasov solver, combined with the accurate surrogate field models, calculates the beam dynamics in an RFQ in a few seconds, making the solver a valuable tool for the design and operation of RFQs.

## The Power of Trefftz Methods: Difference Schemes, Absorbing Conditions and Metamaterials

### Prof. Igor Tsukerman, The University of Akron, U.S.A.

**4 Jul 2014, 11:00–12:30; Location: S4|10-314**

In mathematical physics and engineering, Trefftz approximations by definition involve functions that satisfy the underlying differential equation of the problem (as well as the interface boundary conditions). Examples include harmonic polynomials for the Laplace equation; plane waves, cylindrical or spherical harmonics for wave problems; exponential functions for boundary layers; and so on. Trefftz approximations are well established in the context of pseudo-spectral and domain decomposition methods, but this talk calls attention to applications that are less well known or entirely new:

- Finite difference-Trefftz schemes of arbitrarily high order, obtained by replacing the classical Taylor expansions with local Trefftz approximations.

- Numerical and analytical absorbing boundary conditions based on Trefftz approximations.

- Boundary-difference Trefftz methods that are analogous to boundary integral equations but are completely singularity-free.

- Homogenization of electromagnetic and photonic metamaterials: a two-scale theory involving Trefftz approximations on both coarse and fine levels. This explains, in particular, “optical magnets” – artificial magnetism at high frequencies.

This discussion of the versatility and power of Trefftz methods is intended to stimulate their application in many other areas of applied science and engineering.
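The first of these applications, finite difference-Trefftz schemes, can be sketched in 1D (an assumed toy problem, not from the talk): for u'' + k²u = 0, requiring the 3-point stencil to be exact on the local Trefftz basis {cos kx, sin kx} gives u_{i-1} - 2cos(kh)·u_i + u_{i+1} = 0, which reproduces every exact solution to machine precision regardless of the mesh size h.

```python
import numpy as np

k, N = 7.0, 20
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
c = 2.0 * np.cos(k * h)                 # stencil weight derived from the Trefftz basis

# Interior equations u_{i-1} - c*u_i + u_{i+1} = 0 with u(0)=0, u(1)=sin(k)
A = (np.diag(-c * np.ones(N - 1))
     + np.diag(np.ones(N - 2), 1)
     + np.diag(np.ones(N - 2), -1))
b = np.zeros(N - 1)
b[-1] = -np.sin(k)                      # right boundary value moved to the rhs
# (the left boundary value u(0) = 0 contributes nothing)

u = np.empty(N + 1)
u[0], u[-1] = 0.0, np.sin(k)
u[1:-1] = np.linalg.solve(A, b)

err = np.max(np.abs(u - np.sin(k * x)))
print(err)                              # close to machine precision despite kh ~ 0.35
```

A Taylor-based second-order scheme on the same grid would carry an O((kh)²) phase error; the Trefftz stencil has none because the basis already solves the equation.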

## A multigrid approach to optimal control of obstacle problems

### Dr. Michelle Vallejos, TU Chemnitz

**3 Jul 2014, 17:00–18:30; Location: S4|10-1**

An optimal control problem governed by an elliptic variational inequality is considered. We focus on the optimal control of the obstacle problem, which is a prototypical example of variational inequalities of the first kind.

A robust multigrid strategy for solving obstacle problems will be presented. This algorithm is then extended in order to apply the same strategy for solving optimal control of obstacle problems. A collective smoothing multigrid is utilized since it belongs to the family of multigrid strategies which perform well in solving optimal control problems with PDE constraints. The algorithmic concept will be discussed and numerical examples will be presented to illustrate the efficiency of the proposed methods.

## High-performance Computing for Flows in Porous Media

### Prof. Peter Bastian, Ruprecht-Karls-Universität Heidelberg, Interdisziplinäres Zentrum für Wissenschaftliches Rechnen

**26 Jun 2014, 16:30–18:00; Location: S4|10-1**

Simulation of flow and transport processes in porous media provides a formidable challenge and application field for high-performance computing. Relevant continuum-scale models include partial differential equations of elliptic, parabolic and hyperbolic type which are coupled through highly nonlinear coefficient functions. The multi-scale character and uncertainties in the parameters constitute an additional level of complexity but provide also opportunities for high-performance computing.

This talk will focus on the efficient solution of incompressible two-phase flow with a fully-coupled discontinuous Galerkin (DG) based approach that is comparable in efficiency (measured in accuracy per computation time) to simple cell-centered schemes but offers the opportunity to increase arithmetic intensity substantially in the assembly stage as well as the solve phase. For the fast solution of the arising linear systems, a hybrid preconditioner based on subspace correction in the conforming finite element subspace is employed.

In a second example high-order DG methods for density driven flow are investigated with emphasis on scalability w.r.t. the polynomial degree and number of threads on shared memory machines.

## Automation of Industrial Design Processes by Using Optimization and Process Integration

### Prof. Dr. Dieter Bestle, Brandenburgische Technische Universität Cottbus

**22 May 2014, 15:30–17:00; Location: S4|10-314**

In industrial product development, design decisions are mostly based on intensive parameter studies combined with expert knowledge, where in the case of complex technical systems like aero engines typically several experts from different departments have to be involved. However, each department typically uses its own analysis programs with its own data standard, resulting in time-consuming data transfer, and even if each discipline finds its individual optimum, the overall design may not exploit the full optimization potential. Meanwhile, the availability of high computing power allows such design tasks to be integrated into overall optimization processes, where key drivers are optimization-oriented design parameterization, expert-driven problem formulation, the use of multi-criterion optimization concepts to raise acceptance in industry, and the use of response surfaces to account for the high analysis effort. Various examples from aero engine design will demonstrate the applicability and remaining challenges of such an approach.

## Applications in Computational Design & Analysis

### Dr. Jason Wu, The Boeing Company, Seattle, U.S.A.

**20 May 2014, 17:00–18:30; Location: S4|10-1**

Wavy plies or wrinkles in carbon fiber reinforced plastic (CFRP) composite structures can result in performance knockdowns. The performance impact of the ply distortion can depend on many factors, but two commonly desired parameters are the wrinkle height and width. An ultrasonic B-scan of the wrinkle can reveal the ply distortion; however, visual interpretation of the B-scan can vary by person. An automated method to analyze the B-scan for the ply distortions is desirable to provide consistency of measurement. This talk will present two algorithms that have been implemented to process the B-scans for measuring the height and width of a composite wrinkle. Before the technical presentation, I will share my industrial working experiences as a specialist in computational science. If time permits, I will touch on some work related to text mining in the end.

## Innovative compatible discretization techniques for Partial Differential Equations

### Prof. Annalisa Buffa, Institute of Applied Mathematics and Information Technology, Pavia

**19 May 2014, 10:00–11:30; Location: S4|10-1**

## Coupling of Finite and Boundary Element Methods: Applications and Recent Developments

### Dr. Günther Of, TU Graz

**28 Apr 2014, 17:00–18:30; Location: S4|10-1**

The coupling of finite and boundary element methods has been attractive for the numerical solution of boundary value problems of second order partial differential equations for decades. In particular, so-called non-symmetric formulations have been very popular in applications for a long time. Originally, the results on the stability of the related discrete systems were quite unsatisfying, but significant progress has been made in the analysis of non-symmetric formulations in the last few years.

In this talk, recent results on the stability of these formulations are summarized and supported by numerical examples. The use of fast boundary element methods for the coupling is demonstrated for fluid-structure interaction problems within the design of ships.

## Mathematical Modeling of Magnetostrictive Materials

### Mané Harutyunyan and Bernd Simeon, Felix-Klein-Center for Mathematics, University of Kaiserslautern

**28 Apr 2014, 16:15; Location: S2|17/103, Schlossgartenstrasse 8**

Magnetostrictive materials belong to the large class of smart materials, which change their mechanical behavior and properties in response to the application of an external field, such as temperature or a magnetic field. In particular magnetostrictive elastomers, composites consisting of ferrous particles dispersed in an elastomeric matrix, can exhibit large deformations due to an externally applied magnetic field. In recent years, magnetostrictive materials have attracted considerable interest due to their wide application areas, e.g. as variable-stiffness devices and high-strain actuators in mechanical systems, artificial muscles, as well as sensors or actuators in robotics.

From a mathematical point of view, the modeling of magnetostrictive materials involves the strong (two-way) coupling of the magnetic and mechanical fields in the stationary case and a three-field coupling with the electric field in the transient case. By additionally taking into account geometric and material nonlinearities, the modeling can become rather tedious and complicated. Thus, there is a clear need for reduced mathematical models, which provide a better understanding of the complex coupling phenomenon.

This talk is aimed at the description of the magneto-mechanical coupling using a simple model of an isotropic, ferromagnetic Euler-Bernoulli beam. The main focus lies on the mathematical formulation of a coupled magneto-elastic problem by means of a PDE with appropriate boundary conditions. The numerical modeling is performed using the Finite Element Method. Starting with linear material laws and the stationary case, the model is extended step by step, considering also material and geometric nonlinearities as well as time-dependent analysis. To confirm the theoretical results, corresponding simulations carried out with Matlab are presented.

## Software Tool Support for Discrete Adjoint Methods – A Discrete Adjoint Version of OpenFOAM

### Uwe Naumann, Software and Tools for Computational Engineering (STCE), RWTH Aachen

**28 Apr 2014, 15:30–17:00; Location: S4|10-1**

This talk will focus on issues arising in the context of the generation of a discrete adjoint version of OpenFOAM using algorithmic differentiation (AD). The existing discrete adjoint version of OpenFOAM is based on our AD tool dco/c++ (derivative code by overloading in C++). It uses the adjoint MPI (AMPI) library for adjoint message passing. Prototyped at STCE and distributed by the Numerical Algorithms Group (NAG) Ltd., Oxford, UK, dco/c++ is actively used by various academic and industrial partners, including a number of tier-1 investment banks. It uses state-of-the-art C++ for highly generic code exploiting meta-programming techniques, modern software development patterns, and a cache-optimized internal data layout. Support is provided for first- and higher-order tangents and adjoints, as well as interfaces to related tools and libraries.

## Generating performance bounds from source code

### Dr. Krishna Narayanan, Mathematics and Computer Science Division, Argonne National Lab, USA

**22 Apr 2014, 15:45–17:15; Location: S4|10-1**

In this talk he will discuss the source analysis tool PBound, which estimates the upper performance bounds of C and Fortran applications. Built on the ROSE compiler framework, PBound generates parametrized expressions for different types of memory accesses and integer and floating-point computations. Additionally, PBound uses application reuse distance analysis to model memory behavior. Architectural parameters are then incorporated to estimate upper bounds on the performance of the application on the particular system. He will also present validation results for several codes on two architectures and show examples of PBound's use in autotuning. He will also discuss extensions that are planned.

## Uncertainty quantification in a computationally optimised volume conductor model for deep brain stimulation

### Dr.-Ing. Christian Schmidt, University of Rostock

**17 Feb 2014, 16:15–17:00; Location: S2|17-103**

Deep brain stimulation (DBS) has evolved as a widely employed procedure to treat the symptoms of motor skill disorders such as Parkinson’s disease, essential tremor and dystonia. Although successfully employed across various clinical fields, the fundamental mechanisms of action of DBS remain uncertain. Starting in the last decade, many computational models have been developed to gain insight into these mechanisms. One branch of these computational models focuses on the prediction of the volume of tissue activated (VTA) occurring during DBS. However, the parameters of these volume conductor models are subject to uncertainty, and knowledge of how this uncertainty influences the predicted neural activation is scarce. This additional information on the probability distribution of the VTA could help engineers as well as clinicians in evaluating the actual activated area and rating the likelihood of undesired activation, but is computationally intensive to obtain if classical methods such as Monte Carlo simulations are applied.

The polynomial chaos technique (PCT) provides a surrogate model based on a multi-variate polynomial expansion, whose expansion coefficients are determined by multi-dimensional numerical integration. The PCT combined with the application of sparse grids for the numerical integration can substantially reduce the computational expense for the analysis of the probabilistic VTA. In addition, the implemented PCT is non-intrusive, which means that the deterministic model remains unchanged and can be used as a kind of “black box”. The talk will present the implementation of the PCT in combination with a generated finite element model of the human brain to quantify the influence of uncertain model parameters on the uncertainty in the probabilistic VTA.
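The non-intrusive step can be sketched in one dimension (a toy stand-in model, not the DBS finite element simulation): the "black-box" model y = exp(ξ) with ξ ~ N(0,1) is expanded in probabilists' Hermite polynomials He_k, and the coefficients c_k = E[y·He_k]/k! are obtained purely from model evaluations at Gauss-Hermite quadrature points.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

P, nquad = 6, 20
xq, wq = hermegauss(nquad)            # quadrature for weight exp(-x^2/2)
wq = wq / np.sqrt(2 * np.pi)          # normalize to the standard normal density

model = np.exp                        # stand-in for an expensive deterministic solve
yq = model(xq)                        # the only model evaluations needed

# Expansion coefficients c_k = E[y He_k] / E[He_k^2], with E[He_k^2] = k!
coeff = np.array([np.sum(wq * yq * hermeval(xq, np.eye(P + 1)[k])) / factorial(k)
                  for k in range(P + 1)])

# Surrogate statistics follow directly from the coefficients
mean = coeff[0]                                            # = E[exp(xi)] = exp(0.5)
var = sum(coeff[k] ** 2 * factorial(k) for k in range(1, P + 1))
print(mean, np.exp(0.5))              # agree to quadrature accuracy
```

Once the coefficients are known, any statistic of the surrogate (mean, variance, probability of exceeding a threshold) is cheap, which is what makes the probabilistic VTA analysis tractable compared to plain Monte Carlo.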

## Discrete adjoints for MPI-parallelized C++ models with an application to the NASA/JPL Ice Sheet System Model

### Dr. Jean Utke, Argonne National Laboratory, U.S.A.

**6 Feb 2014, 17:00–18:30; Location: S4|10-1**

Computing discrete adjoints by algorithmic differentiation (AD) enables gradient-based optimization for high-dimensional problems. First, I introduce AD principles and implementation options (i.e. operator overloading) relevant to models written in C++ and parallelized with MPI. Then I present our ongoing work with the Ice Sheet System Model (ISSM), developed at NASA/JPL and UC Irvine and used by cryosphere scientists to project the future evolution of polar ice caps such as Greenland or Antarctica. Most of the model-specific AD effort relates to facilitating a type change (for the operator overloading) that is transparent to the developers, particularly to outside contributors unaware of the adjoint capabilities. Two important aspects I discuss are the binding to (external) solvers and the use of the AdjoinableMPI wrapper library to cover adjoining the MPI communication. The presentation will conclude with some performance results showing sources of overhead for the adjoint and options to mitigate them.

## Quantification of uncertainties in low frequency computational electromagnetics: from theory to applications

### Prof. Stéphane Clénet, L2EP, Arts et Métiers ParisTech, Lille

**27 Jan 2014, 16:15–17:45; Location: S2|17-103**

In electrical engineering, the input data (dimensions, material properties, external inputs, etc.) of models are often assumed to be known exactly. The models as well as their outputs are then deterministic. However, in the real world, the input data are often stochastic. The dimensions of any device are known only within a given uncertainty (tolerance) due to imperfections in the manufacturing process (machining, casting, punching, etc.). The characteristics of materials are also time-dependent due to ageing. To account for the variability of the input parameters, the stochastic approach can be used; the inputs as well as the outputs of the model are then random variables or fields. In the seminar, we present methods to solve stochastic finite element problems in electromagnetism. The stochastic approach will be illustrated by some examples in the domains of electrical machines and eddy current non-destructive testing.

## Noise reduction strategies for particle in cell simulations of tokamaks and stellarators

### Prof. Dr. Eric Sonnendrücker, Max-Planck-Institut für Plasmaphysik, Garching

**13 Jan 2014, 16:15–17:45; Location: S2|17-103**

Simulation of micro-turbulence in magnetic fusion plasmas, in stellarators as well as in tokamaks, can be realised efficiently with Particle In Cell (PIC) methods, provided efficient noise reduction is achieved so that noise levels are well below signal levels. This can be done with a control variate method based on a stochastic process associated with the Maxwellian equilibrium of the plasma, which is never very far from the actual distribution function. Specific problems arise in this method when diffusive collisions are added to the model.

After a short description of the physics of interest, we will present our interpretation of noise reduction in PIC simulations and show how it can also be used effectively with a weakly collisional model.
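The control variate idea behind this noise reduction can be illustrated with a 1D toy estimate (an assumed example, not the gyrokinetic code): to compute a moment of a distribution f close to a Maxwellian f_M, sample only the difference f - f_M ("delta-f") and add the analytically known Maxwellian contribution; the Monte Carlo variance then scales with the small deviation rather than with f itself.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.05                                    # small deviation from equilibrium

# f(v) proportional to f_M(v) * (1 + eps*v); sample f_M, carry weights w = f/f_M
v = rng.normal(size=200_000)                  # draws from the Maxwellian f_M
w = 1.0 + eps * v                             # importance weight f / f_M

g = v ** 2                                    # moment to estimate: <v^2>_f

full_f  = np.mean(w * g)                      # plain weighted estimate
maxwell = 1.0                                 # <v^2> under f_M, known analytically
delta_f = maxwell + np.mean((w - 1.0) * g)    # control variate / delta-f estimate

var_full  = np.var(w * g) / len(v)            # estimator variances
var_delta = np.var((w - 1.0) * g) / len(v)
print(delta_f, var_delta / var_full)          # variance ratio well below 1
```

Both estimators are unbiased for the same moment; only the noise differs, which is exactly the role the Maxwellian equilibrium plays as a control variate in delta-f PIC schemes.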