# CE Seminar

Together with the Computational Engineering Research Center of TU Darmstadt, a joint seminar with talks in the field of CE is organized every semester. If you are interested in these seminars and would like to receive invitations, please subscribe to the corresponding mailing list.

## 2016

## Partial differential-algebraic equations from a systems theoretic perspective

### Prof. Dr. Timo Reis, University of Hamburg

**14 Nov 2016, 16:15–17:45; Location: S2|17-103**

We consider linear and time-invariant partial differential-algebraic equations and show that these can be represented as differential-algebraic equations on infinite-dimensional Banach spaces. We will discuss solution theory, decoupling and control by means of several examples in flow and electric circuit theory.

## Modeling and Simulation of Generalized Stefan Problems with Applications in Planetary Exploration

### Dr. Julia Kowalski, RWTH Aachen

**8 Nov 2016, 17:00–18:30; Location: S4|10-1**

We will discuss the modelling and simulation of generalized Stefan problems to describe phase-change processes in water ice. An idealized mathematical system for the classical Stefan problem consists of two free-boundary second-order partial differential equations, one for the solid and one for the liquid phase. The system is closed by assuming local energy balance at the interface, a constraint on the heat flux jump referred to as the Stefan condition. In realistic applications, however, the situation is often more complicated and may, for example, involve convection in the melt or additional forces. This results in complex, mechanically coupled thermo-fluid-dynamical systems that require tailored numerical methods for the arising non-linear PDEs.
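
In a generic one-dimensional notation (not taken from the talk itself), the Stefan condition balances the latent heat released at the moving interface against the jump of the conductive heat flux across it:

```latex
% Stefan condition at the moving interface x = s(t) (1D sketch):
% rho: density, L: latent heat, k_s / k_l: solid / liquid conductivities
\rho L \,\frac{\mathrm{d}s}{\mathrm{d}t}
  = k_s \,\partial_x T_s \big|_{x=s(t)^-}
  - k_l \,\partial_x T_l \big|_{x=s(t)^+}
```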

This presentation focuses on two specific generalizations of the classical Stefan problem along with their numerical solution. Both are inspired by the need for novel simulation methodologies in the context of innovative planetary exploration technologies. The first application addresses contact phase-change processes, in which a heat source is forced onto the ice. This results in a microscale melt film between the heat source and the solid water ice. The second application addresses the coupling of melting and re-freezing processes with natural convection in the liquid melt. For both situations, we will introduce and discuss the mathematical model and a tailored fixed-grid numerical solution strategy. Finally, we will present and discuss simulation results.

## Amorphous Data-parallelism

### Prof. Keshav Pingali, University of Texas, Austin

**22 Sep 2016, 10:00–11:00; Location: S2|02-C110**

Although data-parallelism is ubiquitous in high-performance computing (HPC) algorithms, many algorithms in other areas such as graph analytics, machine learning, and VLSI placement and routing exhibit a more complex kind of parallelism that we call “amorphous data-parallelism.”

In this talk, I describe a simple programming model called the operator formulation of algorithms for specifying amorphous data-parallelism, and a system called Galois for exploiting amorphous data-parallelism. Experimental results show that this is a practical approach for exploiting parallelism in complex, irregular applications that are beyond the capabilities of current commercial systems.

## Demonstration of Energy-Chirp Control in Relativistic Electron Bunches at LCLS Using a Corrugated Structure

### Dr. Karl Bane, SLAC National Accelerator Laboratory (CA, USA)

**12 Sep 2016, 16:15–17:45; Location: S2|17-103**

An experimental study is presented that uses a corrugated structure as a passive “dechirper”, i.e., a device for removing energy chirp, in a high-energy (4.4–13.3 GeV) electron beam at the Linac Coherent Light Source (LCLS) at SLAC. Time-resolved measurements of both longitudinal and transverse wakefields of the device are presented and compared with theory and simulations. In addition, we demonstrate flexible control of the free electron laser (FEL) bandwidth for hard and soft X-rays and present novel uses of the device beyond energy chirp control.

## Simulation of electrical machines – A FEM-BEM coupling scheme

### Dr. Lars Kielhorn, TailSiT GmbH, Graz (Austria)

**25 Jul 2016, 16:15–17:45; Location: S4|10-314**

Electrical machines commonly consist of moving and stationary parts, e.g., an electric motor features a rotor and a stator. If volume-based numerical schemes such as the Finite Element Method (FEM) are applied, the electromagnetic simulation of such devices is a challenging task, since the variation of the geometrical configuration needs to be incorporated into the numerical scheme. In contrast, a coupling scheme based on FEM together with the Boundary Element Method (BEM) neither hinges on re-meshing techniques nor requires a special treatment of sliding interfaces. While the numerics are certainly more involved, the reward is obvious: the modeling costs decrease, and the application engineer is provided with an easy-to-use, versatile, and accurate simulation tool.

## Scientific Benchmarking of Parallel Computing Systems

### Prof. Dr. Torsten Hoefler, ETH Zurich

**12 Jul 2016, 16:00–17:30; Location: S4|10-1**

Measuring and reporting performance of parallel computers constitutes the basis for scientific advancement of high-performance computing (HPC). Most scientific reports show performance improvements of new techniques and are thus obliged to ensure reproducibility or at least interpretability. Our investigation of a stratified sample of 120 papers across three top conferences in the field shows that the state of the practice is not sufficient. For example, it is often unclear if reported improvements are in the noise or observed by chance. In addition to distilling best practices from existing work, we propose statistically sound analysis and reporting techniques and simple guidelines for experimental design in parallel computing. We aim to improve the standards of reporting research results and initiate a discussion in the HPC field. A wide adoption of this minimal set of rules will lead to better reproducibility and interpretability of performance results and improve the scientific culture around HPC.
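
The kind of statistically sound reporting advocated here can be illustrated with a small sketch (the function and the runtime numbers are invented for illustration, not taken from the talk): rather than quoting a single mean speedup, one reports a nonparametric bootstrap confidence interval for the median runtime, which is robust to the occasional slow outlier run.

```python
import random
import statistics

def bootstrap_median_ci(samples, iterations=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the median runtime."""
    rng = random.Random(seed)  # fixed seed for reproducible reporting
    medians = sorted(
        statistics.median(rng.choices(samples, k=len(samples)))
        for _ in range(iterations)
    )
    lo = medians[int((alpha / 2) * iterations)]
    hi = medians[int((1 - alpha / 2) * iterations) - 1]
    return lo, hi

# Hypothetical runtimes (seconds) of one benchmark, repeated 20 times;
# note the two slow outlier runs that would distort a plain mean.
runs = [1.02, 0.98, 1.05, 1.01, 0.99, 1.30, 1.00, 0.97, 1.04, 1.03,
        1.01, 0.96, 1.08, 1.02, 0.99, 1.00, 1.25, 1.01, 0.98, 1.02]
low, high = bootstrap_median_ci(runs)
print(f"median {statistics.median(runs):.3f}s, 95% CI [{low:.3f}, {high:.3f}]")
```

Reporting the interval alongside the median makes it immediately visible whether a claimed improvement is larger than the run-to-run noise.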

## Simulation of wave propagation problems for automated characterization of material parameters and defects

### Steven Vanderkerckhove, KU Leuven

**11 Jul 2016, 16:15–17:45; Location: S2|17-103**

Non-destructive testing and evaluation (NDT&E) by ultrasonic (or electromagnetic) waves is important for the identification of material parameters, for quality assurance and life-time control. Many NDT&E techniques and procedures are available, but generally rely on a considerable amount of a priori knowledge and expertise which are often expensive or even unavailable.

In this presentation, numerical field simulation combined with adjoint-based optimisation is considered as an automated NDT&E technique which can overcome the need for a priori knowledge and expertise. Several techniques for wave speed determination will be evaluated and compared for an elastodynamic example. The problem of determining spatially dependent wave speeds will also be discussed, with an emphasis on how to avoid important pitfalls.

All simulations have been executed using the discontinuous Galerkin finite element method implemented in DOLFIN/FEniCS. The adjoint computations are derived and handled by dolfin-adjoint.

## A new implementation of the MMPDE moving mesh method and applications

### Prof. Weizhang Huang, Ph.D., University of Kansas, Lawrence (USA)

**11 Jul 2016, 14:00–15:30; Location: S4|10-314**

The MMPDE moving mesh method is a dynamic mesh adaptation method for use in the numerical solution of partial differential equations. It employs a partial differential equation (MMPDE) to move the mesh nodes continuously in time and orderly in space while adapting to evolving features in the solution of the underlying problem. The MMPDE is formulated as the gradient flow equation of a meshing functional that is typically designed based on geometric, physical, and/or accuracy considerations. In this talk, I will describe a new discretization of the MMPDE which gives the mesh velocities explicitly, analytically, and in a compact matrix form. The discretization leads to a simple, efficient, and robust implementation of the MMPDE method. In particular, it is guaranteed to produce nonsingular meshes. Some applications of the method will be discussed, including mesh smoothing (to improve mesh quality), generation of anisotropic polygonal meshes, and the numerical solution of the porous medium equation and the regularized long-wave equation.
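
In generic notation (not the talk's own), the gradient-flow formulation moves the mesh so as to decrease a meshing functional:

```latex
% MMPDE as the gradient flow of a meshing functional I[x],
% where x(\xi, t) is the mesh map and tau > 0 a user-chosen
% time scale for the mesh movement:
\frac{\partial x}{\partial t} = -\frac{1}{\tau}\,\frac{\delta I}{\delta x}
```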

## Oscillation in a posteriori error estimation

### Prof. Dr. Andreas Veeser, Università degli Studi di Milano

**7 Jul 2016, 17:00–18:30; Location: S4|10-314**

The goal of an a posteriori error analysis for an approximate PDE solution is to establish the equivalence of error and a posteriori estimator. Unfortunately, this equivalence often holds only up to so-called oscillation terms.

In this talk we shall clarify the reasons for the presence of oscillation and point out shortcomings of common oscillation terms. Moreover, we propose a new approach to a posteriori error estimation in which the oscillation can be bounded by the error and therefore no longer spoils the aforementioned equivalence.
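
For orientation, a standard residual estimator for the Poisson model problem (textbook notation, not the new approach of the talk) is equivalent to the error only up to a data-oscillation term:

```latex
% Typical two-sided bound with oscillation for -\Delta u = f;
% \bar f_T denotes a local polynomial approximation of f on element T:
\|\nabla(u - u_h)\|_{L^2}^2 \;\lesssim\; \eta^2
  \;\lesssim\; \|\nabla(u - u_h)\|_{L^2}^2 + \mathrm{osc}^2,
\qquad
\mathrm{osc}^2 = \sum_{T} h_T^2\, \|f - \bar f_T\|_{L^2(T)}^2
```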

This is joint work with Christian Kreuzer (Bochum).

## Advances in Massively Parallel Electromagnetic Simulation Suite ACE3P

### Dr. Oleksiy Kononenko, SLAC National Accelerator Laboratory/Stanford University

**1 Jul 2016, 10:00–11:30; Location: S2|17-114**

ACE3P is a 3D massively parallel electromagnetic simulation suite that has been developed at SLAC National Accelerator Laboratory over the past decades. This set of codes is based on the finite-element method, so that geometries of complex structures can be represented with high fidelity through conformal grids and high solution accuracy can be obtained through high-order basis functions. Using high performance computing, ACE3P has provided a unique capability for large-scale simulations for the design, optimization and analysis of accelerating structures and systems. Running on state-of-the-art supercomputers, parallel electromagnetic computation has enabled the design of accelerating cavities to machining tolerances and the analysis of accelerator systems to ensure operational reliability. In this talk we give an introduction to ACE3P, present the underlying mathematical models, demonstrate electromagnetic and multiphysics simulation capabilities, and discuss selected applications for particle accelerators and beyond.

## Nanoelectronic coupled problem solutions: methods and applications

### Dr. E. Jan W. ter Maten, Bergische Universität Wuppertal

**27 Jun 2016, 16:15–17:45; Location: S2|17-103**

The project nanoCOPS is a collaborative research project within the FP7-ICT research program funded by the European Union. The consortium comprises experts in mathematics and electrical engineering from six universities (BU Wuppertal, HU Berlin, Brno UT, TU Darmstadt, FH Oberösterreich Hagenberg im Mühlkreis, EMAU Greifswald), a research institute (Max-Planck-Institute Magdeburg), three industrial partners from semiconductor industry (NXP Semiconductors, Eindhoven, the Netherlands; ON Semiconductor, Oudenaarde, Belgium; ACCO Semiconductor, Louveciennes, France) and a vendor for electromagnetic simulation tools (MAGWEL, Leuven, Belgium).

We present an overview of innovative solutions in the steps for nanoelectronic design and coupled simulation: modelling aspects, multirate time integration, model order reduction, uncertainty quantification, robust optimization, fast fault simulation. We illustrate these for several industrial applications.

**Acknowledgement**

NanoCOPS is supported by the EU FP7-ICT-2013-11 Programme under Grant Agreement Number 619166 (Project nanoCOPS – nanoelectronic COupled Problems Solutions). For further details see http://www.fp7-nanocops.eu/.

## Do You Know What Your I/O is Doing?

### Prof. William Gropp, University of Illinois, Urbana-Champaign (USA)

**24 Jun 2016, 10:30–12:00; Location: S2|02-C120**

Even though supercomputers are typically described in terms of their floating point performance, science applications also need significant I/O performance for all parts of the science workflow. This ranges from reading input data, to writing simulation output, to conducting analysis across years of simulation data. This talk presents recent data on the use of I/O at several supercomputing centers and what that suggests about the challenges and open problems in I/O on HPC systems.

## Finite Element Method on uniform and adaptive polygonal and polyhedral meshes

### Prof. Dr. Sergej Rjasanow, Universität des Saarlandes

**30 May 2016, 16:15–17:45; Location: S2|17-103**

In the development of numerical methods to solve boundary value problems, the requirement of flexible mesh handling gains more and more importance. The BEM-based finite element method [1] is one of the new promising strategies which yield conforming approximations on polygonal and polyhedral meshes. This flexibility is obtained by special trial functions which are defined implicitly as solutions of local boundary value problems related to the underlying differential equation, in the spirit of Trefftz [2]. Due to this construction, the approximation space already inherits some properties of the unknown solution. In the realisation, these implicitly defined trial functions are treated by means of boundary element methods (BEM).
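
Schematically, for the Laplace model problem, such an implicitly defined trial function on a polygonal element solves a local boundary value problem (generic notation, not taken from the references):

```latex
% Trefftz-like trial function psi on a polygonal element K,
% with prescribed (e.g. piecewise polynomial) boundary data phi;
% the local problem is solved by BEM rather than by volume meshing:
-\Delta \psi = 0 \ \text{in } K,
\qquad
\psi = \varphi \ \text{on } \partial K
```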

The presentation gives a short introduction to the BEM-based FEM and deals with recent challenges and developments. The basic idea in the construction of trial functions is generalised, and thus trial functions of arbitrary order are obtained [3]. Furthermore, by using a posteriori error estimates, it is possible to achieve optimal rates of convergence even for problems with non-smooth solutions on adaptively refined polygonal meshes [4]. Several numerical examples confirm the theoretical results.

**References**

[1] S. Rjasanow and S. Weißer. FEM with Trefftz trial functions on polyhedral elements. Journal of Computational and Applied Mathematics, 263:202-217, 2014.

[2] E. Trefftz. Ein Gegenstück zum Ritzschen Verfahren. In Proceedings of the 2nd International Congress of Technical Mechanics, Orell Fussli Verlag, 131-137, 1926.

[3] S. Rjasanow and S. Weißer. Higher order BEM-based FEM on polygonal meshes. SIAM Journal on Numerical Analysis, 50(5):2357-2378, 2012.

[4] S. Weißer. Residual error estimate for BEM-based FEM on polygonal meshes. Numer. Math., 118(4):765-788, 2011.

**Acknowledgement**

This talk is based on joint research together with Steffen Weißer.

## Multi-Trace Boundary Element Methods for Scattering

### Prof. Dr. Ralf Hiptmair, ETH Zurich

**9 May 2016, 16:15–17:45; Location: S2|17-103**

We consider the scattering of acoustic or electromagnetic waves at a penetrable object composed of different homogeneous materials; that is, the material coefficients are supposed to be piecewise constant in sub-domains. This makes it possible to recast the problem into boundary integral equations (BIE) posed on the interfaces, which can be discretized by means of boundary elements (BEM). This approach is widely used in numerical simulations and often relies on so-called first-kind single-trace BIE, also known as the PMCHWT scheme in electromagnetics. These integral equations arise directly from Calderón identities, but after BEM discretization they give rise to poorly conditioned linear systems, for which no preconditioner seems to be available so far.

As a remedy we propose new multi-trace boundary integral equations: whereas the single-trace BIE feature unique Cauchy traces on sub-domain interfaces as unknowns, the multi-trace idea takes its cue from domain decomposition and tears the unknowns apart so that local Cauchy traces are recovered. Two of them live on each interface, and thus we dub the methods “multi-trace”. The benefit of localization is the possibility of Calderón preconditioning.

Multi-trace formulations come in two flavors. A first variant, the global multi-trace approach, is obtained from the single-trace equations by taking a “vanishing gap limit” [1]. The second variant is the local multi-trace method and is based on local coupling across sub-domain interfaces [3]. Numerical experiments for acoustic scattering demonstrate the efficacy of Calderón preconditioning.

**References**

[1] X. Claeys and R. Hiptmair, Multi-trace boundary integral formulation for acoustic scattering by composite structures, Communications on Pure and Applied Mathematics, 66 (2013), pp. 1163–1201.

[2] X. Claeys, R. Hiptmair, C. Jerez-Hanckes, and S. Pintarelli, Novel multi-trace boundary integral equations for transmission boundary value problems, in Unified Transform for Boundary Value Problems: Applications and Advances, A. Fokas and B. Pelloni, eds., SIAM, Philadelphia, 2014, pp. 227–258.

[3] R. Hiptmair and C. Jerez-Hanckes, Multiple traces boundary integral formulation for Helmholtz transmission problems, Adv. Comput. Math., 37 (2012), pp. 39–91.

[4] R. Hiptmair, C. Jerez-Hanckes, J.-F. Lee, and Z. Peng, Domain decomposition for boundary integral equations via local multi-trace formulations, in Domain Decomposition Methods in Science and Engineering XXI, J. Erhel, M. Gander, L. Halpern, G. Pichot, T. Sassi, and O. Widlund, eds., vol. 98 of Lecture Notes in Computational Science and Engineering, Springer, Berlin, 2014 (Proceedings of the XXI International Conference on Domain Decomposition Methods, Rennes, France, June 25-29, 2012), pp. 43–58.

**Acknowledgement**

This talk is based on joint research together with X. Claeys (LJLL, UPMC, Paris) and C. Jerez-Hanckes (Pontificia Universidad Católica de Chile, Santiago de Chile).

## Sweeping Preconditioning, Source Transfer and optimized Schwarz Methods

### Dr. Martin Gander, Université de Genève

**2 May 2016, 16:15–17:45; Location: S2|17-103**

Absorbing boundary conditions and perfectly matched layers are not only useful for the truncation of computational domains; they can also be used very effectively to obtain preconditioners. This was first realized in the context of Schwarz methods about 20 years ago, in the research group around Frederic Nataf and Laurence Halpern, and led to the class of optimized Schwarz methods. Around the same time, very similar mathematical concepts also appeared in approximate factorizations, which led to the class of frequency filtering and AILU preconditioners. More recent interest in these methods was sparked by the difficulty of solving Helmholtz and Maxwell problems by iterative methods, and by the introduction of the sweeping preconditioners by Engquist et al. and the source transfer domain decomposition methods by Chen et al. I will present the relation between all these techniques in the context of optimal and optimized Schwarz methods.

## Numerical models of the subsurface – applications and challenges

### Prof. Dr. Andreas Henk, TU Darmstadt

**26 Apr 2016, 17:00–18:30; Location: S4|10-1**

The subsurface holds numerous resources ranging from groundwater and minerals to energy (e.g., geothermal, hydrocarbons), but it can also act as a storage site for CO2 and nuclear waste. Any optimal and safe utilization of the deep subsurface requires a thorough understanding of the thermal, chemical, mechanical and hydraulic processes acting at up to 6 km depth, i.e., at pressures of 150 MPa and temperatures of 200°C. In the past decade numerical models have gained increasing importance in geoscience as a tool to study these processes and for scenario testing. Numerical modeling carried out in the Engineering Geology section at TU Darmstadt focuses primarily on mechanical and hydraulic aspects in order to study the evolution of stress and deformation in the subsurface during both geological history and planned future use.

Following a short introduction, the talk will go through the standard workflow used for subsurface modeling based on finite element techniques and present case studies ranging from gas fields in northern Germany to a demonstration site for CO2 sequestration in Australia. These examples highlight the potential, but also the current challenges, of such numerical simulations: complex model geometries have to be built from surface (fault and horizon) information derived from geophysical measurements, and the parametrization of the models relies on sparse and very local borehole information which has to be upscaled and extrapolated to the model domain. Some measured data are usually available to constrain the numerical simulation results locally, but a quantitative assessment of the model uncertainties and key parameters remains difficult.

## Microstructural Aspects of Flow Simulation in Biomedical and Production Engineering Applications

### Prof. Marek Behr, Ph.D., RWTH Aachen University

**21 Apr 2016, 10:30–12:00; Location: S4|10-314**

Many incompressible flows of engineering interest involve fluids that are governed by complex constitutive relations. For viscoelastic fluids in particular, the advective nature of the constitutive equation requires numerical stabilization. Moreover, the use of three distinct variable fields means that two separate compatibility conditions must be satisfied. A stabilized stress-velocity-pressure formulation of Galerkin/Least-Squares type can provide stability at high Weissenberg numbers, and circumvents the compatibility condition on the velocity and stress interpolations. Further improvement can be achieved using a reformulation based on the logarithm of the conformation tensor. Development of such numerical methods is motivated by challenging applications in production technology, including shape optimization of plastics extrusion dies, and in bioengineering, including blood-handling devices such as heart pumps.
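
The log-conformation reformulation mentioned above can be sketched in generic notation (not the speaker's own): one evolves the matrix logarithm of the conformation tensor, so that the reconstructed tensor stays physically admissible even at high Weissenberg numbers,

```latex
% Log-conformation reformulation (schematic):
% evolve psi instead of the conformation tensor c; then
\psi = \log c
\quad\Longrightarrow\quad
c = \exp(\psi) \ \text{is symmetric positive definite by construction.}
```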

## An isogeometric boundary element method for electromagnetic analysis

### Dr. Robert Simpson, University of Glasgow

**19 Apr 2016, 10:00–11:30; Location: S2|17-103**

## An asynchronous One-shot method for HPC

### Dr. Torsten Bosse, Argonne National Laboratory, U.S.A.

**31 Mar 2016, 17:00–18:30; Location: S4|10-1**

The One-shot method for design optimization problems has been successfully implemented for various physical applications. To this end, a slowly convergent primal fixed-point iteration of the underlying state equation is augmented by an adjoint iteration and a corresponding preconditioned design update. Besides a suitable choice of the preconditioner, a 'correct' sequencing of the steps of the three iterations is essential for an efficient optimization method.

The steps of the three iterations can be performed in various ways, e.g., in parallel, sequentially, or in combinations thereof. Naturally, each of the steps/tasks has to be allocated to a certain amount of computational resources on HPC platforms. This leads to the problem that either parts of the resources need to be reallocated to another task, or some of them are idle for a certain amount of time, causing a loss of performance. Therefore, we will motivate and present an asynchronous, parallel One-shot updating scheme with load balancing in this talk.

The method itself is a generalization of all previously presented One-shot methods. Its local convergence can be enforced by the choice of the design update preconditioner and a suitable load balance. For both, we will provide some ideas on how they should be chosen. Furthermore, we will discuss how this approach can be implemented and outline some problems that might arise in doing so.

## Thread and Data Mapping in Shared Memory Architectures

### Matthias Diener, PhD, Federal University of Rio Grande do Sul (UFRGS), Porto Alegre (Brazil)

**25 Feb 2016, 17:00–18:30; Location: S4|10-1**

Modern parallel architectures have a complex memory hierarchy, which consists of several cache levels and multiple memory controllers per system. In such architectures, the performance of memory accesses from a thread depends highly on the core on which the thread is executing and on the cache or memory controller where the data is located. Therefore, it is important to understand and improve the memory access behavior of parallel applications in order to achieve optimal performance.

In this talk, we present two techniques, thread and data mapping, that optimize the assignment of threads to cores and memory pages to memory controllers, taking into account the memory access behavior of the parallel application. Together, these techniques can achieve significant performance and energy efficiency improvements compared to traditional operating system policies.

## Asynchronous numerical scheme for modeling microwave/plasma couplings

### Dr. Ronan Perrussel, Université de Toulouse

**15 Feb 2016, 16:15–17:45; Location: S2|17-103**

We present an asynchronous method for the explicit integration of multiscale partial differential equations, with a particular focus on microwave-plasma interactions. This method is restricted by a local CFL condition rather than the traditional global CFL condition. Moreover, contrary to other local time-stepping (LTS) methods, the asynchronous algorithm permits the selection of independent time steps in each mesh element. After first developing a scheme of the lowest order in time, we derived an asynchronous Runge-Kutta 2 (ARK2) scheme from a standard explicit Runge-Kutta method.
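
In generic notation (not the talk's own), the distinction is that a synchronous explicit scheme must obey the most restrictive element everywhere, while the asynchronous scheme restricts each element only locally:

```latex
% Global CFL (synchronous)  vs.  local CFL (asynchronous);
% h_K: element size, c_K: local wave speed, C: stability constant
\Delta t \;\le\; C \min_{K} \frac{h_K}{c_K}
\qquad\text{vs.}\qquad
\Delta t_K \;\le\; C\, \frac{h_K}{c_K} \quad \text{for each element } K
```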

## Computational topology and hybrid high order methods for computational electromagnetics

### Prof. Dr. Ruben Specogna, Università di Udine

**8 Feb 2016, 16:15–17:45; Location: S2|17-103**

The efficient numerical solution of eddy current problems, whether formulated with differential or integral techniques, requires some topological pre-processing. Kotiuga, in his pioneering papers in the eighties, showed that computational topology is something one should become familiar with when running into problems related to the design of (electromagnetic) potentials. In particular, one needs to construct a cohomology basis that can be computed in polynomial time.

Yet, computing this basis in practice remained an open problem for more than twenty years, given that all available implementations were too slow to be of any practical use, at least in computational electromagnetics.

We first survey the techniques used for the computation of the first cohomology group generators, which are essential for solving eddy current problems with formulations based on the magnetic scalar potential. We start by reviewing standard algebraic techniques and end with the novel paradigm of “lazy” (co)homology generators and the fast combinatorial techniques to compute them. These simplify the topological pre-processing required by electromagnetic simulations considerably, to the point that the computational effort is reduced by five orders of magnitude. Recent efforts aim to optimize the cohomology basis, for example by reducing the support of representatives of the generators with combinatorial algorithms based on network flows.

Then, we emphasize the recent revival of integral formulations to solve eddy current problems, thanks to state-of-the-art sparsification techniques based on hierarchical matrices and Adaptive Cross Approximation (ACA). The suitable (co)homology generators for volumetric and boundary integral formulations are introduced. We also extend the computation of eddy currents to thin conductors represented by non-manifold triangulated surfaces, which are singular spaces (i.e. they have points whose neighborhoods are not Euclidean) where even Poincaré duality does not hold. Thanks to the theory of stratified spaces, the singular space is decomposed (or stratified) into manifold parts called strata. Certain conditions have to be imposed where the strata meet and we show that these conditions have a clear physical interpretation.

The second part of the presentation shows the first results, on three-dimensional Poisson problems, of the recently introduced discretization technique called the hybrid high order (HHO) method. These methods yield discretizations for general polyhedral grids and of arbitrary order of approximation. They bear similarities with discontinuous Galerkin methods and especially with Brezzi's virtual element method (VEM), but they turn out to be more efficient. At first order they are akin to Finite Integration Technique (FIT)-like discretizations, whereas the link with FIT for high-order elements is still unclear. Given that the solutions of most practical electrostatic problems contain strong singularities, we developed a residual-based error estimator for automatic mesh adaptivity. The solution of full Maxwell problems is ongoing work.

## Finite element error estimates for elliptic Dirichlet-boundary control problems with pointwise state constraints

### Prof. Dr. Ira Neitzel, University of Bonn

**21 Jan 2016, 17:30–19:00; Location: S4|10-1**

PDE constrained optimal control problems with pointwise state constraints are known to cause certain theoretical and numerical difficulties. In this talk, we are concerned with an elliptic Dirichlet boundary control problem with low regularity of the state variable. After discussing first order optimality conditions in KKT-form, we will focus on presenting a priori error estimates for the finite element discretization of such linear-quadratic problems with pointwise state constraints in the interior of the domain. Numerical experiments will be provided.

Joint work with M. Mateos.