Together with the Computational Engineering Research Center of TU Darmstadt, a joint seminar with talks in the field of CE is organized every semester. If you are interested in these seminars and would like to receive invitations, please subscribe to the corresponding mailing list.
Optimal Control of a Free Boundary Problem with Surface Tension Effects
Prof. Harbir Antil, George Mason University, Fairfax (USA)
17 Dec 2013, 11:00–12:30; Location: S4|10-1
We consider a PDE-constrained optimization problem governed by a free boundary problem. The state system couples the Laplace equation in the bulk with a Young-Laplace equation on the free boundary to account for surface tension, as proposed by P. Saavedra and L.R. Scott. This amounts to solving a second-order system both in the bulk and on the interface. Our analysis hinges on a convex constraint on the control such that the state constraints are always satisfied. Using only first-order regularity, we show that the control-to-state operator is twice Fréchet differentiable. We slightly improve the regularity of the state variables and exploit this to show existence of an optimal control together with second-order sufficient optimality conditions. Next we prove optimal a priori error estimates for the control problem and present numerical examples. Finally, we give a novel analysis for a more practical model with Stokes equations in the bulk and slip boundary conditions on the free boundary interface.
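In schematic form, the problem class has the following structure (generic symbols only, not the precise Saavedra–Scott system):

```latex
\min_{(y,u)}\; J(y,u) \;=\; \frac{1}{2}\,\|y - y_d\|^2 \;+\; \frac{\alpha}{2}\,\|u\|^2
\quad\text{subject to}\quad e(y,u) = 0, \qquad u \in U_{\mathrm{ad}},
```

where e(y,u) = 0 stands for the coupled state system (Laplace equation in the bulk, Young-Laplace equation on the free boundary), U_ad is the convex admissible set of controls that keeps the state constraints satisfied, and the tracking target y_d and weight α are generic placeholders.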
Using Automated Performance Modeling to Find Scalability Bugs in Complex Codes
Prof. Dr. Felix Wolf, FZ Jülich
3 Dec 2013, 10:00–11:30; Location: S4|10-1
Many parallel applications suffer from latent performance limitations that may prevent them from scaling to larger machine sizes. Often, such scalability bugs manifest themselves only when an attempt to scale the code is actually being made – a point where remediation can be difficult. However, creating analytical performance models that would allow such issues to be pinpointed earlier is so laborious that application developers attempt it at most for a few selected kernels, running the risk of missing harmful bottlenecks. In this paper, we show how both coverage and speed of this scalability analysis can be substantially improved. Generating an empirical performance model automatically for each part of a parallel program, we can easily identify those parts that will reduce performance at larger core counts. Using a climate simulation as an example, we demonstrate that scalability bugs are not confined to those routines usually chosen as kernels.
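The core idea can be sketched in a few lines: fit a small set of candidate scaling functions to per-region runtime measurements and flag regions whose best-fitting term grows too fast with the core count. (A toy sketch in the spirit of the approach; the measurement data, candidate set, and fitting details are illustrative, not the speaker's actual tool.)

```python
import math

# Hypothetical measurements: runtime (s) of one code region at several
# core counts p. (Illustrative numbers only.)
measurements = [(64, 1.10), (128, 1.43), (256, 1.90), (512, 2.56), (1024, 3.50)]

# Candidate single-term scaling models t(p) ~ c1 + c2 * f(p), a tiny
# subset of the model normal forms such tools search over.
candidates = {
    "log p":   lambda p: math.log2(p),
    "sqrt p":  lambda p: math.sqrt(p),
    "p":       lambda p: float(p),
    "p log p": lambda p: p * math.log2(p),
}

def fit(f, data):
    """Closed-form least squares for t = c1 + c2 * f(p); returns residual too."""
    xs = [f(p) for p, _ in data]
    ys = [t for _, t in data]
    n = len(data)
    mx, my = sum(xs) / n, sum(ys) / n
    c2 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
         sum((x - mx) ** 2 for x in xs)
    c1 = my - c2 * mx
    rss = sum((y - (c1 + c2 * x)) ** 2 for x, y in zip(xs, ys))
    return c1, c2, rss

# Pick the candidate with the smallest residual; a term growing faster
# than, say, log p flags a potential scalability bug in this region.
best = min(candidates, key=lambda k: fit(candidates[k], measurements)[2])
c1, c2, _ = fit(candidates[best], measurements)
print(best, round(c1, 3), round(c2, 3))
```

Repeating this per program region gives the coverage argued for above: every part of the code gets a model, not just hand-picked kernels.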
Tackling the Software Stack for Heterogeneous Multi-Processors: a balancing act between work-load and memory organisation
2 Dec 2013, 14:00–15:30; Location: S4|10-1
In today's economic climate, time to market, or more generally time to solution, is critical. At the same time, mainstream hardware not only becomes massively parallel, it also becomes much more diverse: graphics processors (GPUs), large vector machines such as Intel's Xeon Phi, and even programmable hardware in the form of FPGAs have an increasing presence. While this is very exciting from a computer science perspective, for the application programmer it typically just constitutes an unpleasant hurdle.
We try to overcome this hurdle by working on techniques that lead to tools combining high productivity and high performance. We compile architecture-agnostic code into target-specific high-performance code. Our tools deliver performance close to hand-optimised code for various architectures, including SMPs, GPGPUs, and prototypical many-core machines, without requiring any program changes or annotations.
In this talk, I present some of the key insights gained from looking at the different compilation technologies required when targeting different multi-core systems. I will mainly focus on the role of memory organisation in achieving performance portability. In that context, I present one of our latest developments, in which we use type-guided code generation to change memory layouts for improved code vectorisation.
High Productivity Parallel Programming with SAC and S-Net: from Language Design to Compilers and Runtime Systems
Prof. Dr. Clemens Grelck, University of Amsterdam
2 Dec 2013, 10:00–11:30; Location: S4|10-1
Parallel programming has long been closely tied to high-performance computing. Today's ubiquity of multi-core chip architectures radically changes this: parallel programming is moving from a niche market into the mainstream of computing. At the same time, hardware becomes more and more diverse: varying numbers of cores with complex cache hierarchies, general-purpose graphics accelerators and other heterogeneous architectures like Intel's Xeon Phi accelerator, all with their specific, more or less machine-oriented programming models, challenge today's and even more so tomorrow's programmers. These rapid changes concern experienced HPC programmers and average software engineers alike. Since traditional software no longer automatically benefits from hardware innovation, new programming models are needed that reconcile productivity, portability and performance on modern compute architectures.
We present two complementary high-productivity programming models for parallel systems and their associated tool chains: SAC and S-Net. SAC (Single Assignment C) is a declarative array language that adopts syntactic conventions of C/C++/C#/Java for ease of transition. SAC features multidimensional arrays as abstract values with certain structural properties. Functions receive arrays as argument values and produce new array values as results. How arrays manifest in memory (or whether they do at all) is up to the compiler and runtime system. The abstract view of data, in conjunction with state-free semantics, supports aggressive compiler optimisation for sequential execution performance and fully compiler-directed acceleration on contemporary multi- and many-core architectures.
S-Net is a declarative coordination language for explicit concurrency engineering. S-Net achieves a near-complete separation of concerns between the development of sequential or implicitly parallel algorithmic building blocks by domain experts (e.g. using SAC) on the one hand and the exposure of coarse-grained concurrency to an underlying execution substrate by concurrency experts on the other hand. This substrate turns ordinary sequential code into a streaming network of asynchronous components. While concurrency is dealt with explicitly, algorithmic aspects are thoroughly separated from organisational matters of parallel program execution such as synchronisation and communication. It is exactly the intertwining of these two aspects that is widely considered the main cause for the reputation of parallel programming as being notoriously difficult.
Two-scale simulation of electromagnetic devices using harmonic and pulse-width-modulation time-domain basis functions
Prof. Johan Gyselinck, Université libre de Bruxelles
18 Nov 2013, 16:15–17:45; Location: S2|17-103
Accelerators for Technical Computing?
Dieter an Mey, RWTH Aachen
13 Nov 2013, 16:30–18:00; Location: S4|10-1
“More bang for the buck” is the promise of deploying accelerators attached to commodity host processors for technical computing. Since NVIDIA introduced the CUDA programming model in 2006, more and more programmers have started using graphics processors for technical computing. Amazing success stories reported high speed-ups for applications accelerated by GPUs. But programmers had to find out that GPUs are not particularly easy to program using CUDA and that portability is lost. New programming approaches promising to increase programmability and/or portability are published more frequently than any user may want to adapt his or her program code: OpenCL, PGI Accelerator, OpenACC and finally OpenMP 4.0. After Larrabee was cancelled in 2009, it took Intel until 2012 to come up with a competing product promising to combine performance, portability and programmability: the Xeon Phi coprocessor. We take a holistic view of the deployment of accelerators, which are already part of over 10% of the world's fastest computers listed in the latest Top500 list.
Isogeometric Analysis Methods for Multiphysics Problems
Carlo de Falco, Ph.D., Politecnico di Milano CEN Centro di Nano Medicina, Milan
4 Nov 2013, 16:15–17:45; Location: S2|17-103
The concept of Isogeometric Analysis (IGA) was introduced by Hughes with the aim of bridging the gap between computer-aided design (CAD) and the finite element method. This aim is pursued by adopting the same Non-Uniform Rational B-Spline (NURBS) basis functions to construct both trial and test spaces in the discrete variational formulation of differential problems as are used to design domain geometries in CAD applications. As an additional benefit with respect to standard finite elements, the use of NURBS functions allows the construction of finite-dimensional spaces of higher regularity. In this talk we discuss the general concepts of IGA and show some interesting applications to simulations arising from different fields of physics, including incompressible fluid dynamics, electromagnetics (in particular: accelerator cavities), quantum mechanics and linear elasticity.
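Since IGA builds its trial and test spaces from spline bases, a minimal sketch is the Cox-de Boor recursion for B-spline basis functions, the non-rational building block of NURBS (the knot vector and evaluation point below are illustrative only):

```python
def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion: value of the i-th B-spline basis function
    of degree p at parameter u, for knot vector U."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    val = 0.0
    if U[i + p] != U[i]:           # skip terms with zero-length knot spans
        val += (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    if U[i + p + 1] != U[i + 1]:
        val += (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * \
               bspline_basis(i + 1, p - 1, u, U)
    return val

# Open knot vector for a quadratic (p = 2) basis with 4 functions.
U = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]
p, n, u = 2, 4, 0.25
vals = [bspline_basis(i, p, u, U) for i in range(n)]
# The basis forms a partition of unity; with single interior knots it is
# C^{p-1} across elements -- the higher regularity exploited by IGA.
print(vals, sum(vals))
```

A NURBS basis is then obtained by weighting these functions and normalising, and the same basis parameterises both the CAD geometry and the discrete solution space.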
This talk is provided together with the department of electrical engineering and information technology.
Firedrake and (adjoint) FEniCS: is automatic parallel simulation a mythical beast?
Dr. David Ham, Imperial College London
1 Nov 2013, 13:00–14:30; Location: S4|10-1
Pushing back the frontiers of simulation technology in science and engineering becomes ever more complex. Better simulations of more realistic systems require greater complexity in both equations and numerics, while the shift from increasing processor speed to massive fine-scale parallelism radically increases the complexity of the software implementation. The result, using conventional software development in low-level languages, is that the creation of simulation software is intricate, error-prone and massively labour-intensive.
In this seminar I will present a radically different approach. By employing multiple layers of abstraction between the hardware and the numerics, the Firedrake project achieves a high degree of separation of concerns between the applications, numerics, and parallel performance aspects of simulation software development. The scientist or engineer can write equations in a high-level mathematical language, and the parallel implementation for the hardware available (shared and/or distributed memory, CPU or GPU) is automatically generated at runtime.
In addition, the high-level mathematical structure of the code facilitates the automation of reasoning about the model. We have exploited this to automatically produce adjoint simulations which execute in parallel with near-optimal efficiency. This enables engineers and scientists using our systems to move beyond straightforward simulation to advanced design optimisation techniques, error estimation, stability analysis and data assimilation.
Sound and Vibration from Friction between Soft Materials under Light Loads
Prof. Adnan Akay, BILKENT University, Ankara (Turkey)
24 Oct 2013, 17:00–18:30; Location: S4|10-1
Understanding the properties and consequences of friction under light normal loads is fundamental to further advancing areas such as tactile sensing, haptic systems used in robotic gripping of sensitive objects, and the characterization of products ranging from the softness of fabrics to the effects of surfactants, such as lotions, on skin. In tactile sensing, as a finger is lightly rubbed over a surface, the mechanoreceptors in the dermis become excited and send signals to the brain for processing. Their excitation results from the asperities, adhesion, and other geometric and chemical surface properties that come into contact with the skin. These same sources also give rise to vibration and sound when two surfaces are in sliding contact even under light load, such as a finger pad over a silk fabric. Whereas the mechanoreceptors respond at around 200–300 Hz, the spectrum of the actual sounds and vibrations that are generated can extend beyond these values, thus presenting additional opportunities for surface characterization through acoustic response. A modest body of literature exists on the acoustic response of soft surfaces under friction; however, only a limited number of those studies address friction sounds and vibrations under light loads. Much of the previous work in this area relates to perception and tactile sensing, with limited attention to the generation mechanisms of sound and vibration between soft surfaces. This paper describes a new apparatus to measure friction simultaneously with dynamic quantities such as accelerations, forces, and sound pressures resulting from a light contact over a soft material, much like a friction finger lightly rubbing over a soft material.
First principles applied to submersible maneuvering
Prof. Dr. Hernani Brinati, University of Sao Paulo, Brazil
22 Oct 2013, 17:00–18:30; Location: S4|10-1
An analytical-numerical model based on first principles for representing the motions of a submarine in the horizontal and in the vertical plane is presented. The model is implemented through computational algebra, with its parameters expressed as functions of the main dimensions and form coefficients of the hull and its appendages, which enables its application from the conception stages of the vehicle design onward. The validity of the model is verified through comparison between calculated and available trajectories for course-change and dive maneuvers.
Wang Tilings in Synthesis of Microstructure Informed Enrichment Functions With Application in Trefftz Method
Jan Novak, Ph.D., Czech Technical University in Prague
21 Oct 2013, 16:15–17:45; Location: S2|17-103
Sustainable environmental tendencies lead to highly optimized designs for the majority of consumer products. In materials engineering, this is mirrored by a race towards miniaturization and top product performance. A potential for success is seen in a detailed understanding of the characteristic physical processes taking place in the material microstructure and their incorporation into macro-scale design, which is the goal of Novak's research agenda.
Since their formal definition in 1961, Wang tilings have been the subject of vigorous study in discrete mathematics and have gained extensive attention in computer graphics, the game industry, the theory of quasicrystals, and biology. In the present talk, Dr. Novak addresses yet another application of this tool, combined with image analysis and spatial statistics, to the compression and reconstruction of materials with disordered microstructures. Moreover, the potential of Wang tiles in the synthesis of microstructural enrichment functions for Generalized Finite Element environments will be discussed, accompanied by preliminary outcomes.
Performance Modeling for Performance Autotuning
Dr. Paul Hovland, Argonne National Laboratory, U.S.A.
30 Sep 2013, 17:00–18:30; Location: S4|10-1
We describe our efforts in performance modeling in the context of automatic performance tuning. We consider analytic models constructed through source code analysis, semi-analytic models constructed with a combination of source code analysis and empirical measurement, and fully empirical models. We describe several uses for performance models in conjunction with autotuning, including surrogate construction for algorithm evaluation, surrogate-based search, and bounds analysis for search-space truncation. We focus on models for performance (execution time), but also briefly describe models for power and energy.
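Two of the listed uses, surrogate-based search and bounds analysis for search-space truncation, can be sketched together in a few lines (a toy model; the cost functions, configuration space, and names are hypothetical, not the speaker's actual tooling):

```python
# A cheap surrogate model ranks candidate configurations, and an
# analytic lower bound truncates the search space so that only
# promising configurations are benchmarked empirically.

def measured_time(tile):
    # Stand-in for an expensive empirical benchmark of one configuration.
    return 1.0 / tile + 0.002 * tile + 0.05

def surrogate(tile):
    # Semi-analytic model of the two dominant cost terms.
    return 1.0 / tile + 0.002 * tile

def lower_bound(tile):
    # Each cost term alone is a valid lower bound on the true time.
    return max(1.0 / tile, 0.002 * tile)

tiles = [2 ** k for k in range(1, 9)]       # candidate tile sizes 2..256
best_time, best_tile, evaluated = float("inf"), None, 0
for tile in sorted(tiles, key=surrogate):   # surrogate-based search order
    if lower_bound(tile) >= best_time:      # bounds analysis: prune
        continue
    t = measured_time(tile)                 # empirical evaluation
    evaluated += 1
    if t < best_time:
        best_time, best_tile = t, tile
print(best_tile, evaluated, len(tiles))     # only a few runs needed
```

The surrogate orders the candidates so a good configuration is found early, after which the bound proves most remaining configurations cannot win and they are skipped without being run.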
Numerical simulation of Bingham fluids by semismooth Newton methods: a general survey
Prof. Sergio Gonzalez, Ph.D., Escuela Politécnica Nacional Quito (Ecuador)
26 Sep 2013, 17:00–18:30; Location: S4|10-1
In this talk we present a new approach to the numerical simulation of Bingham fluids, based on semismooth Newton methods. From pipe flow to thermally convective flow, we present the challenges involved and the modeling and numerical tools to overcome them.
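The nonsmoothness that motivates semismooth Newton methods stems from the Bingham constitutive law, which in one common form reads:

```latex
\sigma \;=\; 2\mu\,\varepsilon(u) \;+\; \tau_y\,\frac{\varepsilon(u)}{|\varepsilon(u)|}
\quad \text{if } \varepsilon(u) \neq 0,
\qquad
|\sigma| \;\le\; \tau_y \quad \text{if } \varepsilon(u) = 0,
```

with plastic viscosity μ, yield stress τ_y and strain-rate tensor ε(u): the material flows only where the stress exceeds the yield stress. This yields a problem that is nondifferentiable in the classical sense but semismooth, which is exactly the structure such Newton methods exploit.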
Adaptive Asynchronous Parallel Calculations at Petascale Using Uintah
Prof. Dr. Martin Berzins, University of Utah
17 Sep 2013, 17:00–18:30; Location: S4|10-1
The past, present and future scalability of the Uintah software framework is considered with the intention of describing a successful approach to large-scale parallelism. Uintah allows the solution of large-scale fluid-structure interaction problems through the use of fluid flow solvers coupled with particle-based finite element methods for solids. In addition, Uintah uses a combustion solver to tackle a broad and challenging class of turbulent combustion problems. A unique feature of Uintah is its asynchronous task-based approach with automatic load balancing for solving complex problems using techniques such as adaptive mesh refinement. At present, Uintah is able to make full use of present-day massively parallel machines as the result of three phases of development over the past dozen years. These development phases have led to an adaptive, scalable runtime system capable of independently scheduling tasks to multiple CPU cores and GPUs on a node. For incompressible low-Mach-number applications it is also necessary to use linear solvers and to consider the challenges of radiation problems. The approaches adopted to achieve the present scalability are described and their extensions to possible future architectures are considered.
An Efficient Layerwise Finite Element Model for Active Vibration Control of Piezolaminated Composite Shells considering Strong Electric Field Nonlinearity
Prof. Santosh Kapuria, Indian Institute of Technology Delhi
18 Jul 2013, 17:00–18:30; Location: S4|10-1
In this work, we present a computationally efficient finite element (FE) model for statics, dynamics and vibration control of smart composite and sandwich shallow shells integrated with piezoelectric sensors and actuators, considering their nonlinear characteristics under strong electric field. The nonlinearity is modeled using the rotationally invariant nonlinear constitutive equations of Tiersten (1993), with the assumption of large electric field and small strains. The FE is based on the fully coupled zigzag theory, which has an accuracy similar to layerwise theories but retains the economy of equivalent single-layer theories, with only five displacement unknowns. The nonlinear equations are derived using the extended Hamilton's principle of virtual work. For static analysis, these equations are solved using the direct iteration method. For active vibration control, the FE model is transformed to the reduced-order modal space considering the first few modes and expressed in state-space form. The resulting nonlinear control problem with an LQG controller is solved using the feedback linearization approach. The results predicted by the nonlinear model compare very well with the experimental data available in the literature for static response. The effect of the piezoelectric nonlinearity on the static response and active vibration control is studied for piezoelectric bimorphs as well as hybrid laminated plates with isotropic, composite and sandwich substrates. The linear model significantly overestimates the peak control voltage required to achieve a given settling time. While in the linear model the control voltage for a given settling time is almost independent of the actuator thickness, its nonlinear prediction reduces significantly with decreasing actuator thickness.
Finite Element Based Deformable Object Tracking from a Single Viewpoint
Jochen Lang, Ph.D., University of Ottawa
10 Jul 2013, 14:00–15:30; Location: S3|05-074
In this talk, I will present our recent work on robustly tracking objects that deform due to an externally applied force. The geometry of the object is reconstructed over time based on noisy observations from a single viewpoint. A template mesh of the object in its rest state is fit to observations in a nonlinear optimization. We use a redundant parameterization of smoothly varying local mesh transformations. While for the observed part of the object the data term guides the optimization, the unobserved parts of the mesh are governed only by the smoothness term. In a second optimization, we improve the location of the unobserved vertices based on elastic solid deformation solved with finite elements. Synthetic results illustrate the ability of our method to estimate accurate deformations of observed and unobserved parts of an object despite incomplete, noisy measurements. We will demonstrate results for deformations recorded as low-quality point clouds captured either with a commercial stereo camera or a structured-light system. This is joint work with Stefanie Wuhrer, Motahareh Tekieh and Chang Shu.
This talk is provided together with the department of Computer Science.
MPI + MPI: a new hybrid approach to parallel programming with MPI plus shared memory
Prof. Torsten Höfler, ETH Zurich
9 Jul 2013, 17:00–18:30; Location: S4|10-1
Hybrid parallel programming with the Message Passing Interface (MPI) for internode communication in conjunction with a shared-memory programming model to manage intranode parallelism has become a dominant approach to scalable parallel programming. While this model provides a great deal of flexibility and performance potential, it saddles programmers with the complexity of utilizing two parallel programming systems in the same application. We introduce an MPI-integrated shared-memory programming model that is incorporated into MPI through a small extension to the one-sided communication interface. We discuss the integration of this interface with the MPI 3.0 one-sided semantics and describe solutions for providing portable and efficient data sharing, atomic operations, and memory consistency. We describe an implementation of the new interface in the MPICH2 and Open MPI implementations and demonstrate an average performance improvement of 40% in the communication component of a five-point stencil solver.
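The appeal of the extension is that ranks on a node obtain direct load/store access to one common buffer. As a rough stdlib analogy only (this is Python's named shared memory, not the MPI interface discussed in the talk):

```python
# Two handles to one named shared-memory segment: a store through one
# handle is a direct load through the other, the access pattern that
# MPI-3 shared-memory windows expose to ranks on the same node.
from multiprocessing import shared_memory

# One "rank" creates a node-local segment...
a = shared_memory.SharedMemory(create=True, size=8)
# ...and another attaches to the same physical memory by name
# (both handles live in one process here, purely for brevity).
b = shared_memory.SharedMemory(name=a.name)

a.buf[0] = 42          # store through the first handle...
value = b.buf[0]       # ...is visible as a direct load through the second
b.close()
a.close()
a.unlink()
print(value)
```

In the MPI-integrated model the analogous allocation is collective over a node-level communicator, and visibility of such loads and stores is governed by the MPI 3.0 one-sided memory-consistency rules rather than by the operating system alone.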
Modeling flows in complex geometries using volume penalization: analysis of the penalized Stokes operator and applications to the Navier-Stokes equations
Prof. Kai Schneider, University Aix-Marseille
4 Jul 2013, 17:00–18:30; Location: S4|10-1
Penalization approaches are nowadays commonly employed to solve boundary or initial-boundary value problems. They consist in embedding the original, possibly complex spatial domain inside a bigger domain having a simpler geometry, for example a torus, while keeping the boundary conditions approximately enforced thanks to new terms that are added to the equations [Pesk02]. One particular example is the volume penalization method [ABF99] which, inspired by the physical intuition that a solid wall is similar to a vanishingly porous medium, uses the Brinkman-Darcy drag force as penalization term.
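Written out for the incompressible Navier-Stokes equations, the volume-penalized system of [ABF99] takes the common form:

```latex
\partial_t u \;+\; (u\cdot\nabla)u \;+\; \nabla p \;-\; \nu\,\Delta u
\;+\; \frac{\chi_s}{\eta}\,(u - u_s) \;=\; 0,
\qquad \nabla\cdot u = 0,
```

where χ_s is the characteristic function of the solid obstacle, u_s its velocity, ν the kinematic viscosity, and η > 0 the penalization (permeability) parameter; as η → 0 the no-slip condition is recovered on the obstacle.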
The main advantage of such penalized equations is that they can be discretized independently of the geometry of the original problem, since the latter has been encoded into the penalization terms. Such a simplification permits a massive reduction in solver development time, since it avoids the issues associated with the design and management of the grid, allowing for example the use of simple spectral solvers in Cartesian geometries. The gain becomes even more substantial when the geometry is time-dependent, as in the case of moving obstacles [KS09], or when fluid-structure interaction is taken into account.
We present results of a detailed study [NKS12] of the spectral properties of the Laplace and Stokes operators, modified with a volume penalization term designed to approximate Dirichlet conditions in the limit when the penalization parameter η tends to zero. The eigenvalues and eigenfunctions are determined either analytically or numerically as functions of η, both in the continuous case and after applying Fourier or finite difference discretization schemes. For fixed η, we find that only the part of the spectrum corresponding to eigenvalues λ ≤ η⁻¹ approaches Dirichlet boundary conditions, while the remainder of the spectrum consists of uncontrolled, spurious wall modes. The penalization error for the controlled eigenfunctions is estimated as a function of λ and η. Surprisingly, in the Stokes case, we show that the eigenfunctions approximately satisfy, with a precision O(η), Navier slip boundary conditions with slip length equal to √η. Moreover, for a given discretization, we show that there exists a value of η, corresponding to a balance between penalization and discretization errors, below which no further gain in precision is achieved. These results shed light on the behavior of volume penalization schemes when solving the Navier-Stokes equations, outline the limitations of the method, and give indications on how to choose the penalization parameter in practical cases. Possible extensions to deal with Neumann boundary conditions will also be presented [KKAS12]. Finally, different illustrations will be given for vortex-dipole wall interactions [NFS11], flapping insect wings, fluid-structure interaction [KMFS11] and a dynamical mixer [KKAS12].
Joint work with Romain Nguyen van yen (FU Berlin) and Dmitry Kolomenskiy (McGill, Montreal)
[ABF99] P. Angot, C.-H. Bruneau, and P. Fabrie. A penalization method to take into account obstacles in incompressible viscous flows. Numer. Math., 81, 497–520, 1999.
[KKAS12] B. Kadoch, D. Kolomenskiy, P. Angot and K. Schneider. A volume penalization method with moving obstacles for Navier-Stokes with advection-diffusion equations. J. Comput. Phys., 231(12), 4365–4383, 2012.
[KS09] D. Kolomenskiy and K. Schneider. A Fourier spectral method for the Navier-Stokes equations with volume penalization for moving solid obstacles. J. Comput. Phys., 228, 5687–5709, 2009.
[KMFS11] D. Kolomenskiy, H.K. Moffatt, M. Farge and K. Schneider. Two- and three-dimensional numerical simulations of the clap-fling-sweep of hovering insects. J. Fluids Struct., 27, 784–791, 2011.
[NFS11] R. Nguyen van yen, M. Farge and K. Schneider. Energy dissipating structures in the vanishing viscosity limit of planar incompressible flows with solid boundaries. Phys. Rev. Lett., 106, 184502, 2011.
[NKS12] R. Nguyen van yen, D. Kolomenskiy and K. Schneider. Approximation of the Laplace and Stokes operators with Dirichlet boundary conditions through volume penalization: A spectral viewpoint. Preprint, 2012. http://arxiv.org/abs/1206.0002
[Pesk02] C.S. Peskin. The immersed boundary method. Acta Numerica, 11, 479–517, 2002.
Unstructured high order discretization schemes: an enabling technology for reliable LES in an industrial context
Dr. Koen Hillewaert, Cenaero – Simulation technologies for Aeronautics, Gosselies (Belgium)
25 Jun 2013, 17:00–18:30; Location: S4|10-1
The simulation of turbulent flows by the Large Eddy Simulation (LES) or Direct Numerical Simulation (DNS) approaches requires extremely low numerical dispersion and dissipation. Finite-element-like high-order methods such as the discontinuous Galerkin method (DGM), the spectral difference method (SDM) and the spectral element method (SEM) have attracted considerable interest lately, since they seem to bridge the gap between the accuracy of dedicated academic flow solvers and the geometric flexibility of industrial solvers. Currently DGM is among the most mature methods in this class. Next to very interesting dispersion and dissipation properties, it provides computational efficiency and a simple way of checking grid resolution without requiring additional computations. These advantages make DGM a powerful tool for the high-fidelity simulation of transitional and turbulent flows.
The development of a CFD code based on DGM for industrial LES is discussed, from its assessment for DNS to the preliminary development of adequate subgrid-scale (SGS) models for LES. The code is first assessed against academic codes on canonical benchmarks and subsequently applied to more complex configurations, illustrating the potential of DGM for DNS and LES in industry.
The first part of the talk concerns the assessment for DNS. First of all, a detailed comparison with finite difference methods is performed on the simulation of the transition of the Taylor-Green vortex at Re=1600. A second academic application concerns the 2D periodic hill at Re=2800. These results are put into perspective with respect to the results presented by other authors at the first and second International Workshop on Higher Order Methods for CFD. The method is applied to the DNS of the flow around a low pressure turbine blade.
The second part of the talk discusses the development and assessment of industrially practical LES modeling strategies. Validation is performed on canonical benchmarks such as homogeneous turbulence at very high Reynolds number and channel flow. The focus lies on simple, local models applicable in complex geometries. Typical dynamic parameter-tuning procedures can therefore not be considered, as they require one or more homogeneous directions. Currently the implicit LES (ILES) approach is assessed. This approach assumes that the numerical dissipation of the method is sufficiently well targeted to provide an adequate SGS model. Its main advantage is that it does not require tuning or regime-specific calibration. The ILES approach is tested on a transitional airfoil and compared to DNS, showing excellent agreement at a much smaller computational cost.
Emotion Recognition, 3-D Obstacle Detection, and Human Activity Monitoring
Prof. Salim Bouzerdoum, University of Wollongong, Australia
21 Jun 2013, 10:00–11:30; Location: S3|06-257
In this seminar, we will discuss a number of recent advances in visual pattern recognition and image classification. In particular, we will present three applications of visual pattern recognition: to emotion recognition, 3-D vision for assistive navigation, and human activity monitoring using Doppler radar. One of the challenges of visual pattern recognition is robustness to photometric and geometric distortions that occur in uncontrolled natural environments. We present a new paradigm for image recognition which combines feature extraction and classification in one hierarchical image classification architecture, and discuss its applications to face detection, gender recognition and emotion recognition.
Recently, we developed a 3-D vision system as an assistive navigation tool for the blind. The system uses a 3-D range camera for scene segmentation, pedestrian classification, and object tracking. In this seminar we discuss some results on 3-D object segmentation, localisation and classification. The third part of the seminar presents an image-based approach for the classification of micro-Doppler radar signatures. Identification of human activity using Doppler radar is emerging as a very important research area due to its potential civilian and military applications, including surveillance, search and rescue, and health care monitoring. In this seminar we present some results on human motion recognition.
Adaptive Discontinuous Galerkin Methods for nonlinear Diffusion-Convection-Reaction Models
Prof. Dr. Bülent Karasözen, Middle East Technical University, Ankara (Turkey)
20 Jun 2013, 17:00–18:30; Location: S4|10-1
Many engineering problems, such as chemical reaction processes, population dynamics, and groundwater contamination, are governed by coupled diffusion-convection-reaction partial differential equations (PDEs) with nonlinear source or sink terms. Nonlinear transport systems are typically convection- and/or reaction-dominated, with characteristic solutions possessing sharp layers. Discontinuous Galerkin methods produce stable solutions that overcome the spurious oscillations of convection-dominated problems. In this talk we present the application of adaptive discontinuous Galerkin methods to convection-dominated models containing quadratic and Monod-type reaction terms. Numerical results for single equations and coupled systems arising in several applications demonstrate the accuracy and efficiency of the adaptive DGFEM.
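Schematically, with generic symbols, the model class for one species reads:

```latex
\partial_t u \;-\; \epsilon\,\Delta u \;+\; \beta\cdot\nabla u \;+\; r(u) \;=\; f,
```

which is convection-dominated when the diffusion coefficient ε is small compared with the convection field β; typical nonlinear reaction terms are quadratic, r(u) = k u², or Monod-type, r(u) = k₁u / (k₂ + u), with rate constants k, k₁, k₂ as placeholders.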
Simulation and Optimal Control of Time-Periodic PDE Problems
Prof. Dr. Ulrich Langer, Johannes Kepler University, Linz
19 Jun 2013, 17:15–18:45; Location: S4|10-1
This talk is provided together with the department of mathematics.
Highly accurate impedance boundary conditions for thin conducting sheets
Dr. Kersten Schmidt, TU Berlin
17 Jun 2013, 10:30–12:00; Location: S2|17-103
Shielding sheets are commonly used in the protection of electronic devices. With their large aspect ratios they become a serious issue for the direct application of the finite element method, as many small cells are required to resolve the sheets, as well as for the direct application of the boundary element method (BEM), due to the occurring almost singular integrals. Impedance transmission conditions (ITCs), posed on the sheet mid-line Γ, allow for finite element formulations in the exterior of the sheet mid-line, or boundary element formulations on this mid-line only. We introduce the ideas behind the commonly used ITCs for the time-harmonic eddy current problem in two dimensions. We show by an asymptotic analysis that all the classical models (except PEC) are robust with respect to the skin depth or frequency, but are all only of order 0.
By asymptotic expansions we derive two families of impedance transmission conditions, ITC-1-N and ITC-2-N, where N denotes the order. The family ITC-1-N is derived for a conductivity or frequency scaled like 1/d, and ITC-2-N for a scaling like 1/d². We find that ITC-1-0 is the natural limit for d → 0, while ITC-1-1 and ITC-1-2 are of higher order for low frequencies. ITC-2-1 is of order 1 independent of the frequency and outperforms all the classical conditions.
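For orientation only: the small parameter at work here is the classical skin depth of a conductor, a standard eddy-current result (not a formula specific to this talk), in terms of angular frequency ω, permeability μ, and conductivity σ:

```latex
% Skin depth of a conducting material (standard eddy-current theory)
% omega: angular frequency, mu: permeability, sigma: conductivity
\delta = \sqrt{\frac{2}{\omega \mu \sigma}}
```

The ITC families discussed in the talk refine how this small parameter, together with the sheet thickness d, enters the transmission conditions.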
In the second part of the talk we will propose boundary integral equations (BIE) and boundary element methods for the classical and recent impedance transmission conditions.
Mathematical Image Processing via Compressed Sensing
Prof. Dr. Gitta Kutyniok, TU Berlin
29 May 2013, 17:15–18:45; Location: S2|14-24
Compressed Sensing is a research area introduced in 2006 that has since become a key concept in various areas of applied mathematics. It predicts that high-dimensional signals which admit a sparse representation in a suitable basis or frame can be recovered from highly incomplete linear measurements by efficient algorithms. In this talk, after first introducing this methodology, we discuss its application to imaging science. We then analyze, both theoretically and numerically, how this methodology can be utilized to solve the geometric separation problem in modern imaging, namely to separate an image into morphologically distinct components.
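The recovery problem at the heart of compressed sensing is commonly written as basis pursuit; this is the standard textbook formulation, included here for orientation rather than taken from the talk itself:

```latex
% Basis pursuit: recover a sparse signal x from incomplete
% linear measurements y = A x, with A having far fewer rows than columns
\min_{x} \; \|x\|_{1} \quad \text{subject to} \quad A x = y
```

Under suitable conditions on A, the l1-minimizer coincides with the sparsest solution, which is what makes recovery from highly incomplete measurements possible.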
This talk is provided together with the department of mathematics.
Finite-Element Simulation of Time-Harmonic Electromagnetic Fields
Prof. Dr. Romanus Dyczij-Edlinger, Universität des Saarlandes, Saarbrücken
27 May 2013, 16:15–17:45; Location: S2|17-103
A broad class of structures in electronics and high-frequency engineering exhibits linear time-invariant system properties. It ranges from antennas and filters through wiring structures of integrated circuits to the cavities of linear accelerators and the elementary cells of meta-materials. Such structures constitute distributed-parameter systems of high complexity, and their characterization usually requires numerical simulation methods at the level of electromagnetic fields.
This presentation gives an overview of typical modelling, solution, and post-processing techniques based on the method of finite elements in the frequency domain. With reference to a model problem representative of high-frequency engineering, a radiating structure fed by inhomogeneous waveguides, I will review the modal characterization of waveguides, its usage in three-dimensional driven problems, the modelling of concentrated circuit elements and radiation effects, as well as far-field computation techniques. Moreover, I will discuss the numerical properties of selected formulations and solution methods for large-scale systems of linear equations and algebraic eigenvalue problems. A few remarks on adaptive methods for error control as well as the treatment of parameter-dependent problems conclude the presentation.
Unveiling some Mysteries of Application Performance on Multi-/Manycore Processors
Prof. Dr. Gerhard Wellein, Friedrich-Alexander-University Erlangen
23 May 2013, 17:00–18:30; Location: S4|10-1
The multicore trend has brought forth a variety of computer architectures suitable for numerical simulation. Nowadays there is no longer a single driving force for numerical performance. To make efficient use of the hardware, the programmer must address multiple architectural features at the same time – otherwise performance is thrown back to the level of the early 2000s. The talk will give an overview of modern multi-/manycore architectures and point out essential programming issues that are prerequisite to benefiting further from advancements in processor technology.
Particles – bridging the gap between solids and fluids
Prof. Dr. Peter Eberhard, University of Stuttgart
14 May 2013, 17:00–18:30; Location: S4|10-1
First, an overview of the research done at the institute will be given, to set the framework for why we are interested in the computationally demanding particle methods. These methods have recently emerged as engineering tools that can be widely used in many different disciplines. Due to their meshless nature, they are especially well suited for problems with many discrete bodies that can move independently, or for problems with changing boundaries and topologies. Traditional discrete element approaches (DEM) can deal with millions of particles using appropriate interaction forces and efficient neighborhood search. However, these simulations are not based on continuum mechanics and require a lot of heuristics and experience. On the other hand, approaches like Smoothed Particle Hydrodynamics (SPH) directly discretize and solve partial differential equations and can be used to simulate solids, fluids, and even mixtures of both. Interestingly, DEM and SPH can efficiently share the same program environment. In this talk, besides an introduction to the main components of particle simulations, many application examples from solids and fluids will be shown.
Fast Solvers for Emerging Power Systems
Prof. Dr. Domenico Lahaye, TU Delft
15 Apr 2013, 16:15–17:45; Location: S2|17-103
National power grids are currently evolving from static entities, producing a mainly uni-directional flow from generation to loads, to more dynamic and decentralized structures. These emerging power systems should accommodate local generation by renewable sources and the peak demands of electric vehicle charging. The cross-border interconnection of power grids further imposes new challenges in the design, planning, and daily operation of these networks. In this talk we will present recent and ongoing developments in computational methods for these power systems. We will start by presenting results from the recent PhD thesis of Reijer Idema that demonstrate the scalability of a Newton-Krylov solver for the network state equations. Subsequently we will present how this work is continued in the PhD thesis of Martijn de Jong on the contingency analysis of the networks considered. We will conclude the talk by presenting preliminary results of the PhD project of Romain Thomas on the fast computation of fast transients in power systems. We will show how the three PhD projects discussed profit from adaptive multilevel techniques.
Structural optimization of asymmetric brake rotors for the separation of their eigenfrequencies
Andreas Wagner, TU Darmstadt
4 Apr 2013, 17:00; Location: S4|10-1
Suppression of brake squeal has received a lot of attention by engineering researchers in the past. It is widely accepted in the scientific community that the self-excited vibrations induced by the frictional contact between brake disc and brake pad are the main cause of brake squeal. Presently, a variety of methods are being discussed to avoid squealing, e.g. the introduction of damping or the active suppression of the resulting vibrations. Especially in the automotive industry passive measures for squeal avoidance are preferred to active ones due to lower costs and higher reliability. One promising passive approach is the introduction of asymmetry to the brake rotor, which has experimentally and mathematically been proven to avoid squeal.
This talk will focus on the design, modeling and optimization of such asymmetric brake rotors. An asymmetric brake rotor exhibits a separation of its eigenfrequencies, which is known to have a stabilizing effect and leads to a reduction in the squeal affinity of the brake system. In order to introduce an appropriate separation of eigenfrequencies, it is necessary to conduct a structural optimization of the brake disc with the requirement to allow for large changes in the geometry. While many commonly used modeling techniques require frequent remeshing of the structure to be optimized, a new modeling approach, which avoids this cumbersome remeshing process, is presented. This provides an efficient basis for the structural optimization of the brake rotors. Furthermore, a promising comparison between theoretical results and tests with prototype brake discs will be shown. Finally, the talk will be concluded with an overview of further applications of the presented modeling and optimization approach.
Efficient method to model the fluid lag in fluid-driven crack simulations
Prof. Yongxing Shen, Universitat Politecnica de Catalunya, Barcelona
7 Feb 2013, 17:00; Location: S4|10-1
We study the fluid-structure interaction encountered in hydraulic fracturing, a technique used by the oil and gas industry. Simulating this process requires taking into account the interaction between a fracturing solid and the fluid flow in the crack. In particular, with the possible presence of a fluid lag, the evolving fluid front inside the crack and the crack front (the crack tip in 2D) are constrained by a Kuhn-Tucker complementarity condition, analogous to the case of contact mechanics. Brute-force methods that keep track of both fronts usually require a costly prediction-correction scheme. In this presentation, we formulate this coupled problem such that the Kuhn-Tucker condition is accommodated through a variational inequality with respect to the liquid pressure. The resulting numerical methods (finite element method or displacement discontinuity method for the solid): (a) allow modeling the evolution of both fronts with a resolution consistent with the mesh size, (b) eliminate the need to explicitly track the fluid front, and (c) eliminate the need to switch boundary conditions along different parts of the crack front. All of these lead to substantial savings in computational cost. Numerical examples with a non-propagating fracture are used to verify the proposed methods, for future generalization to propagating fractures.
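A Kuhn-Tucker complementarity condition of the kind mentioned takes, generically, the following form; the symbols here are illustrative and not the talk's own notation:

```latex
% Generic Kuhn-Tucker complementarity in the lag region between
% the fluid front and the crack front (illustrative notation):
% p: fluid pressure, g: gap variable (e.g. distance between the two fronts)
p \ge 0, \qquad g \ge 0, \qquad p \, g = 0
```

Exactly as in contact mechanics, at each point either the constraint is active (g = 0, pressurized crack) or the multiplier vanishes (p = 0, fluid lag), which is what the variational-inequality formulation exploits.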