Together with the Computational Engineering Research Center of TU Darmstadt, a joint seminar with talks in the field of CE is organized every semester. If you are interested in these seminars and would like to receive invitations, please subscribe to the corresponding mailing list.
Transient Electromagnetic Field Coupling with an Airborne Vehicle in the Presence of its Conducting Exhaust Plume
Dr. Sisir Kumar Nayak, Royal Institute of Technology, Stockholm
14 Dec 2009, 15:30; Location: S2|17-103
An airborne vehicle and its payload are extremely expensive, and their loss as a result of a nearby lightning strike is highly undesirable. Even though one could launch only under ideal weather conditions, the cost of long launch delays can be very high. In this age of all-weather use of airborne launch vehicles, it has therefore become necessary to understand the behavior of an airborne vehicle with its exhaust plume when coupled to the strong electromagnetic fields generated by nearby lightning. When the lightning electromagnetic field couples to the vehicle, a current is induced on the skin of the vehicle body. The electromagnetic field generated by this induced current may in turn couple to the internal circuitry through apertures in the vehicle body. If the coupled electromagnetic energy exceeds the damage threshold of the sensitive devices in the control circuit, they may fail, which can lead to aborting the mission or to degraded vehicle performance. It has been reported that lightning-induced phenomena caused malfunctions and even the abortion of some lunar missions. In the present work, the current induced on the vehicle in the presence of a highly ionized long trailing plume has therefore been computed.
A theoretical analysis is carried out to estimate electrical parameters such as conductivity and permittivity and their distribution in the exhaust plume. The electrical conductivity depends on the distribution of the major ionic and neutral species produced by the combustion of the propellant, and in addition on the temperature and pressure distribution of the exhaust plume as well as on the generated shock wave. The species concentrations and other parameters from the combustion chamber up to the nozzle throat have been studied using a chemical-kinetics simulation of the combustion process. A computational fluid dynamics (CFD) analysis has been carried out to compute parameters such as pressure, temperature, and species concentration within the exhaust plume. From these parameters the electrical conductivity distribution within the plume has been computed, and this is used to compute the induced current on the skin of the vehicle using FDTD and MoM techniques.
The presence of the exhaust plume is found to enhance the induced current on the vehicle body several-fold, with the enhancement most prominent at the tail of the vehicle. The present computational results will therefore be useful for studying the electromagnetic interference and compatibility (EMI/EMC) aspects of the electronic devices in the vehicle's control circuit.
Monaural and binaural beamforming for hearing aids
Dr.-Ing. Henning Puder, Siemens Audiologische Technik
7 Dec 2009, 17:00; Location: S4|10-1
Hearing-impaired people have significant problems understanding speech in noisy environments. Damage to the outer hair cells in the cochlea causes reduced frequency resolution and increased frequency masking in auditory perception. The only way to compensate for this effect is to offer listeners a signal with reduced noise, i.e. a signal with a higher signal-to-noise ratio (SNR). Beamforming is currently still the only available solution that objectively enhances speech intelligibility.
In the first part, monaural beamforming techniques will be explained. They combine the signals of two or three microphones of one hearing aid in order to amplify signals from a desired direction – usually the front – and to attenuate ambient signals. The small microphone spacing and the limited allowed computational complexity are the main challenges for hearing aid applications.
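The combining step described above can be illustrated with a classical delay-and-sum beamformer, sketched below in NumPy; this is a generic textbook scheme with illustrative names, not the hearing-aid algorithm of the talk. Each channel is delayed so that wavefronts arriving from the look direction add coherently:

```python
import numpy as np

def delay_and_sum(signals, mic_positions, look_direction, fs, c=343.0):
    """Steer a linear microphone array toward look_direction by delaying
    and averaging the channel signals (fractional delays via FFT phase).

    signals:        array of shape (n_mics, n_samples)
    mic_positions:  array of shape (n_mics,), positions along the array axis in m
    look_direction: angle in radians relative to the array axis (broadside = pi/2)
    """
    n_mics, n_samples = signals.shape
    # Plane-wave arrival-time offsets for the look direction, per channel
    delays = mic_positions * np.cos(look_direction) / c  # seconds
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples)
    for m in range(n_mics):
        spectrum = np.fft.rfft(signals[m])
        # Apply the fractional delay as a linear phase in the frequency domain
        spectrum *= np.exp(-2j * np.pi * freqs * delays[m])
        out += np.fft.irfft(spectrum, n=n_samples)
    return out / n_mics
```

Averaging the aligned channels leaves the look-direction signal unchanged while partially cancelling signals from other directions, which is the SNR gain the abstract refers to.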
The second part of the talk is dedicated to future binaural applications of beamforming in hearing aids, i.e. the design of beamformers that combine the microphones of both hearing aids. The concept is to apply a binaural beamformer in order to further reduce interferers and to profit from the larger microphone spacing compared to single-device applications. The main roadblocks to an application in products are still the wireless data transmission between the hearing aids, head shadowing, and the generation of a binaural output, i.e. a signal for both ears. In particular, binaural cues have to be preserved to allow the hearing aid user to localize sound sources correctly. Different solution approaches with their specific advantages and disadvantages will be explained.
Turbulence transition in shear flows: What can we learn from pipe flow?
Prof. Dr. Bruno Eckhardt, Philipps-Universität Marburg
4 Nov 2009, 15:15; Location: S1|10-103
According to textbook wisdom, flow down a pipe becomes turbulent near a Reynolds number of about 2000. This simple statement misses many subtleties of the transition: the absence of a linear instability of the laminar flow, the sensitive dependence on perturbations that sometimes succeed and sometimes fail to induce turbulence, and the unexpected observation that the turbulent state, once achieved, is not persistent but can decay. All these observations are compatible with the formation of a strange saddle in the state space of the system. I will focus on three aspects: the appearance of 3-d coherent states, the information contained in lifetime statistics, and results on the boundary between laminar and turbulent regions. They suggest a generic structuring of state space in flows where turbulent and laminar flow coexist, such as plane Couette flow, Poiseuille flow and perhaps even boundary layers.
This talk is provided together with the Study Center of Mechanics.
Cognition: The Enabler of a New Generation of Engineering Systems
Prof. Dr. Simon Haykin, McMaster University, Hamilton, Canada
14 Sep 2009, 16:00; Location: S3|06-249
When we speak of cognition, we usually think of the human brain: A powerful and highly complex information-processing system. So, I set the stage for my lecture by describing the Action-Perception Cycle that is an inherent characteristic of the brain.
With the Action-Perception Cycle as the focus of attention throughout the lecture, I will then describe four engineering applications of cognition:
Cognitive Mobile Assistant for dealing with information overload in social networking.
Cognitive Radio for dealing with the spectrum underutilization problem.
Cognitive Tracking Radar for enhancing the resolution of targets beyond the reach of current radar systems.
Cognitive Energy Systems for dealing with the serious shortcomings of today's power grid.
In the course of describing Cognitive Tracking Radars, I will also describe a new generation of nonlinear filters, which we have named “Cubature Kalman Filters” and which are the closest known approximation to the Bayesian filter in a Gaussian environment.
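The spherical-radial cubature rule underlying such filters is compact: a Gaussian expectation is approximated by 2n equally weighted points placed at ±√n along the columns of a Cholesky factor of the covariance. A minimal sketch of that rule (illustrative names, not the speaker's code):

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial cubature points for N(mean, cov):
    2n points at mean +/- sqrt(n) * (column of chol(cov)), equal weights 1/(2n)."""
    n = mean.size
    L = np.linalg.cholesky(cov)                            # lower-triangular factor
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # (n, 2n) unit directions
    return mean[:, None] + L @ xi                          # shape (n, 2n)

def cubature_expectation(f, mean, cov):
    """Approximate E[f(x)] for x ~ N(mean, cov) by averaging f over the points."""
    pts = cubature_points(mean, cov)
    return sum(f(pts[:, i]) for i in range(pts.shape[1])) / pts.shape[1]
```

In a cubature Kalman filter, this rule replaces the intractable Gaussian integrals in the prediction and update steps when the dynamics or measurement functions are nonlinear.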
Mapping and Exploration for Search and Rescue with Humans and Mobile Robots
Dr. Alexander Kleiner, Albert-Ludwigs-Universität Freiburg
11 Sep 2009, 15:15; Location: S2|02-A102 (Robert-Piloty-Gebäude)
Urban Search And Rescue (USAR) is a time-critical task, since survivors have to be rescued within the first “golden” 72 hours. One goal in Rescue Robotics is to support emergency response by mixed-initiative teams consisting of humans and robots. Their task is to explore the disaster area rapidly while reporting victim locations and hazardous areas to a central station that plans rescue missions.
To fulfill this task efficiently, humans and robots have to map disaster areas jointly while coordinating their search at the same time. In essence, they have to solve autonomously and in real time the problem of Simultaneous Localization and Mapping (SLAM), which consists of a continuous tracking problem and a discrete data-association problem. In disaster areas, however, these problems are extraordinarily challenging.
Following the vision of combined multi-robot and multi-human teamwork, this talk directly addresses core problems such as position tracking on rough terrain, mapping by mixed teams, and decentralized team coordination with limited radio communication. More specifically, I will introduce RFID-SLAM, a method for robust and efficient loop closure in large-scale environments that utilizes RFID technology for data association. The method is capable of jointly improving multiple maps from humans and robots in a centralized and decentralized manner without requiring team members to perform loops on their routes. The introduced map representation is further utilized for solving the centralized and decentralized coordination of large rescue teams: on the one hand, a deliberative method for combined task assignment and multi-agent path planning is proposed, and on the other hand, a local search method that uses the memory of RFIDs for coordination.
Methods introduced in this talk were extensively evaluated in outdoor environments and official USAR testing arenas designed by the National Institute of Standards and Technology (NIST). Furthermore, some were an integral part of systems that won multiple awards at international competitions, such as the RoboCup world championships.
Extracting Hidden Structures from Data with Latent Variables
Dr. Roland Memisevic, University of Toronto, Canada
11 Sep 2009, 08:45; Location: S2|02-A102 (Robert-Piloty-Gebäude)
Latent variables have a long history in the modeling of complex statistical relationships. By autonomously capturing and representing hidden structures in data, they make it possible to build surprisingly elegant and simple models of a wide variety of complex phenomena. Classical examples of methods based on latent variables are hidden Markov models, Kalman filters, and principal component analysis. In this talk I will describe some modern developments in this field. In particular, I will describe approaches to estimating highly non-linear relationships, as well as methods that allow us to influence the extraction of latent structures in a user-driven (e.g. interactive) way. I will illustrate these methods with applications from statistical prediction, computer vision, and visualization.
Scientific Computing on Multi-GPU Systems
Dr. Robert Strzodka, MPI Saarbruecken
29 Jun 2009, 11:40; Location: S3|05-074
The advances in hardware functionality and programmability of graphics processors (GPUs) have greatly increased their appeal as add-on co-processors for scientific computing. However, for large-scale computations a desktop with a single GPU is not sufficient, and parallel workstations or even clusters with multiple GPUs must be considered. This talk will address the challenges of these heterogeneous computing systems and, in particular, discuss the integration of hardware acceleration into parallel large-scale software packages without requiring the user to change the existing application code.
This talk is provided together with the Department of Computer Science.
Interactive Texture-based Flow Visualization
Prof. Charles Hansen, SCI Institute, University of Utah
22 Jun 2009, 15:30; Location: S3|05-074
Flow fields play an important role in a wide range of scientific, engineering, and medical disciplines. Owing to advances in computing technology and computational fluid dynamics (CFD), we have recently seen a large number of flow datasets of ever-increasing size and complexity from numerical fluid simulations. In order to obtain valuable information from these data, it is essential to devise effective computational flow visualization methods, which can be highly useful for comprehending and analyzing the data. To be useful in a wide range of applications, the computational cost of generating such visualizations must not be overly high.
In the past few years, the texture-advection approach has been the de facto solution for flow visualization in the research community. This approach can be used to realize dense texture visualization and dye advection, where the former is designed to depict instantaneous local features in the entire domain, and the latter focuses on highlighting the spatio-temporal relationship between the injection site of the dye material and the rest of the domain. Presented as textures, the resulting visualizations are considered easy to understand, though at an elevated computational cost. Since both approaches can be realized as a texture-generation process, tremendous performance gains can be obtained by utilizing graphics hardware originally designed for rendering. Owing to differences in design paradigm and hardware constraints, however, many previously proposed methods have focused on performance while sacrificing the faithfulness of the resulting visualization.
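The dye-advection process described above is commonly realized as a semi-Lagrangian texture update: each cell traces backward along the velocity field and resamples the dye texture there. A minimal NumPy sketch with illustrative names (not one of the talk's specific methods):

```python
import numpy as np

def advect_dye(dye, vx, vy, dt):
    """One semi-Lagrangian advection step of a dye texture on a uniform grid.
    Each cell center is traced backward along the velocity and the dye is
    sampled there with bilinear interpolation (grid spacing = 1)."""
    ny, nx = dye.shape
    j, i = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    # Backtrace departure points, clamped to the domain
    x = np.clip(i - dt * vx, 0, nx - 1)
    y = np.clip(j - dt * vy, 0, ny - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, nx - 1), np.minimum(y0 + 1, ny - 1)
    fx, fy = x - x0, y - y0
    # Bilinear sample of the dye at the departure points
    top = dye[y0, x0] * (1 - fx) + dye[y0, x1] * fx
    bot = dye[y1, x0] * (1 - fx) + dye[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

The bilinear resampling is precisely where the accuracy-versus-performance tension arises: it is fast on graphics hardware but numerically diffusive, which motivates the accuracy-oriented schemes the talk presents.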
To tackle this problem, in this talk I will present several accuracy-oriented texture-based flow visualization methods for two-dimensional unsteady flows, unsteady flows on surfaces, and dye advection. Issues regarding the accuracy and faithfulness of the visualization are rigorously treated with algorithmically and physically correct solutions. These schemes are also designed to leverage parallelism that can be accelerated by the current generation of graphics hardware to achieve interactive performance.
This talk is provided together with the Fraunhofer Institute for Computer Graphics Research.
Simulation of Emergency Water Landings (Ditching) of Commercial Aircraft
Prof. Dr.-Ing. Thomas Rung, TU Hamburg-Harburg
17 Jun 2009, 15:15; Location: S1|03-252
On 15 January 2009, the pilot of a US Airways A320 succeeded in ditching on the Hudson River in New York after both engines had failed. All 155 occupants survived the ditching nearly uninjured. The talk addresses a niche area of aircraft engineering that deals with the ditching and flotation capability of aircraft.
A substantial part of air travel takes place over lakes and seas. For the certification of aircraft, the possibility of a successful emergency water landing (ditching) must therefore be demonstrated to the authorities. At the center of the talk is the simulation tool 'Ditch', which was developed at TUHH in cooperation with AIRBUS to support the design process and certification, and which is used for this purpose. The tool simulates the motion and structural loads of a ditching aircraft as a function of approach conditions, aircraft geometry, and sea state. It combines simple structural-mechanics models with efficient hydrodynamic approaches based on the work of von Karman and Wagner, together with semi-empirical fluid-mechanics models.
Following the thematic motivation, the requirements on the simulation tool are explained. This is followed by an overview of the essential methodological building blocks of the method. Finally, validation results compared against experiments and elaborate CFD simulations illustrate the achieved simulation quality.
This talk is provided together with the Study Center of Mechanics.
Model predictive control of flows
Prof. Dr. Michael Hinze, University of Hamburg
16 Jun 2009, 17:00; Location: S4|10-1
We exploit ideas from model predictive control to construct nonlinear distributed and boundary controllers for flows governed by the instationary Navier-Stokes equations or the Boussinesq approximation. We numerically test the performance of the controllers, considering as examples control of the wake of a circular cylinder in the laminar flow regime and thermally driven flow in a cavity.
Excitation-Adaptive FDTD Scheme Design by the Method of Undetermined Coefficients for Specified Spectral Orders of Accuracy
Bezalel Finkelstein, Tel-Aviv University
9 Jun 2009, 10:00; Location: S2|17-103
Some recent results in cooperative wireless relaying
Prof. Dr. Armin Wittneben, Swiss Federal Institute of Technology (ETH), Zurich
4 Jun 2009, 10:00; Location: S3|06-146
Existing wireless networks are designed for coexistence, i.e. essentially for interference avoidance. In contrast, cooperative wireless communication intentionally accepts interference to optimize network performance. In this talk we start out with a brief introduction to cooperative wireless communication and in particular wireless relaying. Subsequently we review three recent research results for relaying in cellular and ad hoc wireless networks:
In cellular systems, relaying can improve coverage and reduce the required number of base stations with infrastructure access. Conventional half-duplex relaying requires two channel uses from source to destination. Recently, two-way relaying has been proposed as an efficient way to recoup this loss. In this framework we consider a novel scheme which utilizes the typical self-interference of two-way relaying for channel estimation. In wireless ad hoc networks, coherent multiuser relaying schemes achieve a distributed spatial multiplexing gain by utilizing the relays for distributed interference cancellation. One drawback of these schemes is the large overhead introduced by the dissemination of channel state information. We discuss a new distributed gradient-based scheme which substantially reduces this overhead in two-hop relay networks.
Finally we consider coherent multiuser relaying schemes with multiple relay layers (multihop). Special emphasis is on gain allocation and the diversity-multiplexing tradeoff.
Implicit LES: Theory and Application
Prof. Dr.-Ing. Nikolaus A. Adams, Technische Universität München
20 May 2009, 15:15; Location: S1|03-252
A systematic framework for implicit LES (ILES) modeling will be presented. Unlike previous approaches to ILES, where given nonlinearly stable discretizations are employed without a priori analysis of the effect of the truncation error, the proposed framework ALDM (adaptive local deconvolution model) introduces physically consistent modeling to ILES. With ILES, numerical discretization and SGS modeling are fully merged, and physical SGS modeling leads to new and special requirements on the numerical discretization scheme. The basis of the solution approach is to devise a suitable nonlinear discretization environment with sufficient flexibility to allow for modeling. In the presentation we will describe this development, explain and analyse the special constraints on the numerical discretization, and show that an implicit model can be derived in a straightforward way from turbulence theory, resulting in reliable predictions of a wide range of flows without further ad hoc modifications. Current work extends the approach to increasingly complex flows by immersed-boundary techniques and wall modeling.
This talk is provided together with the Study Center of Mechanics.
Contributions to Spectral Element Methods and Applications in Fluid Mechanics
Dr.-Ing. habil. Jörg Stiller, Technische Universität Dresden
13 May 2009, 15:15; Location: S1|03-252
Spectral element methods (SEM) combine the accuracy of spectral methods with the flexibility of finite element methods. In practice, this claim is not always easy to realize.
This talk starts with some success stories from the SFB 609 at TU Dresden.
The second part is devoted to the implementation of nodal tetrahedral elements, with a focus on symmetry-based factorization techniques.
Finally, some aspects of discontinuous Galerkin methods and their application to aeroacoustic problems are considered.
This talk is provided together with the Study Center of Mechanics.
A Hybrid Optimization Scheme Combining Pattern Search and Gaussian Process
Genetha Gray, PhD, Sandia National Laboratories, Livermore
5 May 2009, 17:00; Location: S4|10-1
Every optimization technique has inherent strengths and weaknesses. Moreover, some optimization algorithms have characteristics that make them better suited to particular kinds of problems. Hybridization, the combining of two or more complementary but distinct methods, allows the user to take advantage of the beneficial elements of multiple methods. In this talk, we will describe an algorithm that combines statistical emulation via a Gaussian process with pattern search optimization. We will demonstrate the applicability of our hybrid method to the problem of calibrating a computational model of an electrical circuit. In addition, we will describe how the treed Gaussian process can be used as a post-processing tool to gain insight into the problem, and discuss the usefulness of hybrid schemes for incorporating uncertainty.
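The pattern-search half of such a hybrid can be sketched as a basic compass search, which polls the coordinate directions and contracts the step on failure; this is a generic textbook variant with illustrative names, not the speaker's implementation:

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Derivative-free compass (pattern) search: poll the 2n coordinate
    directions; move on improvement, otherwise halve the step size."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    fx = f(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])
    for _ in range(max_iter):
        if step < tol:
            break                      # stencil fully contracted: stop
        improved = False
        for d in directions:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:                # accept the first improving poll point
                x, fx = trial, ft
                improved = True
                break
        if not improved:
            step *= 0.5                # unsuccessful poll: contract the stencil
    return x, fx
```

In a hybrid scheme, a statistical emulator such as a Gaussian process can rank or prefilter the poll points so that expensive objective evaluations are spent where the surrogate predicts improvement.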
On the Manifold-Mapping Technique for Optimization Problems with Partial Differential Equation Constraints
Prof. Dr. Domenico Lahaye, TU Delft
27 Apr 2009, 16:00; Location: S2|17-103
Optimization problems with partial differential equation constraints are computationally challenging problems that appear in a wide range of engineering applications. Space-mapping techniques speed up their solution by extracting information from approximate models, such as analytical and lumped-parameter approximations. Early variants of the technique are not guaranteed to converge to a local minimizer. The theoretical framework of defect-correction iterations allowed this shortcoming to be fixed, resulting in the manifold-mapping variant. In this talk we will first review the basic principles of the manifold-mapping technique and then illustrate the computational speed-up it delivers in solving design problems in low-frequency electromagnetics, semiconductors, and aerospace applications.
Numerical Investigations for an Extension of FDTD on Tetrahedral Grids
Prof. Lorenzo Codecasa, Ph.D., Politecnico di Milano
16 Apr 2009, 16:00; Location: S2|17-114
The Finite-Difference Time-Domain (FDTD) algorithm, originally proposed by K. S. Yee, is one of the most common algorithms for the time-domain simulation of electromagnetic problems. This is essentially due to the fact that the FDTD algorithm is explicit, so that its computational complexity and memory requirements are low, and to the fact that the limit on the integration time step that ensures stability is not severe.
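The explicit leapfrog structure that makes the scheme cheap can be illustrated with a minimal 1-D vacuum Yee update in normalized units (c = dx = 1); all names and parameter values below are illustrative, not from the talk. The update remains stable as long as the Courant number dt/dx does not exceed 1:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300, courant=0.99):
    """Leapfrog Yee update for the 1-D vacuum Maxwell equations in
    normalized units (c = 1, dx = 1), with a soft Gaussian source at the
    center and reflecting (PEC) endpoints. Explicit: each step is two
    sweeps of nearest-neighbor differences, no linear solve."""
    ez = np.zeros(n_cells)        # E on integer grid points
    hy = np.zeros(n_cells - 1)    # H staggered at half grid points
    dt = courant                  # dt = courant * dx / c, with dx = c = 1
    for step in range(n_steps):
        hy += dt * (ez[1:] - ez[:-1])        # update H from the curl of E
        ez[1:-1] += dt * (hy[1:] - hy[:-1])  # update E from the curl of H
        ez[n_cells // 2] += np.exp(-((step - 30) / 10.0) ** 2)  # soft source
    return ez
```

The staggered grid and alternating half-step updates are exactly what the tetrahedral extension discussed in the talk must preserve to keep the scheme explicit.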
Recently, a novel technique has been proposed by the author for extending the FDTD algorithm to tetrahedral grids while preserving its merits. The performance of the algorithm will be discussed in this talk, in which the results of some numerical investigations will be presented.
Computability and Computational Complexity of/in Physical Theories
Dr. Martin Ziegler, TU Wien
9 Apr 2009, 14:30; Location: S4|10-1
We combine the classical theories of discrete computation (Turing computability and complexity classes like NP) and real number computation (Recursive Analysis and BSS model) with Theoretical Physics in order to
1. investigate physical foundations of computing (Church-Turing Hypothesis) and
2. devise a complexity theory of computer simulation including optimality proofs (upper and lower running time bounds) for simulation algorithms.
Understanding anisotropic mesh adaptation from the perspective of uniform meshes in a metric space: Theory and applications
Prof. Dr. Weizhang Huang, University of Kansas
12 Mar 2009, 17:00; Location: S1|03-123
Anisotropic mesh adaptation has proven to be a useful tool for enhancing accuracy and efficiency in the numerical solution of partial differential equations, especially those exhibiting anisotropic features in their solutions and/or structures. On the other hand, the mathematical characterization of anisotropic meshes has not been well understood. In this talk I will present an approach in which anisotropic meshes are viewed as uniform ones in some metric space. An advantage of this approach is that the simple characterization of uniform meshes leads to a clear mathematical characterization of adaptive, often non-uniform meshes. As a result, two conditions, the well-known equidistribution condition and a less-known alignment condition, are shown to characterize the size and the shape and orientation of the elements of anisotropic meshes, respectively.
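In one dimension, the equidistribution condition has a particularly simple form: mesh nodes are placed so that every cell carries the same integral of a monitor (metric density) function. A minimal sketch, with illustrative function names not taken from the talk:

```python
import numpy as np

def equidistribute(monitor, n_nodes, a=0.0, b=1.0, resolution=10000):
    """1-D equidistribution: place mesh nodes on [a, b] so that each cell
    carries the same integral of the monitor function rho > 0. Nodes are
    the preimages of uniform values under the cumulative monitor integral."""
    x = np.linspace(a, b, resolution)
    rho = monitor(x)
    # Cumulative integral of rho by the trapezoidal rule
    cum = np.concatenate([[0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cum[-1], n_nodes)
    return np.interp(targets, cum, x)  # invert the cumulative integral
```

A constant monitor reproduces a uniform mesh, while a monitor concentrated where the solution varies rapidly clusters nodes there; the alignment condition adds the shape and orientation control that only matters in higher dimensions.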
Applications of the characterization to anisotropic mesh generation and error analysis in finite element computation will be discussed. Numerical examples and applications will be given.
Simulation of microelectronic structures using potential-field solving and optimal parameter-setting methods
Wim Schoenmaker, CTO, MAGWEL N.V., Leuven (B)
5 Feb 2009, 16:00; Location: S2|17-103
The simulation of electromagnetic phenomena using field solvers for semiconductor structures demands the use of potentials and vector potentials, because the local energy, more than the local force, determines the electron and hole distributions. The ideas underlying the finite-integration method are applied to the potential fields, leading to a discretization scheme that can be viewed as a unification of electromagnetic (EM) field solving with technology computer-aided design (TCAD). In this presentation the unification into EM-TCAD is reviewed, and the newly created software representing this unification is applied to a series of typical design problems encountered in the microelectronics industry. Examples are on-chip inductors, on-chip capacitors and interconnects, as well as multi-finger RF-CMOS devices and substrate-noise isolation structures. It turns out that the iterative process of finding matching simulation results and physically acceptable technology parameters is a non-trivial task that can be supported extensively by state-of-the-art optimization tools and robust-design facilities.
Time Domain Electromagnetic Algorithms and Their Application to Photonics
Prof. Trevor Benson, University of Nottingham
29 Jan 2009, 16:00; Location: S2|17-103
Electromagnetic modelling for photonics shares many common problems with other application areas. These include the requirements for multi-scale modelling, with an accurate description of small features that can significantly influence the performance of a much larger system; addressing multi-physics problems operating on different timescales; highly accurate truncation of the computational workspace; describing the behaviour of various materials; and addressing large problem sizes with significant run-times. In this talk we will review the origin of some of these common problems and illustrate a number of techniques that we use to address them, including numerical schemes based on unstructured meshes, fine-feature descriptions, and the interfacing of numerical codes with other schemes.
Discontinuous Galerkin Methods for Unsteady Wave and Diffusion Problems
Prof. Claus-Dieter Munz, Universität Stuttgart
22 Jan 2009, 17:00; Location: S2|17-103
The talk gives an overview of the development of discontinuous Galerkin (DG) methods for unsteady problems. Owing to their flexibility and the quality of their approximate solutions, these methods are seen as a future means of making numerical simulations in complex geometries more efficient. The DG method can be understood as a combination of a finite element and a finite volume method. As with FV methods, approximations are admitted that need only be piecewise continuous, so that phenomena can be approximated for which a continuous approximation is no longer stable. On the other hand, the approximate function in each grid cell is a polynomial, which avoids the costly reconstruction from integral data required by FV methods and makes it straightforward to raise the order of accuracy of the scheme.
The talk will in particular present a class of methods for unsteady problems that allows not only local refinement and coarsening of the grid but also adaptation of the order of the scheme in space and time. The locality of the method makes it possible for each grid cell to advance with its optimal local time step. Various numerical results will be shown, both for linear acoustic and electromagnetic wave problems and for nonlinear waves and diffusion.