
Modellansatz - English episodes only (Gudrun Thäter, Sebastian Ritterbusch)
Explore every episode of Modellansatz - English episodes only
Pub. Date | Title | Duration | |
---|---|---|---|
06 Oct 2016 | Robots | 00:20:36 | |
This is another conversation Gudrun had during the British Applied Mathematics Colloquium which took place 5th – 8th April 2016 in Oxford. Since 2002 Anette Hosoi has been Professor of Mechanical Engineering at MIT (in Cambridge, Massachusetts). She is also a member of the Mathematical Faculty at MIT. After her undergraduate education at Princeton she moved to Chicago for a Master's degree and her PhD in physics. Anette Hosoi wanted to do fluid dynamics even before she had taken any course on the topic. She then started to work as Assistant Professor at MIT, where everyone wanted to build robots. So she had to find an intersection between fluids and robots. Her first project was Robo-snails with her student Brian Chan. Snails move using a thin film of fluid under their foot (and muscles). Since then she has been working on the fascinating boundary of flow and biomechanics. At the BAMC she was invited to give a plenary lecture on "Marine Mammals and Fluid Rectifiers: The Hydrodynamics of Hairy Surfaces". It started with a video from Boston Dynamics which showed the terrific abilities some human-like robots have today. Nevertheless, these robots are rigid systems with a finite number of degrees of freedom. Anette Hosoi works in control and fluid mechanics and got interested in soft systems in the context of robots of a new type. Soft systems are a completely new way to construct robots, and for that one has to rethink everything from the bottom up. "You are a dreamer", she was told more than once for that. For example, octopuses (and snails) move completely differently from us and from most of the animals that classically designed robots with two, four or more legs copy. At the moment the investigation of those motions is partially triggered by the need for plausible visualization in computer games and in animated movie sequences. A prominent example for that is the contribution of two mathematicians at UCLA to representing all interactions with snow in the animated movie Frozen. The short version of their task was to get the physics right when snow falls off trees or people fall into snow - otherwise it just doesn't look right. To operate robots which are not built with mechanical devices but use properties of fluids to move, one needs valves and pumps to control flow. They should be cheap and efficient and without any moving parts (since moving parts cause problems). A famous early example of such a component is the fluidic rectifier patented by Nikola Tesla in the 1920s. His device relied on inertia. But in small devices, as needed for the new robots, there is no inertia. So Anette Hosoi and her group need to implement new mechanisms. A promising effect is elasticity - especially in channels. Another is putting hair on the boundary of channels. Hair can cause asymmetric behaviour in the system: in one direction it bends easily with the flow, while in the opposite direction it might hinder the flow. While trying to come up with clever ideas for the new type of robots, the group found a topic which is present (almost) everywhere in biology - which means a gold mine for research and open questions. Of course, hair interacts with the flow and is not just a rigid boundary, and one has to admit that in real-life applications the related flow region usually is not small (i.e. not negligible in modelling and computations). Mathematically speaking, the model needs a change in the treatment of the boundary layer. This is clear from the observations and the sought-after applications. But it is clear from the mathematical model as well.
At the moment they are able to treat the case of low Reynolds number and the linear Stokes equation, which of course is a simplification. But for that case the new boundary conditions are not too complicated and can be treated similarly to porous media (i.e. one has to find an effective permeability). Fortunately, even analytic solutions could be calculated. As next steps it would be very interesting to model plunging hairy surfaces into fluids or withdrawing hairy surfaces from fluids (which is even more difficult). This would have a lot of interesting applications, and a first question could be to find optimal hair arrangements. This would mean copying tricks of bat tongues, as people at Brown University are doing. References
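As a rough illustration of the porous-media analogy mentioned above, one common way to write such a model is to add a Brinkman-type drag term with an effective permeability k inside the hairy layer (a sketch under that assumption, not necessarily the exact formulation used by Hosoi's group):

```latex
\begin{aligned}
-\mu \Delta \mathbf{u} + \nabla p &= 0, \qquad \nabla\!\cdot\mathbf{u} = 0 && \text{(free fluid)},\\
-\mu \Delta \mathbf{u} + \frac{\mu}{k}\,\mathbf{u} + \nabla p &= 0, \qquad \nabla\!\cdot\mathbf{u} = 0 && \text{(hairy layer, effective permeability } k\text{)}.
\end{aligned}
```

In such a description the hair geometry enters only through the effective permeability k, and the asymmetric behaviour described above would show up as different effective resistances for the two flow directions.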
| |||
07 May 2015 | Waves | 00:48:10 | |
Prof. Enrique Zuazua is a Distinguished Professor of Ikerbasque (Basque Foundation for Science) and Founding Scientific Director at the Basque Center for Applied Mathematics (BCAM), which he helped to bring into being in 2008. He is also Professor (on leave) of Applied Mathematics at the Universidad Autónoma de Madrid (UAM) and a Humboldt Awardee at the University of Erlangen-Nuremberg (FAU). He was invited by the PDE group of our Faculty in Karlsruhe to join our work on wave phenomena for some days in May 2015. In our conversation he admits that waves have held his interest since his work as a PhD student in Paris at the Université Pierre-et-Marie-Curie in the world-famous group of Jacques-Louis Lions. Indeed, waves are everywhere. They are visible in everything which vibrates and are an integral part of life itself. In our work as mathematicians, very often the task is to influence waves and vibrating structures, like houses or antennae, such that they remain stable. This leads to control problems like feedback control for elastic materials. In these problems it is unavoidable to always have a look at the whole process. It starts with modelling the problem into equations, analysing these equations (existence, uniqueness and regularity of solutions and well-posedness of the problem), finding the right numerical schemes and validating the results against the process which has been modelled. Very often there is a large gap between the control of the discrete process and the numerical approximation of the model equations, and some of these differences are explainable in the framework of the theory of hyperbolic partial differential equations rather than being down to numerical or calculation errors. In the studies of Prof. Zuazua the interaction between the numerical grid and the propagation of waves of different frequencies leads to very intuitive results which also provide clear guidelines on what to do about the so-called spurious wave phenomena produced by high frequencies, an example of which is shown in this podcast episode image. This is an inherent property of this sort of equations, which are able to model the many variants of waves which exist. They are rich but also difficult to handle. This difficulty is visible in the number of results on existence, uniqueness and regularity, which is tiny compared to elliptic and parabolic equations, but also in the difficulty of finding the right numerical schemes for them. On the other hand, they have the big advantage that they are well suited for finding effective methods on massively parallel computers. Also there is a strong connection to so-called inverse problems, both on the theoretical side and through applications where the measurement of waves is used to find oil and water in the ground (see, e.g., our podcast episode Modell004 on Oil Exploration). Prof. Zuazua has a lot of experience in working together with engineers. His first joint project was shape optimization for airfoils. The geometric form and the waves around it interact in a lot of ways and on different levels. Also water management has a lot of interesting and open questions on which he is working with colleagues in Zaragoza. At the moment there is a strong collaboration with the group of Prof. Leugering in Erlangen, which is involved in a Transregio research initiative on gas networks - a fascinating topic ranging from our everyday expectation to have a reliable water and gas supply at home to the latest mathematical research on control.
Of course, in working with engineers there is always a certain delay (in both directions), since the culture and the results and questions have to be translated and formulated in a relevant form between engineers and mathematicians. In dealing with these questions there are two main risks: firstly, one finds wrong results which are obviously wrong, and secondly, wrong results which look right but are wrong nonetheless. Here it is the crucial role of mathematicians to have the right framework to find these errors. Prof. Zuazua is a proud Basque. Of the roughly 2.5 million Basques, most live in Spain, where their culture and language have minority status. But since the end of the Franco era this has been translated into special efforts to promote culture and education in the region. In less than 40 years this transformed the society immensely and led to modern universities and relevant science and culture which grew out of "nothing". Now Spain and the Basque country have strong bonds to the part of Europe on the other side of the Pyrenees, and especially with industry and research in Germany. The Basque university has several campuses and teaches 40,000 students. This success could be a good example of how to extend our education system - and the possibilities it provides for young people, which are so much a part of our culture in Europe - across the boundaries of our continent.
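To make the spurious high-frequency waves mentioned above a bit more concrete, here is a small illustrative computation (my own generic example, not taken from Prof. Zuazua's work). For the standard central-difference (leapfrog) discretisation of the 1D wave equation, the discrete dispersion relation shows that well-resolved low-frequency waves travel at almost the correct speed, while poorly resolved high-frequency waves travel much more slowly - these slow, grid-dependent components are the spurious waves.

```python
import numpy as np

# Discrete dispersion relation of the leapfrog scheme for u_tt = c^2 u_xx:
#   sin^2(omega*dt/2) = lam^2 * sin^2(k*dx/2),   lam = c*dt/dx (CFL number).
# The numerical phase velocity is omega/k; the exact one is c.
c, dx = 1.0, 0.01
lam = 0.8
dt = lam * dx / c

for points_per_wavelength in (32, 8, 4, 2.5):
    k = 2 * np.pi / (points_per_wavelength * dx)          # wave number
    omega = (2.0 / dt) * np.arcsin(lam * np.sin(k * dx / 2.0))
    print(f"{points_per_wavelength:>5} points per wavelength: "
          f"numerical speed {omega / k:.3f}  (exact speed {c})")
```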
02 Jul 2015 | Acoustic Scattering | 00:40:39 | |
Prof. Francisco Sayas from the Department of Mathematical Sciences of the University of Delaware in Newark visited our faculty in June 2015. He is an expert in the simulation of the scattering of transient acoustic waves. Scattering is a phenomenon in the propagation of waves. An interesting example from our everyday experience: when sound waves hit obstacles, the wave field gets distorted. So, in a way, we can "hear" the obstacle. Sound waves are scalar, namely changes in pressure. Other wave types scatter as well but can have a more complex structure. For example, seismic waves are elastic waves and travel at two different speeds as primary and secondary waves. As mathematicians, we can abandon the application completely and say a wave is defined as a solution of a wave equation. Hereby, we also mean finding these solutions in appropriate function spaces (which represent certain properties of the class of solutions), but it is a very global look at different wave properties and gives a general idea about waves. The equations are treated as an entity in their own right. Only later in the process does it make sense to compare the results with experiments and to decide whether the equations fit or are too simplified. Prof. Sayas started out in a "safe elliptic world" with well-established and classical theories such as the mapping property between data and solutions. But for the study of wave equations there is today no classical or standard method; instead, very many different tools are used to find different types of results, such as the preservation of energy. Sometimes it is obvious that the results cannot be optimal (or sharp) if, e.g., properties like convexity of obstacles do not play any role in obtaining them. And many questions are still wide open. Also, the numerical methods must be well designed. Up to now, transient waves are the most challenging and interesting problem for Prof. Sayas. They include all frequencies and propagate in time. So it is difficult to find the correct speed of propagation, and dispersion also enters the configuration. On the one hand, the existence and regularity together with other properties of solutions have to be shown, but on the other hand, it is necessary to calculate the propagation process - i.e. the solutions - numerically for simulations. There are many different numerical schemes for bounded domains. Prof. Sayas prefers FEM and combines it with boundary integral equations to represent the effects of the outer domain. The big advantage of the boundary integral representation is that it is physically correct, but unfortunately it is very complicated and all points on the boundary are interconnected. Finite elements fit well into a black-box approach, which explains their popularity among engineers. The regularity of the boundary can be really low if one chooses Galerkin methods. The combination of both tools is a bit tricky, since the solver for the wave equation needs data on the boundary which it has to get from the boundary element code, and vice versa. Because of this coupling it is already clear that integrating the different tools in the code is an important part of the work and has to be done in a way that all users who will improve the code in the future can understand what is happening. Prof. Sayas is fascinated by his research field. This is also due to its educational aspect: the challenging mathematics and the still largely unsettled set of tools, together with the intensive computational part of his work.
The area is still wide open, and one has to explain the mathematics to other people interested in the results. In his career he started out studying finite elements at the University of Zaragoza and worked on boundary elements with his PhD supervisor from France. After some time he was looking for a challenging new topic and found his present field, in which he can combine both. He worked for three years at the University of Minnesota (2007-2010) and decided to pursue his future at a university in the U.S. In this way he arrived at the University of Delaware and is very satisfied with the opportunities in his field of research and the chances for young researchers. Literature and additional material
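As an example of the kind of boundary integral representation mentioned above (the classical retarded single-layer potential for the 3D acoustic wave equation with speed c; the formulation actually used in this research may differ in detail), the solution in the unbounded exterior is expressed through a density λ that lives only on the boundary Γ:

```latex
u(x,t) \;=\; \int_{\Gamma} \frac{\lambda\bigl(y,\, t - |x-y|/c\bigr)}{4\pi\,|x-y|}\,\mathrm{d}\Gamma_y,
\qquad x \notin \Gamma .
```

This representation automatically satisfies the wave equation and the correct behaviour at infinity, but it couples all points of the boundary with each other through the retarded times - exactly the complication mentioned above.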
| |||
30 Jul 2015 | Splitting Waves | 00:19:44 | |
To separate one single instrument from the acoustic sound of a whole orchestra - just by knowing its exact position - gives a good idea of the concept of wave splitting, the research topic of Marie Kray. Interestingly, an approach for solving this problem was found by investigating side effects of absorbing boundary conditions (ABCs) for time-dependent wave problems; perfectly matched layers are an important example of ABCs. Marie Kray works in the Numerical Analysis group of Prof. Grote in the Mathematics Department of the University of Basel. She completed her PhD in 2012 at the Laboratoire Jacques-Louis Lions in Paris and received her professional education in Strasbourg and Orsay. Since boundaries occur at the surface of volumes, the boundary manifold has one spatial dimension less than the physical domain under consideration. Therefore, the treatment of normal derivatives, as in the Neumann boundary condition, needs special care. The implicit Crank-Nicolson method turned out to be a good numerical scheme for integrating the time derivative, and an upwinding scheme solved the discretized hyperbolic problem in space. An alternative approach to separate the signals from several point sources or scatterers is to apply global integral boundary conditions and to assume a time-harmonic representation. The presented methods have important applications in medical imaging: a wide range of methods work well for single scatterers, but tumors often tend to spread to several places. This severely impedes inverse-problem reconstruction methods such as the TRAC method, but the separation of waves enhances the use of these methods on problems with several scatterers. Literature and additional material
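For reference, the Crank-Nicolson time discretisation mentioned above, written here for a generic semi-discretised evolution equation u'(t) = A u(t) (a generic textbook form, not the specific system treated in the episode), averages the right-hand side between the old and the new time level and is second-order accurate and unconditionally stable for linear problems:

```latex
\Bigl(I - \tfrac{\Delta t}{2}\,A\Bigr)\,u^{n+1} \;=\; \Bigl(I + \tfrac{\Delta t}{2}\,A\Bigr)\,u^{n},
\qquad u^{n} \approx u(n\,\Delta t).
```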
| |||
01 Oct 2015 | Nanophotonics | 00:39:26 | |
Nanophotonics is one great path into our future, since it makes it possible to build e.g. absorbers, emitters or amplifiers on a scale of a few dozen nanometers. To use this effectively we will have to understand firstly the resonances of plasmons and secondly the interaction of electromagnetic waves with complex media. Here, on the one hand we can model light as waves and describe what happens for the different frequencies of monochromatic light waves. We have to model the evolution in air or in more complex media. On the other hand - taking the more particle-centered point of view - we can try to model the reaction of the photons to certain stimuli. The modelling is still in progress and is explored in many different ways. The main focus of our guest Claire Scheid, who is working on nanophotonics, is to solve the corresponding partial differential equations numerically. It is challenging that nanoscale features have to remain visible in a discretization of a macroscopic domain. So one needs special ideas for a geometrical description of the changing properties of the material. Even on the fastest available computers, making these computations fast and precise enough is still the bottleneck. A special property which has to be reflected in the model is the delayed response of the material to incoming light waves - depending also on the frequency of the light (which is connected to its velocity), a phenomenon known as dispersion. So an equation for the evolution of the electron polarization must be added to the standard model (which is the Maxwell system). One can say that the model for the permittivity has to take into account the whole history of the process. Mathematically this is done through a convolution operator in the equation. The same phenomenon can also be described in frequency space. In general, the work in this field is possible only in good cooperation and interdisciplinary interaction with physicists - which also makes it especially interesting. Since 2009 Claire Scheid has worked at INRIA Méditerranée in Sophia-Antipolis as part of the Nachos team and teaches at the University of Nice as a member of the Laboratoire Dieudonné. She did her studies at the Ecole Normale Supérieure in Lyon and later in Paris VI (Université Pierre et Marie Curie). For her PhD she moved to Grenoble and then spent two years as a postdoc at the University of Oslo (Norway). Literature and additional material
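As an illustration of the convolution/auxiliary-equation idea described above, a standard textbook choice for a dispersive metal is the Drude model (an assumption here - the episode does not say which dispersion model is used): an ordinary differential equation for the polarization current J is coupled to Maxwell's equations, which is equivalent, up to sign conventions, to a frequency-dependent permittivity:

```latex
\begin{aligned}
\varepsilon_0\,\partial_t \mathbf{E} &= \nabla \times \mathbf{H} - \mathbf{J},\\
\mu_0\,\partial_t \mathbf{H} &= -\,\nabla \times \mathbf{E},\\
\partial_t \mathbf{J} + \gamma\,\mathbf{J} &= \varepsilon_0\,\omega_p^2\,\mathbf{E},
\end{aligned}
\qquad\Longleftrightarrow\qquad
\varepsilon(\omega) = \varepsilon_0\left(1 - \frac{\omega_p^2}{\omega^2 + i\,\gamma\,\omega}\right).
```

In the time domain the third equation replaces the convolution with the history of E; in the frequency domain the same information is carried by ε(ω).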
| |||
29 Oct 2015 | Electrodynamics | 00:29:16 | |
This episode discusses the Born-Infeld model of electrodynamics. Here, the standard model is given by the Maxwell equations, which describe the interaction of the magnetic and electric fields with the help of a system of partial differential equations. This is a well-understood classical system. But in this classical model, one serious drawback is that the action of a point charge (which is represented by a Dirac measure on the right-hand side) leads to an infinite energy in the electric field, which physically makes no sense. On the other hand, it should be possible to study the electric field of point charges, since this is how the electric field is created. One solution to this challenge is to slightly change the point of view, in a way similar to Einstein's theory of special relativity. There, instead of the classical expression for the kinetic energy, one takes a Lagrangian which keeps the velocity of particles bounded by the speed of light. In the electromagnetic model, the Lagrangian would analogously have to restrict the intensity of the fields. This was the idea which Born and Infeld published already in the first half of the last century. For the resulting system it is straightforward to calculate the fields of point charges. But unfortunately it is impossible to simply add the fields of several point charges (no superposition principle), since the resulting theory (and the PDE) are nonlinear. Physically this expresses that the point charges do not act independently of each other; the model accounts for a certain interaction between the charges. Probably this interaction is really only important if charges are near enough to each other, and locally the field should mainly be influenced by the nearest charge. But it has not been possible to prove that up to now. The electrostatic case is elliptic but has a singularity at each point charge. So no classical regularity results are directly applicable. On the other hand, there is an interesting interplay with geometry, since the PDE occurs as the mean curvature equation for hypersurfaces in Minkowski space in relativity. The evolution problem is completely open. In the static case we have existence and uniqueness, which follow from the way the system is built without really looking at the PDE. The PDE should provide at least qualitative information on the electric field. So if, e.g., there is a positive charge there could be a maximum of the field (for negative charges a minimum, respectively), and we would expect the field to be smooth outside these singular points. So a Lipschitz-regular solution would seem probable. But it is open how to prove this mathematically. A special property is that the model has infinitely many inherent scales, namely all even powers of the gradient of the field. So understanding asymptotic limits in these scales could be a first interesting step. Denis Bonheure received his mathematical education at the Free University of Brussels and currently works there as Professor of Mathematics. Literature and additional material
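For orientation, the classical Born-Infeld Lagrangian density (quoted from the standard literature in Gaussian-type units, with b the maximal field strength) replaces the Maxwell Lagrangian and reduces to it for weak fields; the expansion of the square root is also where the "infinitely many scales", i.e. the higher even powers of the fields, come from:

```latex
\mathcal{L}_{\mathrm{BI}}
= b^2\left(1 - \sqrt{\,1 - \frac{|\mathbf{E}|^2 - |\mathbf{B}|^2}{b^2} - \frac{(\mathbf{E}\cdot\mathbf{B})^2}{b^4}\,}\right)
\;=\; \frac{|\mathbf{E}|^2 - |\mathbf{B}|^2}{2} + O\!\left(b^{-2}\right)
\quad\text{for } |\mathbf{E}|,|\mathbf{B}| \ll b .
```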
| |||
10 Dec 2015 | Population Models | 00:22:49 | |
How do populations evolve? This question inspired Alberto Saldaña to his PhD thesis on Partial symmetries of solutions to nonlinear elliptic and parabolic problems in bounded radial domains. He considered an extended Lotka-Volterra model describing the dynamics of two species, for example wolves, in a bounded radial domain: for each species, the model contains a diffusion term together with rates for birth, saturation and aggressiveness. Starting from an initial condition, i.e. a given distribution of the two species over the domain, the system then evolves in time. On a bounded domain, the prescribed boundary conditions are an important aspect of the mathematical model: in this setup, a homogeneous Neumann boundary condition can represent a fence, which no one, or no wolf, can cross, whereas a homogeneous Dirichlet boundary condition assumes a lethal boundary, such as an electric fence or a cliff, which sets the density of living, or surviving, individuals touching the boundary to zero. The initial conditions, that is the distributions of the wolf species, were quite general but assumed to be nearly reflectionally symmetric. The analytical treatment of the system was less tedious in the case of Neumann boundary conditions due to reflection symmetry at the boundary, similar to the method of image charges in electrostatics. The case of Dirichlet boundary conditions needed more analytical results, such as Serrin's boundary point lemma. It turned out that asymptotically, in both cases, the two species separate into two symmetric functions. Here, Saldaña introduced a new aspect to this problem: he let the birth rate, saturation rate and aggressiveness rate vary in time. This time-dependence models seasons, since, for example, the wolves' behaviour depends on food availability. The Lotka-Volterra model can also be adapted to a predator-prey setting or a cooperative setting, where the two species live symbiotically. In the latter case, there also is an asymptotic solution, in which the two species do not separate - they stay together. Alberto Saldaña started his academic career in Mexico, where he found his love for mathematical analysis. He then did his PhD in Frankfurt, and now he is a postdoc in the Mathematics Department at the University of Brussels. Literature and additional material
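A generic sketch of this kind of system (written here in a standard competitive Lotka-Volterra form with time-dependent coefficients; the precise model in the thesis may differ in its details):

```latex
\begin{aligned}
\partial_t u - d_1\,\Delta u &= u\,\bigl(a_1(t) - b_1(t)\,u - c_1(t)\,v\bigr) && \text{in } \Omega,\\
\partial_t v - d_2\,\Delta v &= v\,\bigl(a_2(t) - b_2(t)\,v - c_2(t)\,u\bigr) && \text{in } \Omega.
\end{aligned}
```

Here the a_i(t) are birth rates, the b_i(t) saturation rates and the c_i(t) aggressiveness (competition) rates, complemented by either homogeneous Neumann conditions (the fence) or homogeneous Dirichlet conditions (the lethal boundary) on the boundary of the radial domain Ω.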
| |||
17 Dec 2015 | Transparent Boundaries | 00:26:17 | |
If we are interested in the propagation of waves around a small region of interest, e.g. an obstacle inside a very big ("unbounded") domain, one way to bring such problems to the computer and solve them numerically is to cut the unbounded domain down to a bounded one. But to have a well-posed problem we have to prescribe boundary conditions on the so-called artificial boundary, which are not inherent in our original problem. This is a classical problem which is not only connected to wave phenomena. Sonia Fliss is interested in so-called transparent boundary conditions. These are boundary conditions on the artificial boundaries with just the right properties. There are several classical methods, like perfectly matched layers (PML) around the region of interest. They are built to absorb incoming waves (by a complex stretching of the space variable). But unfortunately this does not work for non-homogeneous media. Traditionally, boundary integral equations were also used to construct transparent boundary conditions. But in general, this is not possible for anisotropic media (or heterogeneous media, e.g. with periodic properties). The main idea in the work of Sonia Fliss is quite simple: she surrounds the region of interest with half spaces (three or more). Then, the solutions in each of these half spaces are determined by Fourier transform (or Floquet waves for periodic media, respectively). The difficulty is that in the overlap of the different half spaces the representations of the solutions have to coincide. Sonia Fliss proposes a method which ensures that this is true (possibly under certain compatibility conditions). The chosen number of half spaces does not change the method very much. The idea is charmingly simple, but the proof that these solutions exist and have the right properties is more involved. She is still working on making the proofs easier to understand and apply. It is a fun fact that complex media were the starting point for the idea, and only afterwards did it become clear that it also works perfectly well for homogeneous (i.e. much less complex) media. One might consider these to be very theoretical results, but they lead to numerical simulations which match our expectations, are quite impressive, and would be impossible without knowing the right transparent boundary conditions. Sonia Fliss is still very fascinated by the many open theoretical questions. At the moment she is working at the Ecole Nationale Supérieure des Techniques Avancées (ENSTA) near Paris as Maître de conférences. Literature and additional material
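To make the phrase "complex stretching of the space variable" concrete, here is the standard one-dimensional form of the PML coordinate change for a time-harmonic problem (a textbook formulation; conventions vary, and this is not specific to the work discussed):

```latex
\tilde{x} \;=\; x + \frac{i}{\omega}\int_0^x \sigma(s)\,\mathrm{d}s,
\qquad \sigma \ge 0, \quad \sigma \equiv 0 \ \text{outside the layer}.
```

An outgoing wave e^{ikx} then becomes e^{ikx} e^{-(k/ω)∫σ}: it keeps its phase but decays exponentially inside the layer, without creating reflections at the interface - which is exactly the absorption property mentioned above.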
| |||
24 Mar 2016 | Complex Geometries | 00:32:39 | |
Sandra May works at the Seminar for Applied Mathematics at ETH Zurich and visited Karlsruhe for a talk at the CRC Wave phenomena. Her research is in numerical analysis, more specifically in numerical methods for solving PDEs. The focus is on hyperbolic PDEs and systems of conservation laws. She is both interested in theoretical aspects (such as proving stability of a certain method) and practical aspects (such as working on high-performance implementations of algorithms). Sandra May graduated with a PhD in Mathematics from the Courant Institute of Mathematical Sciences (part of New York University) under the supervision of Marsha Berger. She likes to look back on the multicultural working and learning experience there. We talked about the numerical treatment of complex geometries. The main problem is that it is difficult to automatically generate grids for computations on the computer if the shape of the boundary is complex. Examples for such problems are the simulation of airflow around airplanes, trucks or racing cars. Typically, the approach for these flow simulations is to put the object in the middle of the grid. Appropriate far-field boundary conditions take care of the right setting of the finite computational domain on the outer boundary (which is cut from an infinite model). Typically in such simulations one is mainly interested in quantities close to the boundary of the object. Instead of using an unstructured or body-fitted grid, Sandra May is using a Cartesian embedded boundary approach for the grid generation: the object with complex geometry is cut out of a Cartesian background grid, resulting in so-called cut cells where the grid intersects the object and Cartesian cells otherwise. This approach is fairly straightforward and fully automatic, even for very complex geometries. The price to pay comes in the shape of the cut cells, which need special treatment. One particular challenge is that the cut cells can become arbitrarily small since a priori their size is not bounded from below. Trying to eliminate cut cells that are too small leads to additional problems which conflict with the goal of a fully automatic grid generation in 3d, which is why Sandra May keeps these potentially very small cells and develops specific strategies instead. The biggest challenge caused by the small cut cells is the small cell problem: easy to implement (and therefore standard) explicit time stepping schemes are only stable if a CFL condition is satisfied; this condition essentially couples the time step length to the spatial size of the cell. Therefore, for the very small cut cells one would need to choose tiny time steps, which is computationally not feasible. Instead, one would like to choose a time step appropriate for the Cartesian cells and use this same time step on cut cells as well. Sandra May and her co-workers have developed a mixed explicit-implicit scheme for this purpose: to guarantee stability on cut cells, an implicit time stepping method is used on cut cells. This idea is similar to the approach of using implicit time stepping schemes for solving stiff systems of ODEs. As implicit methods are computationally more expensive than explicit methods, the implicit scheme is only used where needed (namely on cut cells and their direct neighbors). In the remaining part of the grid (the vast majority of the grid cells), a standard explicit scheme is used. Of course, when using different schemes on different cells, one needs to think about a suitable way of coupling them.
The mixed explicit-implicit scheme has been developed in the context of finite volume methods. The coupling has been designed with the goals of mass conservation and stability and is based on using fluxes to couple the explicit and the implicit scheme. This way, mass conservation is guaranteed by construction (no mass is lost). In terms of stability of the scheme, it can be shown that using a second-order explicit scheme coupled to a first-order implicit scheme by flux bounding results in a TVD stable method. Numerical results for coupling a second-order explicit scheme to a second-order implicit scheme show second-order convergence in the L^1 norm and between first- and second-order convergence in the maximum norm along the surface of the object in two and three dimensions. We also talked about the general issue of handling shocks in numerical simulations properly: in general, solutions to nonlinear hyperbolic systems of conservation laws such as the Euler equations contain shocks and contact discontinuities, which in one dimension express themselves as jumps in the solution. For a second-order finite volume method, typically slopes are reconstructed on each cell. If one reconstructed these slopes using e.g. central difference quotients in one dimension close to shocks, this would result in oscillations and/or unphysical results (like negative density). To avoid this, so-called slope limiters are typically used. There are two main ingredients to a good slope limiter (which is applied after an initial polynomial based on interpolation has been generated): first, the algorithm (slope limiter) needs to detect whether the solution in this cell is close to a shock or whether the solution is smooth in the neighborhood of this cell. If the algorithm thinks that the solution is close to a shock, the algorithm reacts and adjusts the reconstructed polynomial appropriately. Otherwise, it sticks with the polynomial based on interpolation. One commonly used way in one dimension to identify whether one is close to a shock or not is to compare the values of a right-sided and a left-sided difference quotient. If they differ too much the solution is (probably) not smooth there. Good reliable limiters are really difficult to find. Literature and additional material
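As a small illustration of the limiting idea just described, here is a generic minmod limiter in one dimension (one of the standard textbook limiters, not necessarily the one used in this work): the cell slope is built from the left-sided and right-sided difference quotients and set to zero where they disagree in sign, i.e. near a suspected shock.

```python
import numpy as np

def minmod(a, b):
    """Return the smaller-magnitude argument, or zero if the signs disagree."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u, dx):
    """Limited slopes for cell averages u on a uniform 1D grid (interior cells only)."""
    left = (u[1:-1] - u[:-2]) / dx     # left-sided difference quotient
    right = (u[2:] - u[1:-1]) / dx     # right-sided difference quotient
    return minmod(left, right)

# Example: a smooth region followed by a jump ("shock") at x = 1
dx = 0.1
x = np.arange(0.0, 2.0, dx)
u = np.where(x < 1.0, np.sin(x), 0.0)
print(limited_slopes(u, dx))           # the slopes vanish near the jump
```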
| |||
14 Apr 2016 | Crop Growth | 00:20:47 | |
This is the first of four conversations Gudrun had during the British Applied Mathematics Colloquium which took place 5th – 8th of April 2016 in Oxford. Josie Dodd finished her Master's in Mathematical and Numerical Modelling of the Atmosphere and Oceans at the University of Reading. In her PhD project she is working in the Mathematical Biology Group inside the Department of Mathematics and Statistics in Reading. In this group she develops models that describe plant and canopy growth of the Bambara groundnut - especially the plant interaction when grown as part of a crop. The project is interdisciplinary, and interaction with biologists is encouraged by the funding entity. Why is this project so interesting? In general, the experimental effort to understand crop growth is very costly and takes a lot of time. So it is a great benefit to have cheaper and faster computer experiments. The project studies the Bambara groundnut since it is a candidate for adding to our food supply in the future. It is a remarkably robust crop, drought tolerant and nitrogen enriching, which means that the production of yield does not depend on fertilizer. The typical plant grows 150 days per year. The study will find results for which verification and parameter estimation from actual greenhouse data are available. On the other hand, all experience on the modelling side will be transferable to other plants up to a certain degree. The construction of the mathematical model includes finding equations which are simple enough but cover the main processes, as well as numerical schemes which solve them effectively. At the moment, temperature and solar radiation are the main inputs to the model. In the future, it should include rain as well. Another important parameter is the placement of the plants - especially when asking for arrangements which maximize the yield. Analyzing the available data from the experimental partners leads to three nonlinear ODEs for each plant. Also, the leaf production has a Gaussian relationship with time and temperature. The results then enter the biomass equation. The growth process of the plant is characterized by a change of the rate of change over time. This is a property of the plant that leads to nonlinearity in the equations. Nevertheless, the model has to stay as simple as possible, while firstly bridging the gap to complicated and more precise models, and secondly staying interpretable, so that non-mathematicians are able to use it and understand its behaviour. This is the main group for which the models should be a useful tool. So far, the model for interaction with neighbouring plants is the computationally more costly part, where - of course - geometric considerations of overlapping canopies have to enter the model. Though it does not yet consider many plants (since greenhouse-sized experimental data are available), the model scales well to a big number of plants due to its inherent symmetries. Since at the moment the optimization of the arrangement of plants has priority, a lot of standardization and simplifying assumptions are applied. For the future, more parameters such as the input of water should be included, and it would be nice to have more scales. Such additional scales would include the root system or other biological processes inside the plant. Of course, the greenhouse is well controlled, and available field data are less precise due to the difficulty of measurements in the field.
During her work on the project and as a tutor, Josie Dodd found out that she really likes computer programming. Since it is applicable to so many things, these skills open a lot of doors. Therefore, she would encourage everybody to give it a try. Literature and additional material
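Purely as an illustrative toy example (my own sketch, not the Reading model described above): crop-growth models of this kind often couple a temperature-driven growth rate to a logistic-type biomass equation, which can then be integrated with a standard ODE solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy single-plant biomass model (illustrative only, all parameters hypothetical):
# a temperature-dependent growth rate drives logistic biomass accumulation.
T_BASE, T_OPT = 10.0, 28.0          # base / optimal temperatures (deg C)
R_MAX, B_MAX = 0.12, 60.0           # max growth rate (1/day) and max biomass (g)

def temperature(t):
    """Simple daily-mean temperature over the 150-day season (deg C)."""
    return 22.0 + 6.0 * np.sin(2.0 * np.pi * t / 150.0)

def growth_rate(T):
    """Growth rate increasing above a base temperature, capped at R_MAX."""
    return R_MAX * np.clip((T - T_BASE) / (T_OPT - T_BASE), 0.0, 1.0)

def rhs(t, y):
    B = y[0]                         # current biomass
    return [growth_rate(temperature(t)) * B * (1.0 - B / B_MAX)]

sol = solve_ivp(rhs, (0.0, 150.0), [0.5])
print("biomass after 150 days:", sol.y[0, -1])
```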
| |||
05 May 2016 | Viscoelastic Fluids | 00:21:11 | |
This is the second of four conversations Gudrun had during the British Applied Mathematics Colloquium which took place 5th – 8th of April 2016 in Oxford. Helen Wilson always wanted to do maths and had imagined herself becoming a mathematician from a very young age. But after graduation she did not have any road map ready in her mind. So she applied for jobs which - due to a recession - did not exist. Today she considers herself lucky for that, since she took a Master's course instead (at Cambridge University), which hooked her on mathematical research in the field of viscoelastic fluids. She stayed for a PhD, did postdoctoral work in the States after that, and then lectured at Leeds University. Today she is a Reader in the Department of Mathematics at University College London. So what are viscoelastic fluids? If we consider everyday fluids like water or honey, it is a safe assumption that their viscosity does not change much - it is a material constant. Those fluids are called Newtonian fluids. All other fluids, i.e. fluids with non-constant viscosity or even more complex behaviours, are called non-Newtonian, and viscoelastic fluids are a large group among them. As the name already suggests, viscoelastic fluids combine viscous and elastic behaviour. Elastic effects in fluids often stem from clusters of particles or long polymers in the fluid, which align with the flow. It takes them a while to come back when the flow pattern changes. We can consider that as keeping a memory of what happened before. This behaviour can be observed, e.g., when stirring tinned tomato soup and then waiting for it to come to rest again. Shortly before it finally enters the rest state, one sees it springing back a bit before coming to a halt. This is a motion necessary to complete the relaxation of the soup. Another surprising behaviour is the so-called Weissenberg effect, where in a rotating elastic fluid the stretched-out polymer chains drag the fluid into the center of the rotation. This leads to a peak in the center, instead of the funnel which we expect from our experience of stirring tea or coffee. The big challenge with all non-Newtonian fluids is that we do not have equations which we know are the right model. It is mostly guess work, and we definitely have to be content with approximations. The simplest models take the so-called retarded fluid assumption, i.e. the elastic properties are considered to be only weak. Then, one can expand around the Newtonian model as a base state. Of course there is a plethora of interesting questions connected to complex fluids. The main question in the work of Helen Wilson is the stability of the flow of those fluids in channels, i.e. how does it react to small perturbations? Do they vanish in time, or could they build up to completely new flow patterns? In 1999, she published results of her PhD thesis and predicted a new type of instability for a shear-thinning material model. It was to her great joy that in 2013 experimentalists found flow behaviour which could be explained by her predicted instability. More precisely, in the 2013 experiments a dilute polymer solution was sent through a microchannel. The material model for the fluid is shear-thinning, as in Helen Wilson's thesis. The experimentalists observed oscillations from side to side of the channel and surprising noise in the maximum flow rate. This could only be explained by an instability which they did not know about at that moment.
In a microchannel inertia is negligible and the Reynolds number is very low. Of course, even for the simplest non-linear models one arrives at highly non-linear equations. In order to analyse the stability of solutions to them, one first needs to know the corresponding steady flow. Fortunately, starting with the simplest non-linear models in a channel, one can still find the steady flow as an analytic solution with paper and pencil, since one arrives at a 1D ODE which is independent of time and of one of the two space variables. The next question then is: how does it respond to small perturbations? The classical procedure is to linearize around the steady flow, which leads to a linear problem to solve in order to know the stability properties. The basic (steady) flow allows for Fourier transformation, which leads to a problem with two scalar parameters - one real and one complex. The general structure is an eigenvalue problem which can only be solved numerically. Once we know the eigenvalues, we know about the (so-called linear) stability of the solution. An even more interesting research area is so-called non-linear stability. But it is still an open field of research, since it has to keep the non-linear terms. The difference between the two strategies (i.e. linear and non-linear stability) is that the linear theory predicts instability to the smallest perturbations, while the non-linear theory describes what happens after finite-amplitude instability has begun, and can find larger instability regions. Sometimes (but unfortunately quite rarely) both theories find the same point, and we get a complete picture of when a stable region changes into an unstable one. One other really interesting field of research for Helen Wilson is to find better constitutive relations, especially since the often-used power law has built-in unphysical behaviour (which means it is probably too simple). For example, taking a power law with negative exponent means that the viscosity blows up where the shear rate vanishes. In the middle of the flow there is therefore a singularity (we would divide by zero), and perturbations are not able to cross the center line of a channel. Also, it is unphysical that according to the usual models the shear-thinning fluid should be instantly back in a state of high viscosity after switching off the force. For example, most ketchup gets liquid enough to serve only when we shake it. But it is not instantly thick after the shaking stops - it takes a moment to solidify. This behaviour is called thixotropy. Literature and additional material
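For reference, the power-law (Ostwald-de Waele) viscosity mentioned above can be written generically as follows; for a shear-thinning exponent the power of the shear rate is negative, so the viscosity diverges where the shear rate vanishes - which is exactly the center-line singularity discussed in the episode:

```latex
\mu(\dot{\gamma}) \;=\; K\,\dot{\gamma}^{\,n-1},
\qquad n < 1 \ \text{(shear thinning)}
\;\Longrightarrow\;
\mu(\dot{\gamma}) \to \infty \ \text{as } \dot{\gamma} \to 0 .
```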
| |||
02 Jun 2016 | Banach-Tarski Paradox | 00:27:47 | |
Nicolas Monod teaches at the École Polytechnique Fédérale de Lausanne and leads the Ergodic and Geometric Group Theory group there. In May 2016 he was invited to give the Gauß lecture of the German Mathematical Society (DMV) at the Technical University in Dresden. He presented 100 Jahre Zweisamkeit – The Banach-Tarski Paradox. The morning after his lecture we met to talk about paradoxes and the hidden assumptions our mind makes when struggling with geometrical representations and measures. A very well-known game is Tangram. Here a square is divided into seven pieces (which are all polygons). These pieces can be rearranged, e.g. by moving them around on the table. The task for the player is to form given shapes using the seven pieces – like a cat, for example. Of course the Tangram cat looks more like a flat Origami cat. But we could take the Tangram idea and use thousands or millions of little pieces to build a much more realistic cat with them – as with pixels on a screen. In three dimensions one can play a similar game with pieces of a cube. This could lead to a LEGO-like three-dimensional cat, for example. In this traditional Tangram game, there is no fundamental difference between the versions in dimension two and three. But in 1914 it was shown that, given a three-dimensional ball, there exists a decomposition of this ball into a finite number of subsets which can then be rearranged to yield two identical copies of the original ball. This sounds like a magical trick – or, put more scientifically, like a paradoxical situation. It is now known under the name Banach-Tarski paradox. In his lecture, Nicolas Monod dealt with the question: why are we so surprised about this result and think of it as paradoxical? One reason is the fact that we believe we know deeply what volume is and expect it to be preserved under rearrangements (as in the Tangram game). The impact of the Banach-Tarski paradox on our understanding of volume is then similar to the shift in understanding the relation between time and space brought about by Einstein's relativity theory (which dates from about the same time). In short, the answer is: in our everyday concept of volume we trust in too many good properties at once. It was Felix Hausdorff who looked at the axioms which should be valid for any measure (such as volume). It should be independent of the point in space where we measure (and of the coordinate system), and if we divide objects, the measures should add up properly. In our understanding there is a third hidden property: the concept "volume" must make sense for every subset of space we choose to measure. Unfortunately, it is a big problem to assign a volume to any given object, and Hausdorff showed that these three properties cannot all be true at the same time in three space dimensions. Curiously, they can be satisfied in two dimensions but not in three. Of course, we would like to understand why there is such a big difference between two and three space dimensions that the naive concept of volume breaks down when going over to the third dimension. To see this, let us consider motions. Any motion can be decomposed into translations (i.e. gliding) and rotations around an arbitrarily chosen common center. In two dimensions the order in which one performs several rotations around the same center does not matter, since one can freely interchange all rotations and obtain the same result. In three dimensions this is not possible – in general the outcomes after interchanging the order of several rotations will be different.
This breaking of symmetry ruins the good properties of the naive concept of volume. Serious consequences of the Banach-Tarski paradox are not that obvious. No one has really duplicated a ball in real life. But measure theory is the basis of the whole of probability theory and its countless applications. There, we have to understand several counter-intuitive concepts in order to have the right understanding of probabilities and risk. More anecdotally, an idea of Bruno Augenstein is that in particle physics certain transformations are reminiscent of the Banach-Tarski phenomenon. Nicolas Monod really enjoys the beauty and the liberty of mathematics. One does not have to believe anything without a proof. In his opinion, mathematics is the language of the natural sciences, and he considers himself a linguist of this language. This means, in particular, taking a closer look at our thought processes in order to investigate both the richness and the limitations of our models of the universe. References:
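The non-commutativity of rotations described above is easy to verify numerically. The following small self-contained check (my own illustration, not part of the lecture) confirms that two plane rotations about the same center always commute, while two rotations in space about different axes generally do not:

```python
import numpy as np

def rot2d(a):
    """Rotation of the plane by angle a."""
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

def rot_x(a):
    """Rotation of space by angle a about the x-axis."""
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_z(a):
    """Rotation of space by angle a about the z-axis."""
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

a, b = 0.7, 1.3
print(np.allclose(rot2d(a) @ rot2d(b), rot2d(b) @ rot2d(a)))   # True: 2D rotations commute
print(np.allclose(rot_x(a) @ rot_z(b), rot_z(b) @ rot_x(a)))   # False: 3D rotations do not
```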
| |||
13 Oct 2016 | Crime Prevention | 00:40:34 | |
This is the last of four conversations Gudrun had during the British Applied Mathematics Colloquium which took place 5th – 8th April 2016 in Oxford. Andrea Bertozzi from the University of California in Los Angeles (UCLA) held a public lecture on The Mathematics of Crime. She has been Professor of Mathematics at UCLA since 2003 and Betsy Wood Knapp Chair for Innovation and Creativity (since 2012). From 1995 to 2004 she worked mostly at Duke University, first as Associate Professor of Mathematics and then as Professor of Mathematics and Physics. As an undergraduate at Princeton University she studied physics and astronomy alongside her major in mathematics, and she went through the PhD program at Princeton as well. For her thesis she worked in applied analysis and studied fluid flow. As a postdoc she worked with Peter Constantin at the University of Chicago (1991-1995) on global regularity for vortex patches. But even more importantly, this was the moment when she found research problems that needed knowledge about PDEs and flow but, in addition, both numerical analysis and scientific computing. She found out that she really likes to collaborate with very different specialists. Today much of the hard work can be carried out on a desktop, but occasionally clusters or supercomputers are necessary. The initial request to work on the mathematics of crime came from a colleague, the social scientist Jeffrey Brantingham. He works in Anthropology at UCLA and had well-established contacts with the police in LA. He was looking for mathematical input on some of his problems and raised that issue with Andrea Bertozzi. Her postdoc George Mohler came up with the idea to adapt an earthquake model after a discussion with Frederic Paik Schoenberg, a world expert in that field working at UCLA. The idea is to model crimes of opportunity as being triggered by crimes that already happened. So the likelihood of new crimes can be predicted as an excitation in space and time, like the shock of an earthquake. Of course, statistical models are necessary here which say how the excitation is distributed and decays in space and time. Mathematically this is a self-exciting point process. The traditional Poisson process model has a single parameter and thus no memory - i.e. no connections to other events can be modelled. The Hawkes process builds on the Poisson process as background noise but adds new events, which then themselves trigger further events according to an excitation rate and an exponential decay of the excitation over time. This is a memory effect based on actual events (not only on a likelihood), and it is a three-parameter model (a minimal simulation sketch appears after the reference lists below). It is not too difficult to process field data, fit data to that model and make an extrapolation in time. Meanwhile, the results of that idea work really well in the field. Results of field trials both in the UK and US have just been published, and there is a commercial product available providing services to the police. In addition to coming up with useful ideas and having an interdisciplinary group of people committed to making them work, it was necessary to find funding in order to support students to work on that topic. The first grant came from the National Science Foundation, and from this time on the group included George Tita (UC Irvine), a criminology expert on LA gangs, and Lincoln Chayes as another mathematician in the team. The practical implementation of this crime prevention method for the police is as follows: before the police officers go out on a shift, they usually meet to divide their teams over the area they are serving.
The teams take the crime prediction for that shift, which is calculated by the computer model on the basis of whatever data is available up to the shift. According to the expected spots of crimes, they assign teams to monitor those areas more closely. After introducing this method into the police work in Santa Cruz (California), the police observed a significant reduction of 27% in crime. Of course this is a wonderful success story. Another success story involves the career development of the students and postdocs, who now have permanent positions. Since this was the first group in the US to bring mathematics to police work, this opened a lot of doors for the young people involved. Another interesting topic in the context of mathematics and crime is gang crime data. As in the crime prediction model, the attack of one gang on a rival gang usually triggers another event soon afterwards. A well-chosen group of undergraduates is already mathematically educated enough to study the temporal distribution of gang-related crime in LA, with 30 street gangs and a complex net of enemies. We are speaking about hundreds of crimes in one year related to the activity of gangs. The mathematical tool which proved to be useful was a maximum likelihood penalization model, again for the Hawkes process, applied to the expected retaliatory behaviour. A more complex problem, which was treated in a PhD thesis, is to single out gangs which would probably be responsible for certain crimes. This means solving the inverse problem: we know the time and the crime and want to find out who did it. The result was published in Inverse Problems in 2011. The tool was a variational model with an energy which is related to the data. The missing information is guessed and then put into the energy. By finding the best guess with respect to the chosen energy model, a probable candidate for the crime is found. For a small number of unsolved crimes one can just go through all possible combinations. For hundreds of unsolved crimes, all combinations cannot be handled. We make it easier by enlarging the set of choices and formulating a continuous instead of the discrete problem, for which the optimization works with a standard gradient descent algorithm. A third topic and a third tool is compressed sensing. It looks at sparsity in data, like the probability distribution for crime in different parts of the city. Usually the crime rate is high in certain areas of a city and very low in others. For these sharp changes one needs different methods, since we have to allow for jumps. Here the total variation enters the model as a regularization term. When Andrea Bertozzi was a young child she spent a lot of Sundays in the Science Museum in Boston and wanted to become a scientist when grown up. The only problem was that she could not decide which science would be the best choice, since she liked everything in the museum. Today she says that, having chosen applied mathematics, she can indeed do all science, since mathematics works as a connector between sciences and opens a lot of doors. References
Examples of work by undergraduates
Publications of A. Bertozzi and co-workers on crime prevention
Related Podcasts: British Applied Mathematics Colloquium 2016 Special
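The sketch promised above: a minimal simulation of a self-exciting Hawkes process with background rate mu, excitation strength alpha and exponential decay rate beta (the three parameters mentioned in the episode), using Ogata's thinning algorithm. This is a generic textbook implementation for illustration, not the code behind the published field trials.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_end, seed=0):
    """Simulate event times of a Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    on [0, t_end] using Ogata's thinning algorithm."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < t_end:
        # The current intensity is an upper bound until the next event,
        # because the excitation only decays between events.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)          # candidate waiting time
        if t >= t_end:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:         # accept with prob lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, t_end=100.0)
print(len(events), "events; clustering is visible in the gaps:", np.diff(events)[:10])
```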
| |||
03 Nov 2016 | Filters | 00:28:03 | |
Liliana de Luca Xavier Augusto is a PhD student of chemical engineering at the Federal University of São Carlos in Brazil. She spent one year of her PhD (October 2015 – October 2016) at the KIT in Karlsruhe to work with the group developing the software OpenLB at the Mathematics Department and the Department of Chemical Engineering. Liliana Augusto investigates filtering devices which work on a micro scale. Lattice Boltzmann methods are not very prominent in Brazil. She was looking for suitable partners and found the development group around OpenLB, which had co-operations with Brazil. She tried to apply the software to her problem, and she found out about the possibility to work in Germany through a program of the Brazilian government. It is not so common to go abroad as a PhD student in Brazil. She learnt a lot, not only academically, and highly recommends going abroad to experience new cultures as well. She does not speak German - everything, from looking for partners to arriving in Germany, happened so fast that she could not learn the language beforehand. At the university, English was more than sufficient for scientific work, but she had difficulties finding a place to stay. In the end, she found a room in a student dorm with German students and a few other international students. References
| |||
01 Dec 2016 | Homogenization | 00:56:00 | |
Andrii Khrabustovskyi works at our faculty in the group Nonlinear Partial Differential Equations and is a member of the CRC Wave phenomena: analysis and numerics. He was born in Kharkiv in Ukraine and finished his studies as well as his PhD at Kharkiv National University and the Institute for Low Temperature Physics and Engineering of the National Academy of Sciences of Ukraine. He joined our faculty in 2012 as a postdoc in the former Research Training Group 1294 Analysis, Simulation and Design of Nanotechnological Processes, which was active until 2014. Gudrun Thäter talked with him about one of his research interests: asymptotic analysis and homogenization of PDEs. Photonic crystals are periodic dielectric media in which electromagnetic waves from certain frequency ranges cannot propagate. Mathematically speaking, this is due to gaps in the spectrum of the related differential operators. An interesting question is therefore whether there are gaps between bands of the spectrum of operators related to wave propagation, especially on periodic geometries and with periodic coefficients in the operator. It is known that the spectrum of periodic self-adjoint operators has band structure. This means the spectrum is a locally finite union of compact intervals called bands. In general, the bands may overlap, and the existence of gaps is therefore not guaranteed. A simple example is the spectrum of the Laplacian on the whole space, which is a half-line and thus has no gaps. Homogenization is a collection of mathematical tools which are applied to media with strongly inhomogeneous parameters or highly oscillating geometry. Roughly speaking, the aim is to replace the complicated inhomogeneous medium by a simpler homogeneous one with similar properties and characteristics. In our case we deal with PDEs with periodic coefficients in a periodic geometry which is considered to be infinite. In the limit of a characteristic small parameter going to zero, it behaves like a corresponding homogeneous medium. To make this a bit more mathematically rigorous, one considers a sequence of operators depending on a small parameter (e.g. the cell size or material properties) and proves certain properties in the limit as the parameter goes to zero. The optimal result is that the sequence converges to some operator which is the right homogeneous one. If this limit operator has gaps in its spectrum, then the gaps are also present in the spectra of the pre-limit operators (for small enough parameter). The advantages of the homogenization approach compared to the classical one with Floquet-Bloch theory are:
An interesting geometry in this context is a domain with periodically distributed holes. The question arises: what happens if the sizes of the holes and the period go to zero simultaneously? The easiest operator which we can study is the Laplace operator subject to Dirichlet boundary conditions. There are three possible regimes, depending on how fast the holes shrink compared to the period:
A traditional ansatz in homogenization works with the concept of so-called slow and fast variables. The name comes from the following observation: if we consider an infinite layer in cylindrical coordinates, then the variable r measures the distance from the origin when going "along the layer", while the coordinate across the thin layer varies on a much smaller scale and plays the role of the fast variable. There are many more tools available, like the technique of Tartar and Murat, who use a weak formulation with special test functions depending on the small parameter. The weak point of that theory is that we first have to know the result of the limit process before we can construct the test functions. Also the concept of Gamma-convergence or the unfolding trick of Cioranescu is helpful. An interesting and new application of the mathematical results is the construction of waveguides. The corresponding domain in which we place a waveguide is bounded in two directions and unbounded in one (e.g. an unbounded cylinder). References
| |||
22 Dec 2016 | Julia Sets | 00:58:09 | |
Pascal Kraft is a researcher at the Institute for Applied and Numerical Mathematics of the Karlsruhe Institute of Technology (KIT) and he introduces us to Julia sets, which he investigated for his Bachelor's thesis. It is natural for us to think something like this: if I take two simple things and put them together in some sense, nothing too complex should arise from that. A fascinating result of the work of mathematicians like Gaston Julia (in the early 20th century) and Benoît Mandelbrot (decades later) shows that this assumption doesn't always hold. In his Bachelor's thesis under the supervision of Jan-Philipp Weiß, Pascal Kraft worked on the efficient computation of Julia sets. In layman's terms you can describe these sets as follows: some electronic calculators have the function of repeating the last action if you press "=" or "enter" multiple times. So if you used the root function of your calculator on a number and now you want the root of the result, you simply press "=" again. Now imagine you had a function on your calculator that didn't only square the input but also added a certain value - say 0.5. Then you put in a number, apply this function and keep repeating it over and over again. Now you ask yourself whether, if you keep pressing the "=" button, the result keeps on growing and tends to infinity or stays below some threshold indefinitely. Using real numbers this concept is somewhat boring, but if we use complex numbers we find that the results are astonishing. To use a more precise definition: for a function f_c(z) = z² + c with a complex parameter c, the filled Julia set consists of all starting values z₀ in the complex plane for which repeated application of f_c stays bounded. The results turn out to be surprising and worth the effort. The geometric representations - images - of filled Julia sets turn out to be very aesthetically pleasing, since they are not simple compositions of elementary shapes but rather consist of intricate shapes and patterns. The reason for these beautiful shapes lies in the nature of multiplication and addition on the complex plane: a multiplication combines scaling (magnification or shrinking) and rotation, whereas complex addition is a translation in the complex plane. Since the function is applied over and over again, these intrinsic features are repeated in scaled and rotated form, and this results in self-similarity on all scales. In his Bachelor's thesis, Pascal focussed on the efficient computation of such sets, which can mean multiple things: it can either mean that the goal was to quickly write a program which could generate an image of a Julia set, or that a program was sought which computes such an image very fast. Lastly it can also mean that we want to save power and seek a program which uses computational power efficiently to compute such an image, i.e. that consumes little energy. This is a typical problem when considering a numerical approach in any application and it arises very naturally here: while the computation of Julia sets can greatly benefit from parallelization, the benefits are lost when many tasks are waiting for one calculation, and then the speedup and computational efficiency break down, as described by Amdahl's law (a minimal serial escape-time implementation is sketched below). The difference between these optimization criteria becomes especially obvious when we want to do further research on top of the problem solver we have used so far. The Mandelbrot set, for example, is the set of parameter values c for which the corresponding filled Julia set is connected - equivalently, for which the iteration starting at z₀ = 0 stays bounded. Since the computation of a Julia set can even be done in a web browser these days, the episode page includes a little tool which lets you set a complex parameter c and explore the resulting set. References and further reading
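As a rough illustration of the escape-time idea described above, here is a minimal vectorized sketch; it is not Pascal Kraft's optimized or parallel code, and the parameter c, the resolution and the iteration limit are arbitrary choices:

```python
# Minimal escape-time computation of a (filled) Julia set for f_c(z) = z^2 + c.
# Toy sketch with arbitrary parameters; not an optimized or parallel implementation.
import numpy as np
import matplotlib.pyplot as plt

c = -0.8 + 0.156j                                # arbitrary example parameter
n_iter, escape_radius = 200, 2.0
x = np.linspace(-1.6, 1.6, 800)
y = np.linspace(-1.2, 1.2, 600)
z = x[np.newaxis, :] + 1j * y[:, np.newaxis]     # grid of starting values z_0

counts = np.zeros(z.shape, dtype=int)            # iterations until |z_n| escapes
mask = np.ones(z.shape, dtype=bool)              # points that have not escaped yet
for _ in range(n_iter):
    z[mask] = z[mask] ** 2 + c                   # apply f_c only where still bounded
    mask &= np.abs(z) <= escape_radius
    counts[mask] += 1

# Points that never escape (counts == n_iter) approximate the filled Julia set.
plt.imshow(counts, extent=(-1.6, 1.6, -1.2, 1.2), cmap="magma", origin="lower")
plt.title("Escape-time plot for f_c(z) = z^2 + c")
plt.show()
```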
| |||
19 Jan 2017 | Rage of the Blackboard | 00:42:21 | |
Constanza Rojas-Molina is a postdoc at the Institute of Applied Mathematics of the University of Bonn. Gudrun Thäter met her in Bonn to talk about Constanza's blog The Rage of the Blackboard. The blog's title makes reference to an angry blackboard, but also to the RAGE theorem, named after the mathematical physicists D. Ruelle, W. Amrein, V. Georgescu, and V. Enss. Standing at a blackboard can be intimidating, and quite a few might remember moments of anxiety when being asked to develop an idea in front of others at the blackboard. But as teachers and scientists we work with the blackboard on a daily basis and find a way to "tame" its "rage". Gudrun and Constanza share that they are working in fields of mathematics strongly intertwined with physics. While Gudrun is interested in mathematical fluid dynamics, Constanza's field is mathematical physics. Results in both fields very much rely on understanding the spectrum of linear (or linearized) operators. In the finite-dimensional case this means studying the eigenvalues of a matrix. They contain the essence of the action of the operator - represented by different matrices in differing coordinate systems. As women in academia and as female mathematicians, Gudrun and Constanza share the experience that finding the essence of their actions in science and defining the goals worth pursuing are tasks as challenging as pushing science itself, since most traditional coordinate systems were made by male colleagues and do not work in the same way for women as for men. This is true even when raising one's own children does not enter the equation. Because of that, Constanza started reaching out to women in her field to speak about their mathematical results as well as their experiences. Her idea was to share the main findings in her blog with an article and her drawings. When reaching out to a colleague she sends a document explaining the goal of the project and her questions in advance. Constanza prepares for the personal conversation by reading up on the mathematical results. But at the same time she is interested in questions like: how do you work, how do you come up with ideas, what do you do on a regular day, etc. The general theme of all conversations is that a regular day does not exist when working at university. It seems that the only recurring task is daily improvisation on any schedule made in advance. One has to learn how to live with the peculiar situation of being pushed to handle several important tasks at once at almost any moment, and needs techniques to find compromise and balance. An important question then is: how to stay productive and satisfied under these conditions, how to manage to stay in academia, and what personal meaning the word success then takes on. In order to distill the answers into a blog entry Constanza uses only a few quotes and sums up the conversation in a coherent text. Since she seeks out very interesting people, there is a lot of interesting material. Constanza focuses on the aspects that stay with her after a longer thought process. These ideas then mainly drive the blog article. Another part of the blog are two drawings: one portrait of the person and one which pictures the themes that were discussed and might not have made it into the text. Surprisingly it turned out to be hard to find partners to talk to, and the process of turning a conversation into a blog entry takes Constanza a year or longer. On the other hand, she feels very lucky that she found women who were very generous with their time and in sharing their experiences.
Besides the engagement and love for what they do, all the participants had this in common: they were already promoting the participation of women in science. To learn from them as a younger researcher means, for example, to see one's own impact on students and that building a community is very important, and a success in its own right. Though Constanza invests a lot of time in the blog project, it is worth the effort since it helps her to work towards a future either in or outside academia. Gudrun and Constanza found out that, though both of their projects explore mathematical themes as well as people working in mathematics, the written parts of blog and podcast differ: what makes it into the notes of Constanza's blog is, so to say, bonus material that in Gudrun's podcast is available only to the listening audience (since it never appears in the show notes). In that sense, Gudrun's podcast and Constanza's blog are complementary views on the life of researchers. Constanza did her undergraduate studies in La Serena in Chile. She started out with studying physics but soon switched to mathematics in order to understand the basics of physics. When she had almost finished her Master's program in La Serena she wanted to continue in science abroad. She was admitted to a French one-year Master's program at the University Paris 6 and later did her PhD at the nearby University of Cergy-Pontoise. After that she applied for a Marie Curie fellowship in order to continue her research in Germany. She spent time as a postdoc at the Mittag-Leffler Institute in Stockholm and at CAMTP in Maribor (Slovenia) before moving to the LMU Munich for two years with the fellowship. After that she got the position in Bonn and is now preparing for her next step. Gudrun and Constanza want to thank Tobias Ried who put them in contact. References and further reading
| |||
18 May 2017 | Convolution Quadrature | 00:30:32 | |
This is one of two conversations which Gudrun Thäter recorded alongside the conference Women in PDEs which took place at our department in Karlsruhe on 27-28 April 2017. Maria Lopez-Fernandez from the University La Sapienza in Rome was one of the seven invited speakers. She got her university degree at the University of Valladolid in Spain and worked as an academic researcher in Madrid and at the University of Zürich. Her field of research is numerical analysis and in particular the robust and efficient approximation of convolutions. The conversation is mainly focussed on its applications to wave scattering problems. The important questions for the numerical tools are: consistency, stability and convergence analysis. The methods proposed by Maria are convolution quadrature type methods for the time discretization, coupled with boundary integral methods for the spatial discretization. Convolution quadrature methods are based on the Laplace transform and numerical integration (a minimal sketch of the construction is given below). They were initially mostly developed for parabolic problems and are now adapted to serve in the context of (hyperbolic) wave equations. Convolution quadrature methods introduce artificial dissipation in the computation, which stabilizes the numerics. However, it would be physically more meaningful to work instead with schemes which conserve mass. She is mainly interested in
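To give a flavour of the convolution quadrature idea mentioned above, here is a minimal implementation of Lubich-style CQ weights based on the BDF2 multistep method, for a kernel given only through its Laplace transform K(s). This is a generic textbook construction, not the specific schemes from Maria's work; the test kernel, the step size and the contour radius are made-up choices:

```python
# Toy convolution quadrature (CQ): approximate (k * g)(t_n) = int_0^{t_n} k(t_n - s) g(s) ds
# using only the Laplace transform K(s) of the kernel k. The weights follow Lubich's
# construction with the BDF2 generating function, computed via an FFT on a small circle.
import numpy as np

def cq_weights(K, h, N, radius=1e-8):
    L = N + 1
    lam = radius ** (1.0 / L)                      # contour radius balancing aliasing and round-off
    zeta = lam * np.exp(-2j * np.pi * np.arange(L) / L)
    delta = 1.5 - 2.0 * zeta + 0.5 * zeta**2       # BDF2 generating function delta(zeta)
    vals = K(delta / h)
    weights = np.fft.ifft(vals) * lam ** (-np.arange(L))
    return weights.real                            # imaginary parts are round-off for real kernels

# Example: K(s) = 1/s**2 is the Laplace transform of k(t) = t.
# Convolving with g(t) = 1 should give int_0^t (t - s) ds = t**2 / 2.
h, N = 0.01, 200
t = h * np.arange(N + 1)
g = np.ones(N + 1)
w = cq_weights(lambda s: 1.0 / s**2, h, N)
conv = np.array([np.sum(w[:n + 1][::-1] * g[:n + 1]) for n in range(N + 1)])
print(np.max(np.abs(conv - t**2 / 2)))             # small discretization error expected
```

The same weight construction works unchanged if K(s) is, for instance, an operator-valued transfer function evaluated at a few complex frequencies, which is what makes the approach attractive for wave scattering.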
The motivational example for her talk was the observation of severe acoustic problems inside a new building at the University of Zürich. Any conversation in the atrium made a lot of noise, and if someone was speaking loudly it was hard for the others to understand. An improvement was provided by specialised engineers who installed absorbing panels. From the mathematical point of view this is a nice application of the modelling and numerics of wave scattering problems. Of course, it would make a lot of sense to simulate the acoustic situation for such spaces before building them - if stable and fast software for the distribution of acoustic pressure or the transport of signals were available. The mathematical challenges are high computational costs, high storage requirements and stability problems. Due to the nonlocal nature of the equations it is also really hard to parallelize the calculations in order to make them run faster. In addition, time-adaptive methods for these types of problems were missing completely in the mathematical literature. In creating them one has to control the numerical errors with the help of a priori and a posteriori estimates, which due to Maria's and others' work during the last years is in principle known now but still very complicated. Also, one easily runs into stability problems when changing the time step size. The acoustic pressure distribution for the new building in Zürich has been successfully simulated by co-workers in Zürich and Graz by using these results together with knowledge about the sound source, deriving heuristic measures from that in order to find a sequence of time steps which keeps the problem stable and adapts the computations effectively. There is a lot of hope to improve the performance of these tools by representing the required boundary element matrices by approximations with much sparser matrices. References
Podcasts
| |||
25 May 2017 | Cerebral Fluid Flow | 00:35:37 | |
This is one of two conversations which Gudrun Thäter recorded alongside the conference Women in PDEs which took place at our faculty in Karlsruhe on 27-28 April 2017. Marie Elisabeth Rognes was one of the seven invited speakers. Marie is Chief Research Scientist at the Norwegian research laboratory Simula near Oslo. She is Head of the Department for Biomedical Computing there. Marie got her university education with a focus on applied mathematics, mechanics and numerical physics as well as her PhD in applied mathematics at the Centre for Mathematics for Applications in the Department of Mathematics at the University of Oslo. Her work is devoted to providing robust methods to solve partial differential equations (PDEs) for diverse applications. On the one hand this means that from the mathematical side she works on numerical analysis, optimal control, robust finite element software as well as uncertainty quantification, while on the other hand she is very much interested in modelling with the help of PDEs and in particular in mathematical models of physiological processes. These models are useful to answer "what if" type questions much more easily than with the help of laboratory experiments. In our conversation we discussed one of the many applications - cerebral fluid flow, i.e. fluid flow in the context of the human brain. Medical doctors and biologists know that the soft matter cells of the human brain are filled with fluid. Also the space between the cells contains the water-like cerebrospinal fluid. It provides a bath for the human brain. The brain expands and contracts with each heartbeat and approximately 1 ml of fluid is interchanged between brain and spinal area. What the specialists do not know is: is there a circulation of fluid? This is especially interesting since there is no traditional lymphatic system to transport away the biological waste of the brain (this process is at work everywhere else in our body). So how does the brain get rid of its litter? There are several hypotheses:
The aim of Marie's work is to numerically test these (and other) hypotheses. Basic testing starts on very idealised geometries. For the overall picture one useful simplified geometry is the annulus, i.e. a region bounded by two concentric circles. For a micro-level view, a small cube can be chosen as geometry. As a material law, flow in a porous medium based on Darcy's law is the starting point - possibly taking into account the coupling with an elastic behaviour on the boundary (a minimal sketch of such a Darcy-type solve is given below). The difficult non-mathematical questions which have to be answered are:
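As mentioned above, here is a minimal sketch of a stationary Darcy-type pressure solve in legacy FEniCS (DOLFIN). The unit-square mesh is only a stand-in geometry, and the permeability, source term and boundary condition are invented placeholders; none of the coupling to elasticity or the real brain geometries of Marie's work is included:

```python
# Minimal Darcy-type porous-media solve with legacy FEniCS/DOLFIN:
# -div(K grad p) = f for the pressure p, then u = -K grad p for the Darcy velocity.
from dolfin import *

mesh = UnitSquareMesh(64, 64)                 # stand-in geometry (not an annulus or brain mesh)
V = FunctionSpace(mesh, "P", 1)

p, q = TrialFunction(V), TestFunction(V)
K = Constant(1e-2)                            # permeability, made-up value
f = Constant(1.0)                             # fluid source, made-up value

a = K * inner(grad(p), grad(q)) * dx
L = f * q * dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")

p_h = Function(V)
solve(a == L, p_h, bc)

# Postprocess the Darcy velocity u = -K grad p and write it for visualization.
W = VectorFunctionSpace(mesh, "P", 1)
u_h = project(-K * grad(p_h), W)
File("pressure.pvd") << p_h                   # view e.g. in ParaView
```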
In the near future she hopes to better understand the multiscale character of the processes. Here, especially for embedding 1d models into 3d geometry, there is almost no theory available. For the project Marie has been awarded a FRIPRO Young Research Talents Grant of the Research Council of Norway (3 years, starting April 2016) and the very prestigious ERC Starting Grant (5 years, starting 2017). References
| |||
12 Oct 2017 | Advanced Mathematics | 01:06:00 | |
Gudrun Thäter and Jonathan Rollin talk about their plans for the course Advanced Mathematics (taught in English) for mechanical engineers at the Karlsruhe Institute of Technology (KIT). The topics of their conversation are relevant to the mathematical education of engineers in general (though the structure of courses differs between universities). They discuss
For students starting an engineering study course it is clear that a mathematical education will be an important part. Nevertheless, most students are not aware that their experiences with mathematics at school will not match well with the mathematics at university. This is true in many ways. Mathematics is much more than calculations. As the mathematical models become more involved, more theoretical knowledge is needed in order to learn how and why the calculations work. In particular the connections among basic ideas become more and more important in order to see why certain rules are valid. Very often this knowledge also is essential since the rules need to be adapted for different settings. In their everyday work, engineers combine the use of well-established procedures with the ability to come up with solutions to yet unsolved problems. In our mathematics education, we try to support these skills by training certain calculations with the aim that they become routine for the future engineers. But we also show how mathematicians came up with these ideas and how they are applied again and again at different levels of abstraction. This should help the students to become creative in their engineering career. Moreover, seeing how the calculation procedures are derived often helps to remember them. So it makes a lot of sense to learn about the proofs behind calculations, even if we usually do not ask to repeat proofs during the written exam at the end of the semester. The course is structured as 2 lectures, 1 problem class and 1 tutorial per week. Moreover there is a homework sheet every week. All of them play their own role in helping students to make progress in mathematics. The lecture is the place to see new material and to learn about examples, connections and motivations. In this course there are lecture notes which cover most topics of the lecture (and on top of that there are a lot of books out there!). So the lecture is the place where students follow the main ideas and take these ideas to work with the written notes of the lecture later on. The theory taught in the lecture becomes more alive in the problem classes and tutorials. In the problem classes students see how the theory is applied to solve problems and exercises. But most importantly, students must solve problems on their own, with the help of the material from the lecture. Only in this way do they learn how to use the theory. Very often the problems seem quite hard in the sense that it is not clear how to start or proceed. This is due to the fact that students are still learning to translate the information from the lecture into a net of knowledge they build for themselves. In the tutorial, the tutor and the fellow students work together to find the first steps towards solving the homework problems. Gudrun and Jonathan love mathematics. But from their own experience they can understand why some of the students fear mathematics and expect it to be too difficult to master. They have the following tips:
In the lecture course, students see the basic concepts of different mathematical fields. Namely, it covers calculus, linear algebra, numerics and stochastics. Results from all these fields will help them as engineers to calculate as well as to invent. There is no standard or best way to organize the topics, since there is a network of connections between results and a lot of different ways to end up with models and calculation procedures. In the course in Karlsruhe, in the first semester we mainly focus on calculus and touch the following subjects:
All of these topics have applications and typical problems which will be trained in the problem class. But moreover they are stepping stones towards mastering more and more complex problems. This already becomes clear during the first semester, and even more so towards the end of the course. Literature and related information
Podcasts
| |||
16 Nov 2017 | Weather Generator | 00:38:24 | |
Gudrun is speaking with the Portuguese engineer Bruno Pousinho. He has been a student of the Energy Technologies (ENTECH) Master's program. This is an international and interdisciplinary program under the label of the European Institute of Innovation and Technology (EIT), run jointly by a number of European technical universities. Bruno spent his second Master's year at the Karlsruhe Institute of Technology (KIT). Gudrun had the role of his supervisor at KIT while he worked on his Master's thesis at the Chair of Renewable and Sustainable Energy Systems (ENS) at TUM in Garching. His direct contact person there was Franz Christange from the group of Prof. Thomas Hamacher. Renewable energy systems are a growing part of the energy mix. In Germany, installed renewable capacity grew from 4,168 MW to 104,024 MW between 1990 and 2016. This corresponds to an annual power consumption share of 3.4% and 31.7%, respectively. But in the related research this means a crucial shift. The conventional centralized, synchronous-machine-dominated models have to be exchanged for decentralized, power-electronics-dominated networks - so-called microgrids. This needs collaboration of mechanical and electrical engineers. One additional factor is that for most renewable energy systems it is necessary to have the right weather conditions. Moreover, there is always the problem of reliability. Especially for photovoltaics (PV) and wind turbines, weather phenomena such as solar irradiation, air temperature and wind speed have to be known in advance in order to plan for these types of systems. There are two fundamentally different approaches to model weather data: firstly the numerical weather and climate models, which provide the weather forecast for the next days and years; secondly, so-called weather generators. The numerical models are very complex and have to run on the largest computer systems available. Therefore, in order to have a simple enough model for planning renewable energy resources (RER) at a certain place, weather generators are used. They produce synthetic weather data on the basis of the weather conditions in the past. They do not predict or forecast the values of a specific weather phenomenon for a specific time but provide random simulations whose outputs show the same or very similar distributional properties as the measured weather data of the past (a toy example of this idea is sketched below). The group in Garching wanted to have a time-dynamic analytical model. The model is time continuous, which grants it the ability to use any time sampling interval. This means they wanted to have a system of equations for the generation of synthetic weather data with as few parameters as possible. When Bruno started his work, there existed a model for Garching (developed by Franz Christange) with about 60 parameters. The aim of Bruno's work was to reduce the number of parameters and to show that the general concept can be used worldwide, i.e. that it can adapt to different weather data in different climate zones. In the thesis the tested points range from 33º South to 40º North. The results of the project are published in the open-source model 'solfons' on GitHub; it was developed in MATLAB and uses Python. References
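As announced above, here is a toy illustration of what a weather generator does: synthetic daily temperatures built from a seasonal mean plus a mean-reverting random fluctuation. This is my own minimal sketch with invented parameters, not the 'solfons' model:

```python
# Toy weather generator: synthetic daily air temperature as a seasonal mean plus
# an AR(1) (discretized Ornstein-Uhlenbeck) fluctuation. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(3 * 365)

mean_temp = 10.0 + 9.0 * np.sin(2 * np.pi * (days - 100) / 365.0)   # seasonal cycle [deg C]
phi, sigma = 0.8, 2.5                                               # persistence and noise level

anomaly = np.zeros(days.size)
for k in range(1, days.size):
    # AR(1): today's anomaly remembers a fraction of yesterday's, plus fresh noise
    anomaly[k] = phi * anomaly[k - 1] + sigma * rng.standard_normal()

synthetic_temperature = mean_temp + anomaly
print(synthetic_temperature[:10].round(1))
```

The generated series is not a forecast for any particular day; it only reproduces plausible statistical behaviour, which is exactly the point of a weather generator.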
Podcasts
| |||
01 Feb 2018 | Turbulence | 00:30:11 | |
Martina Hofmanová has been working as a professor at the University of Bielefeld since October 2017. Previously, she was a junior professor at TU Berlin from February 2016 onwards, and before that an assistant lecturer there. She studied at the Charles University in Prague, and got her PhD in 2013 at the École normale supérieure de Cachan in Rennes. Her time in Germany started in 2013 when she moved to the Max Planck Institute for Mathematics in the Sciences in Leipzig as a postdoc. A general procedure in physics and science is to replace expensive time averages by ensemble averages, which can be calculated together on a parallel computer. The reason why this often works is the so-called ergodic hypothesis (illustrated with a simple toy computation below). To justify this from the mathematical side, the main problem is to find the right measure in the ensemble average. Already with a simple toy model problem one sees that the justification of using ensemble averages is connected to the well-posedness of the problem. In general, this is not a priori known. The focus of Martina's work is to establish the existence of steady solutions for the compressible flow system, including stochastic forces, with periodic boundary conditions (i.e. on the torus). At the moment, we know that there are global weak solutions but only local (in time) strong solutions. It turned out that the right setting to study the problem are so-called dissipative martingale solutions. Unfortunately, in this setting, the velocity is not smooth enough to be a stochastic process. But the energy inequality can be proved. The proof rests on introducing artificial dissipation in the mass conservation, and a small term with higher-order regularity for the density. Then, the velocity is approximated through a Faedo-Galerkin approximation and a lot of independent limiting processes can be carried out successfully. The project is a collaboration with Dominic Breit and Eduard Feireisl. References
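The toy computation promised above: for an ergodic process such as the Ornstein-Uhlenbeck process, the long-time average along a single trajectory and the ensemble average over many independent trajectories approach the same value. This sketch and its parameters are my own illustration of the ergodic hypothesis, not Martina's compressible-flow setting:

```python
# Time average along one long trajectory vs. ensemble average over many short ones,
# for the ergodic Ornstein-Uhlenbeck process dX = -X dt + dW (Euler-Maruyama scheme).
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, n_paths = 0.01, 200_000, 5_000

# One long trajectory -> time average of X^2 (stationary second moment is 1/2)
x, time_avg = 0.0, 0.0
for _ in range(n_steps):
    x += -x * dt + np.sqrt(dt) * rng.standard_normal()
    time_avg += x * x
time_avg /= n_steps

# Many trajectories run to a fixed (large) time -> ensemble average of X^2 at that time
X = np.zeros(n_paths)
for _ in range(2_000):
    X += -X * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
ens_avg = np.mean(X * X)

print(time_avg, ens_avg)   # both should be close to the stationary value 0.5
```

The ensemble loop is trivially parallel over the paths, which is why replacing time averages by ensemble averages is attractive on parallel computers, provided the ergodic hypothesis can be justified.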
Podcasts
| |||
29 Mar 2018 | Embryonic Patterns | 00:51:30 | |
In March 2018 Gudrun visited University College London and recorded three conversations with mathematicians working there. Her first conversation partner was Karen Page. She works in mathematical biology and is interested in mathematical models for pattern formation. An example would be the question why (and how) a human embryo develops five fingers on each hand. The basic information for that is coded into the DNA, but how the pattern develops over time is a very complicated process which we understand only partly. Another example is the patterning of neurons within the vertebrate nervous system. The neurons are specified by levels of proteins. Binding of other proteins at the enhancer region of DNA decides whether a gene produces protein or not. This type of work needs a strong collaboration with biologists who observe certain behaviours and do experiments. Ideally they are interested in the mathematical tools as well. One focus of Karen's work is the development of the nervous system in its embryonic form as the neural tube. She models it with the help of dynamical systems. At the moment they contain three ordinary differential equations for the temporal changes in the levels of three proteins. Since they influence each other, the system is coupled (a generic toy system of this type is sketched below). Moreover, a fourth protein enters the system as an external parameter. It is called sonic hedgehog (Shh). It plays a key role in regulating the growth of digits on limbs and the organization of the brain. It has different effects on the cells of the developing embryo depending on its concentration. Concerning the mathematical theory, the Poincaré-Bendixson theorem completely characterizes the long-time behaviour of two-dimensional dynamical systems. Working with three equations, there is room for more interesting long-term scenarios. For example it is possible to observe chaotic behaviour. Karen was introduced to questions of mathematical biology when starting to work on her DPhil. Her topic was Turing patterns. These are possible solutions to systems of partial differential equations describing systems out of thermodynamic equilibrium. They develop from random perturbations about a homogeneous state, with the help of an input of energy. Prof. Page studied mathematics and physics in Cambridge and did her DPhil in Oxford in 1999. After that she spent two years at the Institute for Advanced Study in Princeton and has been working at UCL since 2001. References
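The generic toy system referred to above could look as follows: three proteins repress each other and an external signal S stands in for a morphogen such as Shh. The functional forms and parameters are invented for illustration only; this is not Karen Page's published model:

```python
# Toy gene-regulatory dynamical system: three proteins x, y, z repress each other,
# and an external signal S (a stand-in for a morphogen such as Shh) drives x.
# All functional forms and parameters are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

def hill_repression(p, K=1.0, n=2):
    return 1.0 / (1.0 + (p / K) ** n)

def rhs(t, u, S):
    x, y, z = u
    dx = S * hill_repression(z) - x        # x is activated by S, repressed by z
    dy = hill_repression(x) - y            # y is repressed by x
    dz = hill_repression(y) - z            # z is repressed by y
    return [dx, dy, dz]

S = 2.0                                    # external parameter (signal level)
sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.2, 0.3], args=(S,))
print(sol.y[:, -1])                        # protein levels approached at the final time
```

Varying the external parameter S and observing which steady states or oscillations the system settles into mimics, in a very crude way, how different morphogen concentrations select different cell fates.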
Podcasts
| |||
05 Apr 2018 | Singular Pertubation | 00:21:57 | |
Gudrun had two podcast conversations at the FEniCS18 workshop in Oxford (21-23 March 2018). FEniCS is an open source computing platform for solving partial differential equations with finite element methods. This is the first of the two episodes from Oxford in 2018. Roisin Hill works at the National University of Ireland in Galway on the west coast of Ireland. The university has 19,000 students and 2,000 staff. Roisin is a PhD student in Numerical Analysis at the School of Mathematics, Statistics & Applied Mathematics. Gudrun met her at her poster about Balanced norms and mesh generation for singularly perturbed reaction-diffusion problems. This is a collaboration with Niall Madden, who is her supervisor in Galway. The name of the poster refers to three topics which are interlinked in their research. Firstly, water flow is modelled as a singularly perturbed equation in a one-dimensional channel. Due to the fact that the fluid does not move at the boundary, there has to be a boundary layer in which the flow properties change. This change might occur very rapidly. So, the second topic is that depending on the boundary layer the problem is singularly perturbed and in the limit it is even ill-posed. When solving this equation numerically, it would be best to have a fine mesh at places where the error is large. Roisin uses a posteriori information to see where the largest errors occur and changes the mesh accordingly. To choose the best norm for errors is the third topic in the mix and strongly depends on the type of singularity. More precisely, as their prototypical test case they look for u(x) as the numerical solution of the reaction-diffusion problem -ε² u''(x) + b(x) u(x) = f(x) on (0,1) with u(0) = u(1) = 0, for given functions b(x) and f(x). It is singularly perturbed in the sense that the positive real parameter ε may be arbitrarily small. If we formally set ε = 0, then it is ill-posed. The numerical schemes of choice are finite element methods - implemented in FEniCS with linear and quadratic elements (a minimal FEniCS sketch of this model problem is given below). The numerical solution and its generalisations to higher-dimensional problems, and to the closely related convection-diffusion problem, present numerous mathematical and computational challenges, particularly as ε → 0. The development of algorithms for robust solution is the subject of intense mathematical investigation. Here “robust” means two things:
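As announced, here is a minimal legacy FEniCS (DOLFIN) sketch of the prototypical 1-D reaction-diffusion problem stated above. It uses a uniform mesh and invented constant data b and f, so it deliberately ignores the layer-adapted meshes and a posteriori strategies that are the actual subject of Roisin's work:

```python
# 1-D singularly perturbed reaction-diffusion problem -eps^2 u'' + b u = f on (0,1),
# u(0) = u(1) = 0, discretized with linear finite elements in legacy FEniCS/DOLFIN.
# Uniform mesh and constant data are placeholders for illustration only.
from dolfin import *

eps = 1e-3                                  # perturbation parameter (made-up value)
mesh = UnitIntervalMesh(400)                # uniform mesh; layer-adapted meshes are the real topic
V = FunctionSpace(mesh, "P", 1)

u, v = TrialFunction(V), TestFunction(V)
b = Constant(1.0)                           # reaction coefficient b(x), made-up
f = Constant(1.0)                           # right-hand side f(x), made-up

a = eps**2 * inner(grad(u), grad(v)) * dx + b * u * v * dx
L = f * v * dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")

u_h = Function(V)
solve(a == L, u_h, bc)

# Energy norm of the discrete solution; the boundary layers contribute only weakly to it,
# which is the motivation for the balanced norm discussed in the episode.
energy_norm = assemble(eps**2 * inner(grad(u_h), grad(u_h)) * dx + u_h**2 * dx) ** 0.5
print(energy_norm)
```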
In order to measure the error, the energy norm sounds like a good basis - but as ε → 0 the contribution of the boundary layers to the energy norm vanishes, so errors in the layers are hardly visible in this norm. They were looking for an alternative, which they found in the literature as the so-called balanced norm: there the layer contribution remains O(1) as ε → 0. Therefore, it turns out that the balanced norm is indeed a better basis for error measurement. After she finished school, Roisin became an accountant. She believed what she was told: if you are good at mathematics, accountancy is the right career. Later her daughter became ill and had to be partially schooled at home. This was the moment when Roisin first encountered applied mathematics and fell in love with the topic. Inspired by her daughter - who did a degree in general science specialising in applied mathematics - Roisin studied mathematics and is a PhD student now (since September 2017). Her enthusiasm has created impressive results: she won a prestigious Postgraduate Scholarship from the Irish Research Council for her four-year PhD program. References
Podcasts
| |||
24 May 2018 | Automatic Differentiation | 00:34:57 | |
Gudrun talks with Asher Zarth. He finished his Master's thesis in the Lattice Boltzmann Research Group at the Karlsruhe Institute of Technology (KIT) in April 2018. Lattice Boltzmann methods (LBM) are an established method of computational fluid dynamics. Also, the solution of temperature-dependent problems - modeled by the Boussinesq approximation - with LBM has been done for some time. Moreover, LBM have been used to solve optimization problems, including parameter identification, shape optimization and topology optimization. Usual optimization approaches for partial differential equations rely strongly on the corresponding adjoint problem, especially since this method also provides the sensitivities of quantities in the optimization process, which is very helpful. But it is also very hard to find the adjoint problem for each new problem; this needs a lot of experience and deep mathematical understanding. For that reason, Asher uses automatic differentiation (AD) instead, which is very flexible and user friendly. His algorithm uses an extension of LBM to porous media models as part of the shape optimization framework. The main idea of that framework is to use the permeability as a geometric design parameter instead of a rigid object which changes its shape in the iterative process. The optimization itself is carried out with line search methods, whereby the sensitivities are calculated by AD instead of using the adjoint problem. The method benefits from a straightforward and extensible implementation, as the use of AD provides a way to obtain accurate derivatives with little knowledge of the mathematical formulation of the problem (a toy illustration of forward-mode AD with dual numbers is given below). Furthermore, the simplicity of the AD system allows optimization to be easily integrated into existing simulations - for example in the software package OpenLB, which Asher used in his thesis. One example to test the algorithm is the shape of an object under Stokes flow such that the drag becomes minimal. It is known that the optimal shape looks like an American football. The new algorithm converges quickly to that shape. References
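The toy illustration mentioned above: a tiny forward-mode automatic differentiation based on dual numbers. It is a generic textbook sketch of how AD obtains exact derivatives without deriving an adjoint problem, not the AD machinery used in OpenLB or in Asher's thesis:

```python
# Minimal forward-mode automatic differentiation with dual numbers.
# A dual number (val, dot) carries a value and its derivative; arithmetic
# propagates both simultaneously, giving derivatives up to machine precision.
import math

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __sub__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val - other.val, self.dot - other.dot)
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)         # chain rule for sin

# Example objective J(k) = k * sin(k) - k * k, differentiated at k = 1.3
k = Dual(1.3, 1.0)               # seed the derivative of the input with 1
J = k * sin(k) - k * k
print(J.val, J.dot)              # value and exact derivative sin(k) + k*cos(k) - 2*k
```

Overloading the arithmetic of the simulation code in this way is what makes AD attractive: the derivative propagation happens automatically while the original computation runs.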
| |||
28 Jun 2018 | Algebraic Geometry | 00:51:28 | |
Gudrun spent an afternoon at the Max Planck Institute for Mathematics in the Sciences (MPI MiS) in Leipzig. There she met the Colombian mathematician Eliana Maria Duarte Gelvez. Eliana is a postdoc at the MPI MiS in the Research Group in Nonlinear Algebra. Its head is Bernd Sturmfels. They started the conversation with the question: what is algebraic geometry? It is a generalisation of what one learns in linear algebra insofar as it studies properties of polynomials such as their roots. But it considers systems of polynomial equations in several variables, so-called multivariate polynomials. There are diverse applications in engineering, biology, statistics and topological data analysis. Among them, Eliana is mostly interested in questions from computer graphics and statistics. In any animated movie or computer game all objects have to be represented by the computer. Often the surface of the geometric objects is parametrized by polynomials. The image of the parametrization can also be defined by an equation. For calculating interactions it can be necessary to know the corresponding equation in the three usual space variables. One example, which comes up in school and in the introductory courses at university, is the circle. Its representation in different coordinate systems or as a parametrized curve lends itself to interesting problems for the students to solve. Even more interesting and often difficult to answer is the seemingly simple question about the intersection curve of surfaces in the computer representation if these are parametrized objects. Moreover, real-time graphics for computer games need fast and reliable algorithms for that question. Specialists in computer graphics experience that not all curves and surfaces can be parametrized. It was a puzzling question until they talked to people working in algebraic geometry, who knew that the genus of the curve tells you whether a parametrization is possible or not. For the practical work symbolic algebra packages help. They are based on the concept of the Gröbner basis. Gröbner bases help to translate between representations of surfaces and curves as parametrized objects and as solution sets of implicit equations (a small implicitization example is given below). Nevertheless, very long polynomials with many terms (like 500) are often the result and not so straightforward to analyse. A second research topic of Eliana is algebraic statistics. It is a very recent field and evolved only in the last 20-30 years. In the typical problems one studies discrete or polynomial equations using symbolic computations, with combinatorics on top. Often numerical algebraic tools are necessary. It is algebraic in the sense that many popular statistical models are parametrized by polynomials. The points in the image of the parametrization are the probability distributions in the statistical model. The interest of the research is to study properties of statistical models using algebraic geometry, for instance to describe the implicit equations of the model. Eliana already liked mathematics at school but was not always very good at it. When she decided to take a Bachelor's course in mathematics she liked the very friendly environment at her faculty at the Universidad de los Andes, Bogotá. She was introduced to her research field through a course in combinatorial commutative algebra there.
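The circle example above can be made concrete with a small computation (a generic illustration using SymPy's Gröbner basis routine; this is my sketch, not code from the conversation). Starting from the rational parametrization x = (1 - t²)/(1 + t²), y = 2t/(1 + t²), eliminating the parameter t recovers the implicit equation x² + y² - 1 = 0:

```python
# Implicitization of the rationally parametrized unit circle via a Groebner basis.
# Clearing denominators gives polynomial relations; a lexicographic Groebner basis
# with t ordered first contains a polynomial free of t: the implicit equation.
from sympy import symbols, groebner

t, x, y = symbols('t x y')
relations = [x * (1 + t**2) - (1 - t**2),   # x = (1 - t^2)/(1 + t^2), denominator cleared
             y * (1 + t**2) - 2 * t]        # y = 2t/(1 + t^2),       denominator cleared

G = groebner(relations, t, x, y, order='lex')
implicit = [g for g in G.exprs if t not in g.free_symbols]
print(implicit)                              # expect [x**2 + y**2 - 1] (up to a constant factor)
```

The same elimination idea, scaled up to many variables, is what produces the very long polynomials with hundreds of terms mentioned in the conversation.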
She was encouraged to apply for a Master's program in the US and to work on elliptic curves at Binghamton University (State University of New York). After her Master's in 2011 she stayed in the US to better understand syzygies within her work on a PhD at the University of Illinois at Urbana-Champaign. Since 2018 she has been a postdoc at the MPI MiS in Leipzig and likes the very applied focus, especially on algebraic statistics. In her experience, mathematics is a good topic to work on in different places, and it is important to have role models in your field. References
Podcasts
| |||
12 Jul 2018 | Dynamical Sampling | 00:33:23 | |
Gudrun met the USA-based mathematician Roza Aceska from Macedonia in Turin at the Conference MicroLocal and Time-Frequency Analysis 2018.
The topic of the recorded conversation is dynamical sampling. The situation which Roza and other mathematicians study is the following: there is a process which develops over time and which in principle is well understood. In mathematical terms this means we know the equation which governs our model of the process or, in other words, we know the family of evolution operators. Often this is a partial differential equation which accounts for changes in time and in 1, 2 or 3 spatial variables. This means that, if we know the initial situation (i.e. the initial conditions in mathematical terms), we can numerically calculate good approximations for the states the process will take at all places and at all times in the future. But in general, when observing a process, life is not that tidy. Instead we might know the principal equation, but only through (maybe only a few) measurements can we find information about the initial condition or material constants for the process. This leads to two questions: How many measurements are necessary in order to obtain the full information (i.e. to have exact knowledge)? Are there possibilities to choose the time and the spatial situation of a measurement so cleverly as to gain as much new information as possible from each measurement? These are mathematical questions which are answered through studying the equations. The science of sampling started in the 1940s with Claude Shannon, who found fundamental limits of signal processing. He developed a precise framework - the so-called information theory. Sampling and reconstruction theory is important because it serves as a bridge between the modern digital world and the analog world of continuous functions. It is surprising to see how many applications rely on taking samples in order to understand processes. A few examples in our everyday life are: audio signal processing (electrical signals representing sound of speech or music), image processing, and wireless communication. But also seismology or genomics can only develop models by taking very intelligent sample measurements, or, in other words, by making the most scientific sense out of available measurements. The new development in dynamical sampling is that, by following a process over time, it might be possible to find good options to gain valuable information about the process at different time instances as well as different spatial locations. In practice, increasing the number of spatially used sensors is more expensive (or even impossible) than increasing the temporal sampling density. These issues are overcome by a spatio-temporal sampling framework in evolution processes. The idea is to use a reduced number of sensors, with each being activated more frequently (a toy linear-algebra version of this recovery problem is sketched below). Roza refers to a paper by Enrique Zuazua in which he and his co-author study the heat equation and construct a series of later-time measurements at a single location throughout the underlying process. The heat equation is prototypical and one can use similar ideas in a more general setting. This is one topic on which Roza and her co-workers succeeded and want to proceed further. After Roza graduated with a Ph.D. in Mathematics at the University of Vienna she worked as Assistant Professor at the University Ss Cyril and Methodius in Skopje (Macedonia), and after that at Vanderbilt University in Nashville (Tennessee). Nowadays she is a faculty member of Ball State University in Muncie (Indiana). References
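The toy version referred to above: a few fixed sensors record the state of a known discrete evolution at several time steps, and the initial state is recovered from the stacked measurements. The evolution operator, the sensor locations and all sizes are invented; whether recovery succeeds depends on the spectrum of the operator and on the sensor placement, which is exactly the kind of question dynamical sampling studies (this is my own sketch, not the setting of the Zuazua paper):

```python
# Toy dynamical sampling: recover an initial state f from repeated-in-time measurements
# taken at a few fixed sensor locations, for a known discrete evolution operator A.
import numpy as np

rng = np.random.default_rng(2)
N = 30
# A simple smoothing/diffusion step on a ring (each node averages with its neighbours).
A = 0.5 * np.eye(N) + 0.25 * (np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1))

f = rng.standard_normal(N)                 # unknown initial state
sensors = [0, 7, 16]                       # three fixed sensor locations (arbitrary choice)
n_times = 20                               # number of time instances sampled per sensor

rows, data = [], []
state_map = np.eye(N)                      # equals A^k inside the loop below
for k in range(n_times):
    for i in sensors:
        rows.append(state_map[i, :])       # the measurement is the i-th entry of A^k f
        data.append(state_map[i, :] @ f)
    state_map = A @ state_map

M = np.array(rows)
f_rec, *_ = np.linalg.lstsq(M, np.array(data), rcond=None)
print(np.linalg.matrix_rank(M), np.linalg.norm(f - f_rec))
# Recovery works here because the stacked rows reach full rank N; with badly placed
# sensors or an unfavourable spectrum of A this can fail or become ill-conditioned.
```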
Related Podcasts
| |||
02 Aug 2018 | Mechanical Engineering | 00:53:29 | |
In the last two semesters Gudrun has taught the courses Advanced Mathematics I and II for Mechanical Engineers. This is a mandatory lecture for the international mechanical engineering students at KIT in their first year of the Bachelor's program. This program is organized by the Carl Benz School of Engineering. Besides the study courses, the school also provides common housing for students coming to Karlsruhe from all over the world. The general structure and topics of the first year in Advanced Mathematics were already discussed in our episode 146 Advanced Mathematics with Jonathan Rollin. This time Gudrun invited two students from her course to get the students' perspective, talking about mathematics, life, and everything. The second student in the conversation is Siddhant Dhanrajani. His family is Indian but lives in Dubai. Hence he got his education in Dubai in an Indian community following the Indian educational system (CBSE). He had never heard of the engineering program in Karlsruhe but found it through thorough research. He is really amazed that such an excellent study program and such an excellent university as KIT are not better known in the world for their value. In the conversation both students talk about their education in their respective countries, their hopes and plans for the study course mechanical engineering, and their experiences in the first year here in Karlsruhe. It is very interesting to see how the different ways to teach mathematics - either as a toolbox full of recipes (which the students get well trained in) or as a way to approach problems within a broader mathematical education - contribute to the experience of being well equipped to work creatively and with a lot of potential as an engineer. Though the students have only finished the first year of a three-year course, they already work towards applications and necessary certificates for their possible Master's program after finishing the course in Karlsruhe. Related Podcasts
| |||
11 Oct 2018 | SimScale | 00:36:44 | |
Gudrun talks to Jousef Murad about the computing platform SimScale. Jousef is currently studying mechanical engineering at the Karlsruhe Institute of Technology (KIT) and focuses on turbulence modelling and computational mechanics in his Master's studies. He first learned about the existence of SimScale early in the year 2015 and started as a FEA (finite element analysis) simulation assistant in November 2016. Meanwhile he switched to Community Management and now is Community and Academic Program Manager at the company, being responsible for user requests and Formula Student teams all over the world. Formula Student is the name of design competitions for teams of students constructing racing cars. SimScale is a cloud-based platform that gives instant access to computational fluid dynamics (CFD) and finite element analysis (FEA) simulation technology, helping engineers and designers to easily test performance, optimize durability or improve efficiency of their design. SimScale is accessible from a standard web browser and from any computer, eliminating the hurdles that accompany traditional simulation tools: high installation costs, licensing fees, deployment of high-performance computing hardware, and required updates and maintenance. Via the platform, several state-of-the-art open solvers are made available, like, e.g., OpenFOAM and meshing with snappyHexMesh. More information about the packages being used can be found at https://www.simscale.com/open-source/ . On top of having easier access to open source software, the connected user forum is very active and helps everybody to enter the field, even as a person without experience. Founded in 2012 in Munich (Germany), nowadays SimScale is an integral part of the design validation process for many companies worldwide and individual users. It is mainly used by product designers and engineers working in Architecture, Engineering & Construction or Heating, Ventilation & Air-Conditioning. Also in the Electronics, Consumer Goods and Packaging and Containers industries SimScale is useful for testing and optimizing designs in the early development stages. SimScale offers pricing plans that can be customized, from independent professionals to SMEs and multinational companies. The Community plan makes it possible to use SimScale for free, with 3000 core hours/year using up to 16 cloud computing cores. References
Related Podcasts
| |||
19 Oct 2018 | Electric Vehicles on the Grid | 00:49:03 | |
Gudrun talks to Zaheer Ahamed about the influence of an increasing number of electric vehicles (EVs) on the electrical grid. Zaheer just finished the ENTECH Master's program. He started it with his first year at the Karlsruhe Institute of Technology (KIT) and continued at Uppsala University for the second year.
References
Podcasts
| |||
09 Nov 2018 | Micro Grids | 00:31:56 | |
Gudrun talks with the Scottish engineer Claire Harvey. After having finished a Master's degree in Product Design Engineering at the University of Glasgow, Claire has been a student of the Energy Technologies (ENTECH) Master's program for the last two years. This is an international and interdisciplinary program under the label of the European Institute of Innovation and Technology (EIT), run jointly by a number of European technical universities. She spent her first year in Lisbon at Instituto Superior Técnico (IST) and the second Master's year at the Karlsruhe Institute of Technology (KIT). Gudrun had the role of her supervisor at KIT while she worked on her Master's thesis at the EUREF Campus in Berlin for the start-up inno2grid. Her study courses prepared her for very diverse work in the sector of renewable energy. Her decision to work with inno2grid in Berlin was based on the fact that it would help to pave the way towards better solutions for planning micro grids and sustainable districts. Also, she wanted to see an actual micro grid at work. The office building of Schneider Electric, where the start-up inno2grid has its rooms, is an experiment delivering data of energy production and consumption while being a usual office building. We will hear more about that in the episode with Carlos Mauricio Rojas La Rotta soon. Micro grids are small-scale electrical grid systems where self-sufficient supply is achieved. Therefore, the integration of micro grid design within district planning processes should be developed efficiently. In the planning process of districts with decentralised energy systems, unique and customised design of micro grids is usually required to meet local technical, economical and environmental needs. From a technical standpoint, a detailed understanding of factors such as load use, generation potential and site constraints is needed to correctly and most efficiently design and implement the network. The presence of many different actors and stakeholders contributes to the complexity of the planning process, where varying levels of technical experience and disparate methods of working across teams are commonplace. Large quantities of digital information are required across the whole life-cycle of a planning project, not just to do with energetic planning but also for asset management and monitoring after a micro grid has been implemented. In the design of micro grids, large amounts of data must be gathered, there are initial optimization objectives to be met, and simulating control strategies of a district which are adapted to customer requirements is a critical step. Linking these processes - being able to assemble data as well as to communicate the results and interactions of different "layers" of a project to stakeholders - are challenges that arise as more cross-sector projects are carried out, with the growing interest in smart grid implementation. Claire's thesis explores tools to assist the planning process for micro grids on the district scale. Using geographical information system (GIS) software, results relating to the energetic planning of a district are linked to geo-referenced data. Layers related to energy planning are implemented - calculating useful parameters and connecting to a database where different stakeholders within a project can contribute. Resource potential, electrical/thermal demand and supply system dimensioning can be calculated, which is beneficial for clients and decision makers to visualize digital information related to a project.
Within the open source program QGIS, spatial analysis and optimizations relating to the design of an energy system are performed. As the time dimension is a key part in the planning of the energy supply system of a micro grid, the data is linked to a Python simulation environment where dynamic analysis can be performed, and the results are fed back into the QGIS project. References
Podcasts
| |||
06 Dec 2018 | Inno2Grid | 00:35:44 | |
Gudrun talks to Carlos Mauricio Rojas La Rotta. They use a Skype connection since Carlos is in Berlin and Gudrun in Karlsruhe. Carlos is an electrical engineer from Colombia. His first degree is from the Pontificia Universidad Javeriana in Bogotá. For five years now he has been working at Schneider Electric in Berlin. In September 2018 Gudrun met Carlos at the EUREF-Campus in Berlin to discuss the work of Claire Harvey on her Master's thesis. The schedule on that day was very full, but Gudrun and Carlos decided to have a podcast conversation later. Carlos came to Germany as a car enthusiast. Then he got excited about the possibilities of photovoltaic energy production. For that, from 2005 to 2007 he studied at the Carl von Ossietzky Universität in Oldenburg in the PPRE Master's course Renewable Energies. When he graduated within a group of about 20 Master's students, they found a world ready for their knowledge. Carlos worked on various topics and in different parts of Germany in the field of renewable energies. Now, at Schneider, he has the unique situation that he can combine all his interests. He develops the most modern cars, which run on renewable energy. In the course of his work he is also back at his original love: working with electronics, protocols and data. The work on the EUREF-Campus in Berlin started about 8-10 years ago with more questions than clear ideas. Schneider Electric is a big company with about 150,000 employees all over the world. They deal in all types of software and hardware devices for energy delivery. But the topic for Berlin was completely new: it was a test case for how to construct energy-sustainable districts. They started out investing in e-mobility with renewable energy and making their own offices a smart building. It is a source of a lot of data telling the story of how energy is produced and consumed. At the moment they collect 1 GB of data per day in the office building from about 12,000 measurement points into a database and use this as a benchmark to compare against other scenarios. The next step now is also to find ways to optimize these processes with limited computational possibilities. This is done with open source code on their own interface, and at the moment it can optimize within the micro smart grid on the campus. For example, with 40 charging points for e-cars, consumption is planned according to the production of energy. On campus, traditional batteries are used to buffer the energy, and a bus now operates on the campus which can be discharged and charged without a cable! One can say: Carlos is working in a big experiment. This covers not only a lot of new technical solutions. The Energiewende is more than rolling out photovoltaic and wind power. We as a society have to change and plan differently - especially concerning mobility. Schneider Electric just started an expansion phase to the whole campus, which has a size of 5.5 ha with 2,500 people working there. More than 100 charging points for e-cars will be available very soon. Podcasts
| |||
21 Dec 2018 | Energy Markets | 01:05:00 | |
Gudrun talks to Sema Coşkun, who at the moment of the conversation in 2018 is a postdoc researcher at the University of Kaiserslautern in the financial mathematics group. She constructs models for the behaviour of energy markets. In short, the conversation covers the questions
The seminal work of Black and Scholes (1973) established modern financial theory. In a Black-Scholes setting, it is assumed that the stock price follows a Geometric Brownian Motion with a constant drift and constant volatility. The stochastic differential equation for the stock price process has an explicit solution. Therefore, it is possible to obtain the price of a European call option in a closed-form formula. Nevertheless, there exist drawbacks of the Black-Scholes assumptions. The most criticized aspect is the constant volatility assumption. It is considered an oversimplification. Several improved models have been introduced to overcome those drawbacks. One significant example of such new models is the Heston stochastic volatility model (Heston, 1993). In this model, volatility is indirectly modeled by a separate mean-reverting stochastic process, namely the Cox-Ingersoll-Ross (CIR) process. The CIR process captures the dynamics of the volatility process well. However, it is not easy to obtain option prices in the Heston model since the model has more complicated dynamics compared to the Black-Scholes model. In financial mathematics, one can use several methods to deal with these problems. In general, various stochastic processes are used to model the behavior of financial phenomena. One can then employ purely stochastic approaches by using the tools from stochastic calculus, or probabilistic approaches by using the tools from probability theory. On the other hand, it is also possible to use partial differential equations (the PDE approach). The correspondence between the stochastic problem and its related PDE representation is established with the help of the Feynman-Kac theorem. Also in their original paper, Black and Scholes transferred the stochastic representation of the problem into its corresponding PDE, the heat equation. After solving the heat equation, they transformed the solution back into the relevant option price. As a third type of method, one can employ numerical methods such as Monte Carlo methods. Monte Carlo methods are especially useful to compute the expected value of a random variable. Roughly speaking, instead of examining the probabilistic evolution of this random variable, we focus on its possible outcomes. One generates random numbers with the same distribution as the random variable and simulates possible outcomes using those random numbers. Then the expected value of the random variable is replaced by the arithmetic average of the possible outcomes obtained in the Monte Carlo simulation (a small example is sketched below). The idea of Monte Carlo is simple. However, it takes its strength from two essential theorems, namely Kolmogorov's strong law of large numbers, which ensures convergence of the estimates, and the central limit theorem, which describes the error distribution of the estimates. Electricity markets exhibit certain properties which we do not observe in other markets. Those properties are mainly due to the unique characteristics of the production and consumption of electricity. Most importantly, one cannot physically store electricity. This leads to several differences compared to other financial markets. For example, we observe spikes in electricity prices. Spikes refer to sudden upward or downward jumps which are followed by a fast reversion to the mean level. Therefore, electricity prices show extreme variability compared to other commodities or stocks.
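The small example mentioned above shows the two approaches side by side: the closed-form Black-Scholes price of a European call and a plain Monte Carlo estimate of the same discounted expected payoff under geometric Brownian motion. The market parameters are invented, and this is a generic textbook computation, not code from Sema's work:

```python
# European call option under geometric Brownian motion: closed-form Black-Scholes price
# versus a plain Monte Carlo estimate of the discounted expected payoff.
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0    # invented market parameters

# Closed-form Black-Scholes price
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Monte Carlo: simulate terminal stock prices S_T and average the discounted payoff
rng = np.random.default_rng(3)
n = 1_000_000
Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.maximum(ST - K, 0.0)
mc_price = np.exp(-r * T) * payoff.mean()
mc_error = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n)   # CLT-based standard error

print(bs_price, mc_price, mc_error)
```

The law of large numbers guarantees that mc_price approaches bs_price as n grows, and the central limit theorem is what justifies reporting mc_error as a standard error.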
For example, in stock markets we observe a moderate volatility level ranging between 1% and 1.5%; commodities like crude oil or natural gas have relatively high volatilities ranging between 1.5% and 4%; and electricity has volatilities of up to 50% (Weron, 2000). Moreover, electricity prices show strong seasonality, which is related to day-to-day and month-to-month variations in electricity consumption. In other words, electricity consumption varies depending on the day of the week and the month of the year. Another important property of electricity prices is that they follow a mean-reverting process. Thus, the Ornstein-Uhlenbeck (OU) process, which has a Gaussian distribution, is widely used to model electricity prices. In order to incorporate the spike behavior of electricity prices, a jump or Lévy component is added to the OU process. These models are known as generalized OU processes (Barndorff-Nielsen & Shephard, 2001; Benth, Kallsen & Meyer-Brandis, 2007). There exist several models to capture those properties of electricity prices: structural models which are based on the equilibrium of supply and demand (Barlow, 2002), Markov jump diffusion models which combine the OU process with pure jump diffusions (Geman & Roncoroni, 2006), regime-switching models which aim to distinguish the base and spike regimes of the electricity prices, and finally multi-factor models which have a deterministic component for seasonality, a mean-reverting process for the base signal and a jump or Lévy process for spikes (Meyer-Brandis & Tankov, 2008). The German electricity market is one of the largest in Europe. Germany's energy strategy follows the objective of phasing out nuclear power plants by 2021 and gradually introducing renewable energy resources. For electricity production, the share of renewable resources is planned to increase to 80% by 2050. The introduction of renewable resources also brings some challenges for electricity trading. For example, forecast errors regarding electricity production can cause high risk for market participants. However, the developed market structure of Germany is designed to reduce this risk as much as possible. There are two main electricity spot markets where market participants can trade electricity. The first one is the day-ahead market, in which trading takes place around noon on the day before delivery; in this market, trades are based on auctions. The second one is the intraday market, in which trading starts at 3pm on the day before delivery and continues up until 30 minutes before delivery. The intraday market allows continuous trading of electricity, which helps market participants to adjust their positions more precisely and to reduce forecast errors. References
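To make the generalized OU idea tangible, the following sketch (not from the episode; all parameters are invented) simulates a mean-reverting Ornstein-Uhlenbeck price process with an added compound-Poisson jump term via the Euler-Maruyama scheme, producing the kind of spiky, quickly reverting paths described above:

```python
import numpy as np

def simulate_spiky_prices(x0=30.0, mu=30.0, theta=5.0, sigma=2.0,
                          jump_rate=0.2, jump_scale=20.0,
                          T=30.0, steps_per_day=24, seed=1):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW + dJ,
    where J is a compound Poisson process with exponential jump sizes
    standing in for price spikes (toy model, arbitrary units, time in days)."""
    n_steps = int(T * steps_per_day)
    dt = 1.0 / steps_per_day
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        n_jumps = rng.poisson(jump_rate * dt)                 # spikes arrive ~ jump_rate per day
        dJ = rng.exponential(jump_scale, size=n_jumps).sum()  # total spike size in this step
        # mean reversion pulls the price back towards mu after each spike
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dW + dJ
    return x

prices = simulate_spiky_prices()
print(prices.max(), prices.mean())
```

A deterministic seasonality term and a regime-switching or multi-factor structure, as in the models cited above, would be added on top of such a base signal.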
Podcasts
| |||
22 Feb 2019 | Portrait of Science | 01:06:20 | |
Gudrun met Magdalena Gonciarz in Dresden. They sat down in a very quiet coffee shop in the Dreikönigskirche and talked about their experiences as scientists giving science an image. Magda started Portrait of Science in 2016 with two objectives: to show that science is a process with many contributors at all career levels, and to have a get-away from a demanding PhD project in which she could express her creativity and produce tangible results. The person who pointed Gudrun in Magda's direction is Lennart Hilbert, a former co-worker of Magda in Dresden who is now working at KIT on Computational Architectures in the Cell Nucleus (he will be a podcast guest very soon). On the Portrait of Science page one can find photographs of people from Dresden's Life Science campus. Apart from the photographs, one can also find their stories. How and why did they become scientists? What do they do, what are they passionate about? Magda invites us: "Forget the tubes and Erlenmeyer flasks. Science is only as good as the people who do it. So sit back, scroll down and get to know them looking through the lens of Magdalena Gonciarz. Have you ever wondered what kind of people scientists are? Would you like to know what are they working on? What drives and motivates them - spending days in the basement without the sun? Portrait of Science project aims at uncovering more about people who contribute to science at all levels - Research Group Leaders, Postdocs, PhD Students, Staff Scientists and Technicians. All of them are vital for progress of scientific research and all of them are passionate people with their own motivations." When she started the Portrait of Science project, Magda challenged herself to take more pictures. She wanted to show the real people behind science and their personalities. This was a creative task, quite different from her work as a scientist, and done with comparatively little time. On top of taking the pictures, interviewees were asked to fill out a questionnaire to accompany the story told by the photographs. Surprisingly, the stories told by her co-workers turned out to be quite inspiring: they showed passion and diverse motivations, and people mentioned their failures as well. There were stories about accidents and their crucial role in careers, about the coincidence of finding a fascinating book or the right mentor - sometimes as far back as early childhood. Sharing ups and downs, and the experience that there is a light at the end of the tunnel, was a story she needed and one worth sharing. Knowing how hard scientific work can be, and how many friends and colleagues struggled even more than she did, Magda strongly feels that it is useful to show that this is not a private and unique experience but probably a part of the life of every scientist - a struggle that can be overcome with time, effort and help. Magda comes from Poland. During her Master's studies, she had the opportunity to do a research placement at the University of Virginia. During that time she felt welcomed as part of a scientific community in which she wanted to stay. It was a natural decision to proceed with a PhD. She applied to the very prestigious Dresden International Graduate School for Biomedicine and Bioengineering and joined research on proteins and their modifications in the lab of Jörg Mansfeld. After finishing her project, she decided to leave academia.
Since 2018 she has been working for the learning and training agency CAST PHARMA and is involved in producing e-learning solutions for pharmaceutical companies. Magda also talked a bit about her PhD research. As we all know, genes code for proteins. However, one protein can exist in multiple different forms with multiple varying functions. A protein can be post-translationally modified, i.e. modified after it is created, in order to, for example, be relocated, have different interaction partners, or become activated or destroyed within minutes. Recently, modern methods such as mass spectrometry have made it possible to see the multitude of post-translationally modified forms of proteins and have enabled further research, using biochemistry or imaging techniques, into the functions of these modifications, e.g. at different stages of the cell's life. Gudrun and Magda also talked about the challenge of making a broader audience understand what a particular research topic is all about. It is hard to refer to things we cannot see. It is often easier for people with more translatable research to connect it to various diseases, e.g. cancer; it remains a challenge for those working on more basic issues such as developmental biology. What Magda took from her time in academia is much more than her results and her part in the basic research story. She feels that curiosity and quick learning are her superpowers. She is able to become familiar with any topic in a short amount of time. She can manage multiple parts of a project. She also learned resilience and how to deal with challenges and failures on a daily basis, which can prove helpful in all areas of life. Podcasts
| |||
11 Jul 2019 | Batteries | 00:56:57 | |
In June 2019 Gudrun talked with Serena Carelli. Serena is a member of the Research Training Group (RTG) Simet, which is based in Karlsruhe, Ulm and Offenburg. It started its work in 2017 and Gudrun is an associated postdoc therein. The aim of the graduate school is to work towards a better understanding of lithium-ion batteries. For that it covers all scales, namely from micro (particles) and meso (electrode pairs) to macro (cell), and involves scientists from chemistry, chemical engineering, material sciences, electrical engineering, physics and mathematics. The group covers the experimental side as well as modeling and computer simulations. Serena is one of the PhD students of the program. She is based in Offenburg in the group of Wolfgang Bessler (the deputy speaker of the RTG). Her research focuses on end-of-life prediction of a lithium-ion battery cell, among other things by studying mechanistic ageing models of the graphite electrode.
A time-upscaling methodology is developed that allows the simulation of large time spans (thousands of operating hours). The combined modeling and simulation framework is able to predict calendar and cyclic ageing up to the end of life of the battery cells. The results show qualitative agreement with ageing behavior known from the experimental literature. Serena has a Bachelor's in Chemistry and a Master's in Forensic Chemistry from the University of Torino. She worked in Spain, at the Politecnico di Torino and in Greece (where she was a Marie Curie fellow at the Foundation for Research and Technology - Hellas) before she decided to spend time in Australia and India. References
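The following toy sketch (my own illustration of the general idea of time upscaling, not Serena's actual model; all numbers are invented) shows the principle: a detailed single-cycle simulation provides a degradation rate, which is then extrapolated over a coarse macro step of many cycles so that thousands of operating hours become affordable to simulate:

```python
import numpy as np

def cycle_capacity_loss(capacity, sei_thickness):
    """Toy single-cycle model: capacity fade per cycle slows as a surface film grows.
    (Invented stand-in for a detailed electrochemical cycle simulation.)"""
    dL = 1e-3 / np.sqrt(1.0 + sei_thickness)   # film growth per cycle, arbitrary units
    dQ = 0.05 * dL                             # capacity lost in this cycle
    return dQ, dL

def simulate_ageing(q0=1.0, macro_step=100, n_macro=50):
    """Time upscaling: simulate one representative cycle in detail,
    then extrapolate its degradation rate over `macro_step` cycles per coarse step."""
    capacity, film = q0, 0.0
    history = [capacity]
    for _ in range(n_macro):
        dQ, dL = cycle_capacity_loss(capacity, film)   # detailed micro (single-cycle) step
        capacity -= macro_step * dQ                    # coarse macro step over many cycles
        film += macro_step * dL
        history.append(capacity)
    return np.array(history)

print(simulate_ageing()[[0, 10, 50]])
```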
Podcasts
| |||
17 Oct 2019 | Cancer Research | 00:24:06 | |
Gudrun talks with Changjing Zhuge. He is a guest in the group of Lennart Hilbert and works at the College of Applied Sciences and the Beijing Institute for Scientific and Engineering Computing (BISEC) at the Beijing University of Technology. He is a mathematician who is interested in systems biology. In some cases he studies delay differential equations or systems of ordinary differential equations to characterize processes and interactions in the context of cancer research. The inbuilt delays originate, e.g., from the modeling of hematopoietic stem cell populations. Hematopoietic stem cells give rise to other blood cells. Chemotherapy is frequently accompanied by unwanted side effects on blood cell production due to the character of the drugs used. Often the production of white blood cells is hindered, which is called neutropenia. In an effort to circumvent that, together with chemotherapy one treats the patient with granulocyte colony stimulating factor (G-CSF). To examine the effects of typical periodic chemotherapy in generating neutropenia, and the corresponding response of the system to G-CSF, Changjing and his colleagues studied relatively simple but physiologically realistic mathematical models for the hematopoietic stem cells. These models are also potentially useful for modeling other stem-cell-like biosystems such as cancers. The delay in the system is related to the platelet maturation time and the differentiation rate from hematopoietic stem cells into platelet cells. Changjing did his Bachelor's in Mathematics at the Beijing University of Technology (2008) and continued with a PhD program in Mathematics at the Zhou-Peiyuan Center for Applied Mathematics, Tsinghua University, China. He finished his PhD in 2014. During his time as a PhD student he also worked for one year in Michael C. Mackey's lab at the Centre for Applied Mathematics in Bioscience and Medicine of McGill University in Montreal (Canada). References
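To illustrate how a delay enters such models, here is a minimal sketch (a generic delayed negative-feedback model in the spirit of Mackey-type hematopoiesis models, not the specific model from the episode; all parameters are invented) integrated with an explicit Euler scheme and a history buffer:

```python
import numpy as np

def simulate_delayed_feedback(beta=2.0, theta=1.0, n=10, gamma=1.0,
                              tau=2.0, x0=0.5, T=100.0, dt=0.01):
    """Explicit Euler integration of a generic delayed-feedback model
    x'(t) = beta * theta^n / (theta^n + x(t - tau)^n) - gamma * x(t),
    where the delay tau mimics a maturation time (illustration only)."""
    steps = int(T / dt)
    lag = int(tau / dt)
    x = np.full(steps + 1, x0)          # constant history on [-tau, 0]
    for k in range(steps):
        x_lag = x[k - lag] if k >= lag else x0
        production = beta * theta**n / (theta**n + x_lag**n)
        x[k + 1] = x[k] + dt * (production - gamma * x[k])
    return x

trajectory = simulate_delayed_feedback()
print(trajectory[-5:])   # with a long enough delay the model can oscillate
```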
Podcasts
| |||
31 Oct 2019 | Peaked Waves | 00:36:19 | |
Gudrun talks to Anna Geyer. Anna is an assistant professor at TU Delft in the Mathematical Physics group at the Delft Institute of Applied Mathematics. She is interested in the behaviour of solutions to equations which model shallow water waves. The day before (04.07.2019) Anna gave a talk at the kick-off meeting for the second funding period of the CRC Wave Phenomena at the mathematics faculty in Karlsruhe, where she discussed instability of peaked periodic waves. Therefore, Gudrun asks her about the different models for waves, the meaning of stability and instability, and the mathematical tools used in her field. For shallow water flows the solitary waves are especially fascinating and interesting. Traveling waves are solutions of the form u(x,t) = f(x - ct), representing waves of permanent shape f that propagate at constant speed c. These waves are called solitary waves if they are localized disturbances, that is, if the wave profile f decays at infinity. If the solitary waves retain their shape and speed after interacting with other waves of the same type, we say that the solitary waves are solitons. One can ask whether a given model equation (sometimes depending on parameters in the equation or the size of the initial conditions) allows for solitary or periodic traveling waves, and secondly whether these waves are stable or unstable. Peaked periodic waves are an interesting phenomenon because at the wave crest (the peak) they are not smooth, a situation which might lead to wave breaking. For which equations are peaked waves solutions? And how stable are they? Anna answers these questions for the reduced Ostrovsky equation, which serves as a model for weakly nonlinear surface and internal waves in a rotating ocean. The reduced Ostrovsky equation is a modification of the Korteweg-de Vries equation, in which the usual linear dispersive term with a third-order derivative is replaced by a linear nonlocal integral term representing the effect of background rotation. Peaked periodic waves of this equation have been known to exist since the late 1970s. Anna presented recent results in which she answers the long-standing open question whether these solutions are stable. In particular, she proved linear instability of the peaked periodic waves using semigroup theory and energy estimates. Moreover, she showed that the peaked wave is unique and that the equation does not admit Hölder-continuous solutions, which implies that the reduced Ostrovsky equation does not admit cusps. Finally, it turns out that the peaked wave is also spectrally unstable. This is joint work with Dmitry Pelinovsky. For the stability analysis it is delicate to choose the right function spaces whose norms measure the behaviour of the solution. The Camassa-Holm equation, in contrast, allows for solutions with peaks which are stable with respect to certain perturbations and unstable with respect to others, and it can model breaking waves. Anna studied mathematics in Vienna, where Adrian Constantin attracted her to the topic of partial differential equations applied to water waves. She worked with him during her PhD, which she finished in 2013. Then she worked as a postdoc at the Universitat Autònoma de Barcelona and in Vienna before she accepted a tenure-track position in Delft in 2017. References
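For readers who want to see the model, here is the reduced Ostrovsky equation in one common normalization together with its traveling-wave reduction (a standard textbook form, not quoted from the episode):

```latex
% Reduced Ostrovsky equation (one common normalization):
\left( u_t + u\,u_x \right)_x = u .
% Substituting the traveling-wave ansatz u(x,t) = f(\xi), \xi = x - ct, gives the ODE
\big( (f(\xi) - c)\, f'(\xi) \big)' = f(\xi),
% whose periodic solutions include smooth waves and, at a limiting amplitude,
% the peaked waves whose (in)stability is discussed above.
```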
Related Podcasts
| |||
09 Jan 2020 | Linear Sampling | 00:47:40 | |
In the coming weeks, until February 20, 2020, Anna Hein, a student of science communication at KIT, intends to conduct a study on the Modellansatz podcast within her master's thesis. For this purpose, she would like to conduct some interviews with you, the listeners of the Modellansatz podcast, to find out who listens to the podcast and how and for what purpose it is used. The interviews will be anonymous and will take about 15 minutes each. To participate in the study, you can register with Anna Hein until 20.2.2020 at studie.modellansatz@web.de . We would be very pleased if many interested listeners would contact us. This is the first of three conversations recorded during the Conference on Mathematics of Wave Phenomena, 23-27 July 2018, in Karlsruhe. Gudrun talked to Fioralba Cakoni about the linear sampling method and scattering. The linear sampling method is a method to reconstruct the shape of an obstacle without a priori knowledge of either the physical properties or the number of disconnected components of the scatterer. The principal problem is to detect objects inside a body without seeing them with our eyes. So we send waves of a certain frequency range into the body and then measure the response on its surface. The waves can be absorbed, reflected and scattered inside the body. From this response we would like to detect whether there is something like a tumor inside the body and, if so, where; or, to be more precise, what the shape of the tumor is. Since the problem is nonlinear and ill-posed, this is a difficult question and needs several mathematical steps on the analytical as well as the numerical side. In 1996 Colton and Kirsch (reference below) proposed a new method for the obstacle reconstruction problem in inverse scattering which is today known as the linear sampling method. It is a method to solve the above stated problem, which scientists call an inverse scattering problem. The linear sampling method combines the responses at many frequencies but stays linear: the problem itself is not approximated, only the interpretation of the response is. The central idea is to invert a bounded operator which is constructed with the help of an integral over the boundary of the body. Fioralba got her Diploma (honors program) and her Master's in Mathematics at the University of Tirana. For her PhD she worked with George Dassios from the University of Patras but stayed at the University of Tirana. After that she worked with Wolfgang Wendland at the University of Stuttgart as an Alexander von Humboldt Research Fellow. During her second year in Stuttgart she got a position at the University of Delaware in Newark. Since 2015 she has been a professor at Rutgers University. She works at the campus in Piscataway near New Brunswick (New Jersey). References
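As a rough illustration of the inversion step, here is a hedged sketch of a linear sampling indicator (my own schematic, not Fioralba's code): it assumes one already has a matrix F of measured far-field data for many incident and observation directions at wavenumber k, solves a Tikhonov-regularized far-field equation for each sampling point z of a grid, and uses the reciprocal norm of the solution as an indicator that is large inside the scatterer and small outside. The function name, the constant in the point-source pattern and the regularization parameter are invented for illustration.

```python
import numpy as np

def linear_sampling_indicator(F, directions, grid_points, k, alpha=1e-3):
    """Linear sampling sketch: for each sampling point z solve the regularized
    far-field equation F g_z = phi_z and use 1/||g_z|| as indicator.
    F: (n_dirs x n_dirs) matrix of far-field data u_inf(x_hat, d);
    directions: (n_dirs x 2) array of unit vectors; grid_points: (m x 2) array."""
    # Tikhonov-regularized pseudo-inverse of the ill-conditioned far-field operator
    U, s, Vh = np.linalg.svd(F)
    filt = s / (s**2 + alpha)                    # Tikhonov filter factors
    indicator = np.empty(len(grid_points))
    for j, z in enumerate(grid_points):
        # far-field pattern of a point source at z (2D Helmholtz, up to a constant)
        phi_z = np.exp(-1j * k * directions @ z)
        g = Vh.conj().T @ (filt * (U.conj().T @ phi_z))
        indicator[j] = 1.0 / np.linalg.norm(g)   # large inside the scatterer, small outside
    return indicator
```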
Podcasts
| |||
16 Jan 2020 | Pattern Formation | 00:30:07 | |
In the coming weeks, until February 20, 2020, Anna Hein, a student of science communication at KIT, intends to conduct a study on the Modellansatz podcast within her master's thesis. For this purpose, she would like to conduct some interviews with you, the listeners of the Modellansatz podcast, to find out who listens to the podcast and how and for what purpose it is used. The interviews will be anonymous and will take about 15 minutes each. To participate in the study, you can register with Anna Hein until 20.2.2020 at studie.modellansatz@web.de . We would be very pleased if many interested listeners would contact us. This is the second of three conversations recorded during the Conference on Mathematics of Wave Phenomena, 23-27 July 2018, in Karlsruhe. Gudrun talks with Mariana about pattern formation. An everyday application is the following: if one puts a pan with a layer of oil on the hot stove (in order to heat it up), one observes different flow patterns over time. In the beginning it is easy to see that the oil is at rest and not moving at all. But if one waits long enough, the still layer breaks up into small cells, which makes it more difficult to see the bottom clearly. This is due to the fact that the oil starts to move in circular patterns in these cells. For the mathematical problem this means that the system has more than one solution, and depending on physical parameters one solution is stable (and observed in real life) while the others are unstable. In our example the temperature difference between bottom and top of the oil layer gets bigger as the pan is heating up. For a while the viscosity and the weight of the oil keep it still. But if the temperature difference is too big, it becomes easier to redistribute the different temperature levels with the help of convection of the oil. The question for engineers as well as mathematicians is to find the point where these convection cells evolve, in order to keep processes on either side of this switch. In theory (not for real oil, because it would start to burn), for even bigger temperature differences the original cells would break up into even smaller cells to make the exchange of energy faster. In 1903 Bénard did experiments similar to the one described in the conversation which fascinated a lot of his colleagues at the time. The equations were derived a bit later, and already in 1916 Lord Rayleigh found the 'switch', which nowadays is called the critical Rayleigh number. Its size depends on the thickness of the configuration, the viscosity of the fluid, the gravitational force and the temperature difference. Only in the 1980s did it become clear that Bénard's experiments and Rayleigh's analysis did not really cover the same problem, since in the experiment the upper boundary is a free boundary to the surrounding air, while Rayleigh considered fixed boundaries. And this changes the size of the critical Rayleigh number.
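For reference, the standard definition of the Rayleigh number collects exactly the quantities mentioned above (this is the textbook formula, not quoted from the episode):

```latex
% Rayleigh number for a fluid layer of thickness d heated from below:
\mathrm{Ra} \;=\; \frac{g\,\alpha\,\Delta T\, d^{3}}{\nu\,\kappa},
% g: gravitational acceleration, \alpha: thermal expansion coefficient,
% \Delta T: temperature difference across the layer,
% \nu: kinematic viscosity, \kappa: thermal diffusivity.
% Convection sets in above a critical value that depends on the boundary conditions,
% e.g. Ra_c \approx 1708 for two rigid boundaries and Ra_c = 27\pi^4/4 \approx 657.5
% for two free boundaries; this is why the free surface in Benard's experiment and
% Rayleigh's fixed boundaries lead to different thresholds.
```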
For anyone doing experiments it is also a familiar observation that small perturbations of the ideal shape of the container change the convection patterns. Mariana studies the dynamics of nonlinear waves and patterns. This means she is interested in understanding processes which change over time. Her main questions concern the existence of such patterns and waves and their stability.
She treats her problems with the theory of dynamical systems and bifurcations. The simplest tools go back to Poincaré and the study of ordinary differential equations. One can consider a partial differential equation as an evolution in an infinite-dimensional phase space. Here, in the 1980s, Klaus Kirchgässner had a few crucial ideas on how to construct special solutions to nonlinear partial differential equations. It is possible to investigate water-wave problems, which are dispersive equations, as well as flow problems, which are dissipative. Together with her colleagues in Besançon she is also very keen to match experiments for optical waves with her mathematical analysis. There Mariana is working with a variant of the nonlinear Schrödinger equation called the Lugiato-Lefever equation. It has many different solutions, e.g. periodic solutions and solitons. Since 2002 Mariana has been a professor in Besançon (University of Franche-Comté, France). Before that she studied and worked in a lot of different places, namely in Bordeaux, Stuttgart, Bucharest, Nice, and Timisoara. References
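For orientation, the Lugiato-Lefever equation is often written in the following form (one common normalization; conventions for signs and scaling vary, so treat this as a sketch rather than the exact form used in Mariana's work):

```latex
% Lugiato-Lefever equation (a damped, driven, detuned nonlinear Schroedinger equation):
\partial_t \psi \;=\; -(1 + \mathrm{i}\theta)\,\psi \;+\; \mathrm{i}\, d\,\partial_x^2 \psi \;+\; \mathrm{i}\,|\psi|^2 \psi \;+\; F,
% \theta: detuning, d = \pm 1: sign of the dispersion, F: external pump.
% The damping term -\psi and the constant forcing F are what distinguish it from the
% conservative nonlinear Schroedinger equation and allow stable periodic patterns
% and dissipative solitons.
```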
Podcasts
| |||
06 Feb 2020 | Waveguides | 00:31:31 | |
This is the third of three conversations recorded during the Conference on Mathematics of Wave Phenomena, 23-27 July 2018, in Karlsruhe. Spectral theory is essential for the study of wave phenomena. For instance, everybody has experimented with resonating frequencies in a bathtub filled with water. These resonant eigenfrequencies are eigenvalues of some operator which models the flow behaviour of the water. Eigenvalue problems are better known for matrices. For wave problems, we have to study eigenvalue problems in infinite dimensions. As with the eigenvalues of a finite-dimensional matrix, spectral theory gives access to intrinsic properties of the operator and the corresponding wave phenomena. Anne-Sophie is interested in waveguides. For example, optical fibres can guide optical waves, while wind instruments are guides for acoustic waves. Electromagnetic waveguides also have important applications. Anne-Sophie uses complex analysis for that. The idea is to complexify the (originally real) coordinates by analytic extension. It is a classic idea for resonances, which she adapts to the problem of transmission. Finally, Anne-Sophie is able to compute numerically a complex spectrum of frequencies related to the quality of the transmission in a perturbed waveguide. The imaginary part of the complex quantity gives an indication of the quality of the transmission in the waveguide: the closer to the real axis, the better the transmission. References
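Schematically, the classical complex-scaling (analytic dilation) idea that is adapted here looks as follows (a generic textbook sketch, not the precise construction from the episode):

```latex
% Complex scaling / analytic dilation for resonances (schematic):
% stretch the unbounded coordinate beyond some radius R into the complex plane,
x \;\longmapsto\; x_\theta =
\begin{cases}
x, & |x| \le R,\\
R + (x - R)\,e^{\mathrm{i}\theta}, & x > R,
\end{cases}
% (and similarly for x < -R). The transformed operator is no longer self-adjoint:
% its essential spectrum is rotated into the lower half-plane, and resonances
% appear as discrete complex eigenvalues whose imaginary parts measure how
% quickly energy leaks out of the structure.
```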
Podcasts
| |||
27 Feb 2020 | Photoacoustic Tomography | 00:45:28 | |
In March 2018 Gudrun had a day available in London when travelling back from the FEniCS workshop in Oxford. She contacted a few people working in mathematics at University College London (UCL) and asked for their time in order to talk about their research. In the end she brought back three episodes for the podcast. This is the second of these conversations. Gudrun talks to Marta Betcke. Marta is an associate professor at the UCL Department of Computer Science and a member of the Centre for Inverse Problems and the Centre for Medical Image Computing. She has been in London since 2009. Before that she was a postdoc in the Department of Mathematics at the University of Manchester working on novel X-ray CT scanners for airport baggage screening. This was her entrance into photoacoustic tomography (PAT), the topic Gudrun and Marta talk about at length in this episode. PAT is a way to see inside objects without destroying them. It makes images of body interiors. There the contrast is due to optical absorption, while the information is carried to the surface of the tissue by ultrasound. This is like measuring the sound of thunder after lightning. Measurements together with mathematics provide ideas about the inside. The technique combines the best of light and sound: the optical part gives good contrast, though with low resolution, while ultrasound has good resolution but poor contrast (since not enough absorption is going on). In PAT, the measurements are recorded at the surface of the tissue by an array of ultrasound sensors. Each of them only detects the field over a small volume of space, and the measurement continues only for a finite time. In order to form a PAT image, it is necessary to solve an inverse initial value problem by inferring an initial acoustic pressure distribution from measured acoustic time series. In many practical imaging scenarios it is not possible to obtain the full data, or the data may be sub-sampled for faster data acquisition. Then numerical models of wave propagation can be used within a variational image reconstruction framework to find a regularized least-squares solution of an optimization problem. Assuming homogeneous acoustic properties and the absence of acoustic absorption, the measured time series can be related to the initial pressure distribution via the spherical mean Radon transform. Integral geometry can be used to derive direct, explicit inversion formulae for certain sensor geometries, such as spherical arrays. At the moment PAT is predominantly used in a preclinical setting, to image tumours and vasculature in small animals. Breast imaging, endoscopic fetus imaging as well as monitoring of perfusion and drug metabolism are subjects of intensive ongoing research. The forward problem is related to the absorption of the light and is modeled by the wave equation, assuming instantaneous absorption and the resulting thermal expansion. In our case, an optical ultrasound sensor records acoustic waves over time, i.e. it provides time series with the desired spatial and temporal resolution. Given complete data, one can mathematically reverse the time direction and recover the original object. Often it is not possible to collect complete data, due to e.g. single-sided access to the object as in breast imaging, or underlying dynamics happening at a faster rate than one can collect data. In such situations one can formulate the problem in a variational framework, using regularisation to compensate for the missing data.
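The variational formulation mentioned above can be sketched as a regularized least-squares loop (a generic illustration with hypothetical `forward`/`adjoint` stand-ins for an acoustic wave solver, not Marta's actual code; the toy usage replaces the wave operator by a random matrix):

```python
import numpy as np

def reconstruct_pat(y, forward, adjoint, lam=1e-2, step=1e-1, n_iter=200):
    """Gradient descent on the Tikhonov-regularized least-squares functional
        J(p) = 0.5 * ||forward(p) - y||^2 + 0.5 * lam * ||p||^2,
    where `forward` maps an initial pressure p to simulated sensor time series
    and `adjoint` is its adjoint. The step size must be small enough for the
    operator norm of `forward`."""
    p = np.zeros_like(adjoint(y))
    for _ in range(n_iter):
        residual = forward(p) - y
        p -= step * (adjoint(residual) + lam * p)
    return p

# Toy usage with a random linear operator standing in for acoustic propagation:
rng = np.random.default_rng(0)
A = rng.normal(size=(400, 100)) / 20.0
p_true = rng.random(100)
y = A @ p_true
p_rec = reconstruct_pat(y, lambda p: A @ p, lambda r: A.T @ r)
print(np.linalg.norm(p_rec - p_true) / np.linalg.norm(p_true))  # small reconstruction error
```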
In particular in the sub-sampling scenario, one would like to use ray-tracing methods, as they scale linearly with the number of sensors. Marta's group is developing flexible acoustic solvers based on a ray-tracing discretisation of Green's formulas. They cannot handle reflections, but neglecting these is approximately correct, as the sound-speed variation in soft tissue is subtle. These solvers can be deployed alongside stochastic iterative solvers for an efficient solution of the variational formulation. Marta went to school in Poland. She finished her education there in a highly selective school and loved maths thanks to a great maths teacher (who was also her aunt). She decided to study computer science, since she saw better chances on the job market there. When she moved to Germany her degree was not accepted, so she had to enrol again, this time in Computer Science and Engineering at the Hamburg University of Technology. After that she worked on her PhD in the small group of Heinrich Voss there. She had good computing skills and fit in very well. When she finished, she was married and had to solve a two-body problem, which brought the couple to Manchester, where a double position was offered. References
Podcasts
| |||
27 Jan 2022 | Allyship | 00:53:23 | |
One of the reasons we started this podcast in 2013 was to provide a more realistic picture of mathematics and of the way mathematicians work. On Nov. 19, 2021 Gudrun talked to Stephanie Anne Salomone, who is Professor and Chair in Mathematics at the University of Portland. She is also Director of the STEM Education and Outreach Center and Faculty Athletic Representative at UP. She is an Associate Director of Project NExT, a program of the Mathematical Association of America that provides networking and professional development opportunities to mathematics faculty who are new to the profession. She is a wife and mother of three boys, Milo (13), Jude (10), and Theodore (8). The important question is: how is it possible to educate men, and especially powerful white men, to become allies? The idea of this first workshop designed by Stephanie and Stan was to invite men already interested in learning more and to build a basis with the documentary Picture a Scientist (2020). Synopsis: "PICTURE A SCIENTIST chronicles the groundswell of researchers who are writing a new chapter for women scientists. Biologist Nancy Hopkins, chemist Raychelle Burks, and geologist Jane Willenbring lead viewers on a journey deep into their own experiences in the sciences, ranging from brutal harassment to years of subtle slights. Along the way, from cramped laboratories to spectacular field stations, we encounter scientific luminaries - including social scientists, neuroscientists, and psychologists - who provide new perspectives on how to make science itself more diverse, equitable, and open to all." (from the webpage) In this film there are no mathematicians, but the situations in the sciences and in mathematics are very similar, so the film lends itself to showing the situation. In the podcast conversation Gudrun and Stephanie talk about why and in what way the documentary spoke to them, and about the huge and small obstacles in their own work as women mathematicians which do not make them feel welcome in a field they feel passionate about. The film shows what happens to women in science. It also shows men in different roles. Obviously there are the bullies. Then there are the bystanders. There are universities which allow women to be hired and give them the smallest space available. But there are also men who consider themselves friends of their female colleagues and who cannot believe that they did not notice the behaviour of other men (and their own behaviour in not taking a side). Seeing this play out over the course of the film is not a comfortable watch, and perhaps because of this discomfort, we hope to build empathy. On the other hand, there is the story of women scientists who noticed that they were not treated as well as their male colleagues and who found each other to fight for office space and for the recognition of their work. They succeeded a generation ago. The general idea of the workshop was to start with the documentary and to talk about different people and their roles in the film, in order to take them as prototypical for roles which we happen to observe in our lives and which we might happen to play. This discussion in groups was moderated and guided in order to make it a safe space for everyone. Stephanie spoke about how we have to let men grow into their responsibility to speak out against a hostile atmosphere at university, created mostly by men. In the workshop it was possible to first develop and then train possible responses for situations which call for men to step in as allies.
The next iteration of the workshop Picture a Mathematician will be on May 11. Literature and further information
Science 368, Issue 6497, 1317-1319 (2020). https://doi.org/10.1126/science.aba7377
| |||
01 Jun 2022 | Spectral Geometry | 00:40:36 | |
Gudrun talks with Polyxeni Spilioti at Aarhus University about spectral geometry. Before working in Aarhus, Polyxeni was a postdoctoral researcher in the group of Anton Deitmar at the University of Tübingen. She received her PhD from the University of Bonn under the supervision of Werner Mueller, after earning her Master's at the National Technical University of Athens (Faculty of Applied Mathematics and Physics). As a postdoc she was also a guest at the MPI for Mathematics in Bonn, the Institut des Hautes Etudes Scientifiques in Paris and the Oberwolfach Research Institute for Mathematics. In her research she works on questions like: how can one obtain information about the geometry of a manifold, such as the volume, the curvature, or the lengths of the closed geodesics, provided that one can study the spectrum of certain differential operators? Harmonic analysis on locally symmetric spaces provides powerful machinery for studying various invariants, such as the analytic torsion, as well as the dynamical zeta functions of Ruelle and Selberg. References and further information
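A classical instance of how the spectrum encodes geometry (a standard textbook fact, added here for illustration and not specific to Polyxeni's results) is Weyl's law for the Laplace operator on a compact n-dimensional Riemannian manifold M:

```latex
% Weyl's law: the eigenvalue counting function of the Laplacian on a compact
% n-dimensional Riemannian manifold M reveals the volume of M,
N(\lambda) \;=\; \#\{\, j : \lambda_j \le \lambda \,\}
\;\sim\; \frac{\omega_n \,\mathrm{vol}(M)}{(2\pi)^n}\,\lambda^{n/2},
\qquad \lambda \to \infty,
% where \omega_n is the volume of the unit ball in R^n.
% Already the asymptotics of the spectrum determine a geometric quantity;
% finer invariants such as the analytic torsion and the zeta functions
% mentioned above probe the geometry further.
```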
Podcasts
|