Colloquia
Department of Mathematics and Statistics
Texas Tech University
Modeling across scales is a fundamental problem in Applied Mathematics: many systems manifest radically different behavior at different spatial/temporal scales. We discuss a dynamical version of Renormalization Group methodology for bottom-up scaling in problems of Stochastic Nonlinear Dynamics. Examples include fully developed turbulence, MHD turbulence, and sandpile models in Self-organized Criticality. The inverse, top-down scaling problem (recovering scale-specific metrics that represent the macroscopic behavior well) is known as Machine Learning. We discuss data/structure scaffolding with stochastic (diffusion) processes, including scale-sensitive (anisotropic & time-fractional) diffusions, and give a brief morphological analysis of the city of Lubbock. Finally, we plan to mention the threshold stochastic models important for modeling survival, evolution, and communication, as well as to touch on the joint project with the AVX Company on “Multi-Source Data Fusion for Aviation Sustainment”. You are cordially welcome!

Many important engineering and scientific systems require the solution of time-dependent PDE systems. These systems often have specific stability needs, such as requiring A-stable or L-stable methods, in order to compute realistic solutions. For example, the incompressible Navier-Stokes equations for fluid flow are differential-algebraic equations (DAEs) and can be viewed as infinitely stiff; L-stable time-stepping methods can be beneficial in this case.
Certain classes of implicit Runge-Kutta (IRK) methods, such as the Radau I and Radau II methods, provide L-stability. However, one price of using IRK methods is the need to solve large linear systems at each time step. For example, if our PDE has been linearized and discretized with $ N $ degrees of freedom, using an $ s $-stage IRK method leads to an $ sN \times sN $ linear system that must be solved at each time step. We investigate preconditioners for such systems.
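To make the structure of that linear system concrete, here is a minimal sketch (the 1D heat-equation discretization, the choice of the 2-stage Radau IIA tableau, and all variable names are illustrative assumptions, not taken from the talk). For the linearized problem $u' = Lu$, the stage values satisfy $(I_{sN} - \Delta t\, A \otimes L)\,k = \mathbf{1}_s \otimes L u_n$, the $sN \times sN$ system mentioned above:

```python
import numpy as np

# Illustrative setup: 1D heat equation u_t = u_xx on (0,1), Dirichlet BCs,
# second-order finite differences with N interior points.
N = 3
h = 1.0 / (N + 1)
L = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

# Butcher tableau of the 2-stage Radau IIA method (order 3, L-stable).
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
b = np.array([3/4, 1/4])
s = A.shape[0]

dt = 0.005
u0 = np.sin(np.pi * h * np.arange(1, N + 1))   # samples of sin(pi x)

# Stage system for the linear problem u' = L u:
#   (I_{sN} - dt * kron(A, L)) k = kron(1_s, L u0),
# an sN x sN system -- the kind of system one preconditions.
M = np.eye(s * N) - dt * np.kron(A, L)
rhs = np.kron(np.ones(s), L @ u0)
k = np.linalg.solve(M, rhs).reshape(s, N)

u1 = u0 + dt * (b @ k)                          # one Radau IIA step

# Reference: exact matrix exponential via eigendecomposition (L is symmetric).
w, V = np.linalg.eigh(L)
u_exact = V @ (np.exp(dt * w) * (V.T @ u0))
print(np.max(np.abs(u1 - u_exact)))             # small: O(dt^4) per step
```

Solving `M` directly becomes expensive as $N$ and $s$ grow; preconditioners of the kind the talk investigates target systems with exactly this Kronecker structure.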
Exponential time differencing (ETD) has proven very effective for solving stiff evolution problems over the past decades, owing to rapid advances in matrix-exponential algorithms and computing capacity. Direct parallelization of ETD methods is rarely efficient because of the required data communication, so the localized exponential time differencing approach was recently introduced for extreme-scale phase-field simulations of coarsening dynamics, where it displayed excellent parallel scalability on modern supercomputers. The main idea is to use domain decomposition techniques to reduce the size of the problem, so that one instead solves a group of smaller subdomain problems simultaneously using locally computed products of matrix exponentials and vectors. With the diffusion equation as the model problem, we will develop and analyze some overlapping and nonoverlapping localized ETD schemes and their solution algorithms. Numerical experiments are also carried out to confirm the theoretical results. This work serves as a first step toward building a solid mathematical foundation for localized ETD methods.

Matthias Heinkenschloss
Department of Computational and Applied Mathematics
Rice University
heinken@rice.edu
“Risk-averse PDE-constrained optimization”
Optimal control and optimal design problems governed by partial differential equations (PDEs) arise in many engineering and science applications. In these applications one wants to maximize the performance of the system subject to constraints. When problem data, such as material parameters, are not known exactly but are modeled as random fields, the system performance is a random variable. So-called risk measures are applied to this random variable to obtain the objective function for PDE-constrained optimization under uncertainty. Instead of only maximizing expected performance, risk-averse optimization also penalizes the deviation of actual performance below expected performance. The resulting optimization problems are difficult to solve because a single objective-function evaluation requires solving the governing PDE for many parameter samples, risk-averse optimization requires sampling in the tail of the distribution, and many risk measures introduce non-smoothness into the optimization.
This talk demonstrates the impact of risk-averse optimization formulations on the solution and illustrates the difficulties that arise in solving risk-averse optimization problems. New sampling schemes are introduced that exploit the structure of risk measures and use reduced order models to identify the small regions in parameter space which are important for the optimization. Modifications of Newton's method are introduced to
address difficulties arising from the non-smoothness. It is shown that these improvements substantially reduce the cost of solving risk-averse optimization problems.
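As one concrete illustration of a sample-based risk measure, here is a minimal sketch of the conditional value-at-risk (CVaR), a standard choice in this literature; the abstract does not commit to a specific risk measure, so this example and its names are assumptions:

```python
import numpy as np

def cvar(losses, alpha):
    """Sample-based CVaR at level alpha: the average loss over the
    worst (1 - alpha) fraction of the samples.  Illustrative sketch."""
    losses = np.sort(np.asarray(losses, dtype=float))
    m = len(losses)
    tail = max(1, int(np.ceil((1 - alpha) * m)))  # size of the worst tail
    return losses[-tail:].mean()

samples = np.arange(1, 101)     # losses 1, 2, ..., 100
print(cvar(samples, 0.90))      # mean of the 10 worst losses: 95.5
```

This tail average is a discrete analogue of the Rockafellar-Uryasev formulation $\mathrm{CVaR}_\alpha(X) = \min_t \{\, t + \mathbb{E}[(X - t)^+]/(1-\alpha) \,\}$; the kink in $(\cdot)^+$ is a typical source of the non-smoothness mentioned above, and the dependence on only the worst samples is why sampling in the tail of the distribution is required.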