Applied Mathematics and Machine Learning
Department of Mathematics and Statistics
Texas Tech University
In this talk we will present a technique for deriving the degenerate equation for compressible fluids that does not use Darcy's postulate. The method is based on Einstein's integral conservation law for a scalar parameter characterizing the number of molecules per unit volume. The method is then used to derive a degenerate parabolic equation in non-divergent form for ideal gases in porous media.
In the next step we utilize Landis's ideas to prove that if the solution of the Cauchy problem has compact support at a reference time, then it has compact support at every later time.
The proposed method is inspired by a much more general result obtained by Tedeev and Vespri.
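To indicate the type of equation at stake (a hedged illustration, not part of the abstract): for an isothermal ideal gas, the classical Darcy-based derivation leads to a porous medium equation, and it is the degeneracy of such equations at vanishing density that makes finite speed of propagation, and hence preservation of compact support, possible.

```latex
% Model degenerate parabolic problem (shown in divergence form; the talk
% derives a non-divergent analogue without Darcy's postulate). For an
% isothermal ideal gas the classical derivation gives the exponent m = 2.
\[
  \partial_t u \;=\; \Delta\!\left(u^{m}\right), \qquad m > 1, \qquad u \ge 0.
\]
% The diffusivity m*u^{m-1} vanishes as u -> 0, so disturbances propagate
% with finite speed: compactly supported initial data yield compactly
% supported solutions at all later times.
```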
The simulation of high-dimensional problems with manageable computational resources represents a long-standing challenge. In a series of our recent works, a class of sparse grid discontinuous Galerkin (DG) methods has been formulated for solving various types of partial differential equations in high dimensions. By making use of multiwavelet tensor-product bases on sparse grids in conjunction with the standard DG weak formulation, this method significantly reduces the computation and storage cost compared with its full grid DG counterpart, while sacrificing little accuracy for sufficiently smooth solutions. In this work, we consider the high-dimensional Helmholtz equation with variable coefficients and demonstrate that for such a problem the efficiency of the sparse grid DG method can be further enhanced by exploiting a semi-orthogonality property of the multiwavelet bases, motivated by the work of C. Pflaum. A detailed convergence analysis shows that the modified sparse grid DG method attains the same order of accuracy, while the resulting stiffness matrix is much sparser than that produced by the original method.
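To make the cost reduction concrete, the following is a minimal sketch (an illustration, not code from the talk) that counts degrees of freedom for the full tensor-product grid versus the standard sparse grid index set |l|_1 <= N; the hierarchical block sizes assume a multiwavelet basis of degree k, and all names are hypothetical.

```python
from itertools import product

def dof_counts(N, d, k=1):
    """Full-grid vs. sparse-grid DG degrees of freedom.

    N: finest mesh level, d: spatial dimension,
    k: polynomial degree of the multiwavelet basis (assumed).
    A level-l space in one direction contributes (k+1)*2^{max(l-1,0)}
    hierarchical basis functions."""
    def level_dof(l):
        return (k + 1) * (1 if l == 0 else 2 ** (l - 1))

    full = sparse = 0
    for levels in product(range(N + 1), repeat=d):
        block = 1
        for l in levels:
            block *= level_dof(l)
        full += block                # full grid keeps every hierarchical block
        if sum(levels) <= N:         # sparse grid keeps only |l|_1 <= N
            sparse += block
    return full, sparse

full, sparse = dof_counts(N=6, d=4)
print(f"full: {full}, sparse: {sparse}, reduction: {full / sparse:.0f}x")
```

Already for d = 4 and N = 6 the reduction is several orders of magnitude, which is the saving the sparse grid construction trades against a mild loss of accuracy for smooth solutions.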
When modeling the real world, there are three immediate reasons to add noise so that a deterministic equation becomes stochastic. First, an external force (e.g., wind, when modeling waves) is naturally random. Second, adding noise makes the statements about the solution probabilistic, and thereby informally excuses the model for not being precise or complete. Third, classical deterministic PDEs apply only to macroscopic scales, and stochastic PDEs model microscopic scales much better. With such motivations, a great deal of literature has been devoted to PDEs with noise that is white in time. However, for many decades physicists and engineers have actually favored forcing by a noise that is white in both space and time. Their applications lacked rigorous grounding because, within the classical mathematical theory, solutions to such stochastic PDEs forced by space-time white noise, the so-called singular PDEs, were ill-defined. It is only due to the breakthrough results of the last five years that we now have theorems on the solutions of such singular PDEs that can be rigorously proved.

This is a continuation from the previous week on the topic of singular PDEs.

In this talk, we will illustrate an application of mixed models to infectious disease modeling, namely HIV studies.

Exponential time differencing (ETD) has proven to be very effective for solving stiff evolution problems over the past decades, thanks to the rapid development of matrix exponential algorithms and computing capacities. While direct parallelization of ETD methods rarely achieves good efficiency because of the required data communication, the localized exponential time differencing approach was recently introduced for extreme-scale phase field simulations of coarsening dynamics and displayed excellent parallel scalability on modern supercomputers. The main idea is to use domain decomposition techniques to reduce the size of the problem, so that one instead solves a group of smaller subdomain problems simultaneously using locally computed products of matrix exponentials and vectors. With the diffusion equation as the model problem, we will develop and analyze some overlapping and nonoverlapping localized ETD schemes and their solution algorithms. Numerical experiments are also carried out to confirm the theoretical results. This work serves as a first step toward building a solid mathematical foundation for localized ETD methods.
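The following is a minimal sketch of the underlying idea for the 1-D diffusion equation (an illustration under stated assumptions, not one of the talk's analyzed schemes): the global exact ETD step is replaced by small subdomain ETD steps that use a frozen interface value, so each subdomain only needs the exponential of a much smaller matrix.

```python
import numpy as np
from scipy.linalg import expm

def lap(n, h):
    """Dirichlet Laplacian on n interior points with spacing h."""
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

n, dt = 127, 1e-3
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
u0 = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)

# Global ETD step for u_t = u_xx: exact update u(t+dt) = expm(dt*A) u(t).
u_global = expm(dt * lap(n, h)) @ u0

# Localized ETD, nonoverlapping two-subdomain version (illustrative):
# freeze the interface value over the step, so the left subdomain sees it
# as a constant Dirichlet source, and advance with a much smaller exponential.
m = n // 2
g = u0[m]                          # frozen interface value (an assumption;
                                   # real schemes update it more carefully)
f = np.zeros(m); f[-1] = g / h**2  # interface enters the last row as a source
A = lap(m, h)
E = expm(dt * A)
# ETD1 with constant source: u <- e^{dt A} u + A^{-1} (e^{dt A} - I) f
u_left = E @ u0[:m] + np.linalg.solve(A, (E - np.eye(m)) @ f)

print("max deviation from the global step:",
      np.max(np.abs(u_left - u_global[:m])))
```

The deviation concentrates near the interface and shrinks with dt, which is why the overlap width, the interface treatment, and the iteration between subdomains are exactly the points such an analysis must settle.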
Optimal control and optimal design problems governed by partial differential equations (PDEs) arise in many engineering and science applications. In these applications one wants to maximize the performance of the system subject to constraints. When problem data, such as material parameters, are not known exactly but are modeled as random fields, the system performance is a random variable. So-called risk measures are applied to this random variable to obtain the objective function for PDE-constrained optimization under uncertainty. Instead of only maximizing expected performance, risk-averse optimization also considers the deviation of actual performance below expected performance. The resulting optimization problems are difficult to solve: a single objective function evaluation requires sampling of the governing PDE at many parameters, risk-averse optimization requires sampling in the tail of the distribution, and many risk measures introduce non-smoothness into the optimization.

This talk demonstrates the impact of risk-averse optimization formulations on the solution and illustrates the difficulties that arise in solving risk-averse optimization problems. New sampling schemes are introduced that exploit the structure of risk measures and use reduced-order models to identify the small regions in parameter space that are important for the optimization. Modifications of Newton's method are introduced to address the difficulties arising from the non-smoothness. It is shown that these improvements substantially reduce the cost of solving risk-averse optimization problems.
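As a hedged illustration of one widely used risk measure (not necessarily the talk's choice), the conditional value-at-risk CVaR_beta admits the Rockafellar-Uryasev sample-average form below; its plus function is one source of the non-smoothness mentioned above, and the lognormal "losses" are an invented stand-in for samples of the PDE-dependent performance.

```python
import numpy as np

def cvar(losses, beta):
    """Sample-average CVaR_beta via the Rockafellar-Uryasev formula
       CVaR_beta(L) = min_t { t + E[(L - t)_+] / (1 - beta) }.
    The plus function (.)_+ is nondifferentiable, which is the kind of
    non-smoothness risk-averse optimization must contend with."""
    losses = np.asarray(losses)
    t_star = np.quantile(losses, beta)   # the minimizer is the beta-quantile
    return t_star + np.mean(np.maximum(losses - t_star, 0.0)) / (1.0 - beta)

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # invented stand-in
print("mean loss :", losses.mean())
print("CVaR_0.95 :", cvar(losses, beta=0.95))  # average of the worst 5% tail
```

Note that only the worst 5% of samples enter the tail average: this is why risk-averse optimization requires sampling in the tail of the distribution, and why sampling schemes that target the important small regions of parameter space pay off.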
A new numerical approach based on the minimization of the local truncation error is suggested for the solution of partial differential equations; see [1]. Similar to the finite difference method, the form and the width of the stencil equations are assumed in advance. The discrete system of equations includes regular uniform stencils for internal points and non-uniform stencils for points close to the boundary. The unknown coefficients of the discrete system are calculated by minimizing the order of the local truncation error. The main advantages of the new approach are high accuracy and the simplicity of forming a discrete (or semi-discrete) system for irregular domains. For the regular uniform stencils, the stencil coefficients can be found analytically. For non-uniform cut stencils, the stencil coefficients are computed numerically by solving a small system of linear algebraic equations (20-100 equations). In contrast to finite elements, there is no need to compute elemental mass and stiffness matrices by integration, which is time-consuming for high-order elements. The mesh consists of the grid points of a uniform Cartesian mesh together with the points where the boundary of a complex domain intersects the horizontal, vertical, and diagonal lines of that mesh; i.e., in contrast to finite element meshes, a trivial mesh is used with the new approach. By changing the width of the stencil equations, different high-order numerical techniques can be developed. Currently the new technique is applied to the solution of the wave, heat, Helmholtz, and Laplace equations. Theoretical and numerical results show that, for a stencil width equivalent to that of linear quadrilateral finite elements, the new technique yields fourth-order accuracy on irregular domains for the considered partial differential equations, and it is much more accurate than linear and high-order finite elements at the same number of degrees of freedom. 3-D numerical examples on irregular domains show that, at an accuracy of 5%, the new approach reduces the number of degrees of freedom by a factor of more than 1000 compared with linear finite elements with similar stencils. This leads to a huge reduction in computation time for the new approach at a given accuracy, and the reduction will be even greater if higher accuracy is needed, e.g., 1% or less.
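A one-dimensional toy version of the coefficient computation may help (an illustration only; the paper's stencils are multi-dimensional cut stencils): given the stencil points, the weights that annihilate the low-order terms of the local truncation error solve a small linear system of Taylor-moment conditions, for uniform and non-uniform points alike.

```python
import numpy as np
from math import factorial

def stencil_weights(offsets, deriv=2):
    """Weights w with sum_j w_j * u(x0 + offsets[j]) ~ u^(deriv)(x0).
    Imposing the Taylor-moment conditions
        sum_j w_j * offsets[j]**p / p! = delta_{p,deriv},  p = 0..n-1,
    kills the low-order local truncation error terms: a 1-D toy of
    choosing stencil coefficients by minimizing the truncation error."""
    o = np.asarray(offsets, dtype=float)
    n = len(o)
    V = np.array([[o[j] ** p / factorial(p) for j in range(n)]
                  for p in range(n)])
    rhs = np.zeros(n); rhs[deriv] = 1.0
    return np.linalg.solve(V, rhs)

h = 0.1
# Regular uniform stencil: recovers the classical (1, -2, 1)/h^2 weights.
print(stencil_weights([-h, 0.0, h]))
# Non-uniform "cut" stencil with a boundary point at distance 0.03:
print(stencil_weights([-0.03, 0.0, h, 2 * h]))
```

The same pattern scales to the paper's setting: uniform interior stencils admit closed-form weights, while each cut stencil near the boundary requires only a small dense solve of a few dozen equations.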
Deep learning has been successfully applied to many high-dimensional problems, including computer vision, speech recognition, and numerical PDEs. In this talk, we first introduce an explicit error characterization of deep network approximation. Second, we present connections between deep network approximation, Monte Carlo sampling, random orthogonal projection, and the Kolmogorov–Arnold representation theorem in relation to the curse of dimensionality. These connections lead to new error estimates for the approximation of multivariate functions by deep networks, for which the curse of dimensionality is lessened.

We propose a first-order energy stable linear semi-implicit method for solving the Allen-Cahn-Ohta-Kawasaki equation. By introducing a new nonlinear term in the Ohta-Kawasaki free energy functional, all the system forces in the dynamics are localized near the interfaces, which results in the desired hyperbolic tangent profile. In our numerical method, the time discretization is done by a stabilization technique in which an extra nonlocal but linear term is introduced and treated implicitly together with the other linear terms, while the remaining nonlinear and nonlocal terms are treated explicitly. The spatial discretization is performed by the Fourier collocation method with FFT-based fast implementations. Energy stability is proved for this method at both the semi-discrete and fully discrete levels. Numerical experiments indicate the force localization and the desired hyperbolic tangent profile produced by the new nonlinear term. We test the first-order temporal convergence rate of the proposed scheme. We also present the hexagonal bubble assembly as one type of equilibrium for the Ohta-Kawasaki model. Additionally, the two-thirds law relating the number of bubbles to the strength of the long-range interaction is verified, in agreement with theoretical studies.
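For the Allen-Cahn part alone, a stabilized linear semi-implicit Fourier step looks as follows (a simplified sketch: the Ohta-Kawasaki long-range term and the abstract's new localizing term are omitted, and the stabilization constant S, grid size, and parameters are assumptions).

```python
import numpy as np

# One stabilized semi-implicit step for u_t = eps^2*Lap(u) - (u^3 - u)
# with Fourier collocation on a periodic grid:
#   (1/dt + S - eps^2*Lap) u^{n+1} = u^n/dt + S*u^n - ((u^n)^3 - u^n),
# i.e. linear terms (including the stabilizer S*u) implicit, nonlinearity explicit.
n, L, eps, dt, S = 256, 2 * np.pi, 0.05, 1e-2, 2.0
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)         # Fourier wavenumbers
lap_sym = -(k[:, None] ** 2 + k[None, :] ** 2)     # symbol of the Laplacian

def step(u):
    rhs_hat = np.fft.fft2(u / dt + S * u - (u ** 3 - u))
    return np.real(np.fft.ifft2(rhs_hat / (1.0 / dt + S - eps ** 2 * lap_sym)))

rng = np.random.default_rng(1)
u = 0.05 * rng.standard_normal((n, n))             # small random initial data
for _ in range(200):
    u = step(u)
print("u range after 200 steps:", float(u.min()), float(u.max()))
```

The FFT makes each step cost O(n^2 log n) on an n-by-n grid, which is the point of the Fourier collocation discretization; the energy stability argument hinges on taking S large enough relative to the explicitly treated nonlinearity.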
In this talk, we propose a unified analysis of Bregman proximal first-order algorithms for convex minimization. This flexible and versatile class of algorithms includes many well-known gradient-based schemes such as gradient descent, projected gradient descent, and proximal gradient descent, and it offers enormous potential for tackling large-scale optimization problems arising in data science and a variety of other disciplines. Our approach, which depends on the Fenchel conjugate, yields novel proofs of the convergence rates of the Bregman proximal subgradient and gradient algorithms, as well as a new accelerated Bregman proximal gradient algorithm. We illustrate the effectiveness of Bregman proximal methods on two problems of great interest in data science, namely the D-optimal design and Poisson linear inverse problems.
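As a hedged sketch of the basic pattern (the entropy kernel, simplex constraint, and least-squares objective are illustrative choices, not the talk's applications): with the negative-entropy Bregman kernel, the Bregman proximal gradient step has a closed form, the familiar multiplicative mirror-descent update.

```python
import numpy as np

def bregman_prox_grad(grad_f, x0, steps=500, lr=0.1):
    """Bregman proximal gradient with kernel h(x) = sum_i x_i*log(x_i)
    on the probability simplex. Each step solves
        x+ = argmin_x { <grad_f(x_k), x> + (1/lr) * D_h(x, x_k) },
    whose closed form is the multiplicative update below."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x * np.exp(-lr * grad_f(x))   # mirror step in the dual (log) space
        x /= x.sum()                      # Bregman projection back to the simplex
    return x

# Toy smooth convex objective f(x) = 0.5*||A x - b||^2 over the simplex.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 5))
x_true = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
b = A @ x_true
x = bregman_prox_grad(lambda x: A.T @ (A @ x - b), x0=np.ones(5) / 5)
print("recovered:", np.round(x, 3))       # close to x_true
```

Choosing the kernel h to match the problem geometry (entropy for the simplex, Burg entropy for Poisson likelihoods) is what lets Bregman methods relax the global Lipschitz-gradient assumption required by their Euclidean counterparts.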
The speaker will introduce a new numerical method for first-order transport problems using the primal-dual weak Galerkin (PD-WG) finite element method recently developed in scientific computing. The PD-WG method is based on a variational formulation of the modeling equation in which the differential operator is applied to the test function, so that low regularity of the exact solution of the original equation suffices for computation. The PD-WG finite element method yields a symmetric system involving both the original equation for the primal variable and its dual for the dual variable (also known as the Lagrange multiplier). For the linear transport problem, it is shown that the PD-WG method offers numerical solutions that conserve mass locally on each element. Optimal-order error estimates in various norms are derived for the numerical solutions under weak regularity assumptions on the modeling equation. A variety of numerical results are presented to demonstrate the accuracy and stability of the new method.
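To indicate the structure only (a sketch in generic notation, not the talk's precise formulation): discretizing the transport equation Lu = f by moving the operator onto the test function and enforcing the constraint with a Lagrange multiplier produces a symmetric saddle-point system.

```latex
% Generic primal-dual structure of such a discretization (illustrative
% notation): u_h is the primal unknown, lambda_h the dual unknown
% (Lagrange multiplier), s(.,.) a stabilizer, and b(v, lambda) the weak
% form in which the differential operator acts on the test function.
\[
  \begin{cases}
    s(u_h, v) + b(v, \lambda_h) = 0 & \text{for all } v,\\[2pt]
    b(u_h, \mu) = (f, \mu) & \text{for all } \mu,
  \end{cases}
  \qquad\Longleftrightarrow\qquad
  \begin{pmatrix} S & B^{\mathsf T} \\ B & 0 \end{pmatrix}
  \begin{pmatrix} u_h \\ \lambda_h \end{pmatrix}
  =
  \begin{pmatrix} 0 \\ f \end{pmatrix},
\]
% a symmetric (indefinite) system coupling the primal equation with its
% dual, matching the abstract's description.
```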