Events
Department of Mathematics and Statistics
Texas Tech University
The purpose of this talk is to give one example of a regularizing property of noise in a partial differential equation. The Hall-magnetohydrodynamics system forced by Lévy noise serves as the main example.
Global well-posedness of a partial differential equation in a certain Banach space informally means, in particular, that given any initial data in that Banach space, there exists a unique solution in that Banach space for all time. When a partial differential equation is not known to be globally well-posed, the second-best result may be a ``small initial data'' result; i.e., there exists a positive constant such that for any initial data in that Banach space with norm less than this constant, a unique solution exists for all time. It is well known that for non-diffusive equations in fluid mechanics, such a ``small initial data'' result is very difficult to obtain. In particular, obtaining such a result for the deterministic Euler equations is a completely open problem, to the best of the speaker's knowledge. Nevertheless, if the equation is forced by a certain noise, then for such non-diffusive equations, including the Euler equations, ``small initial data'' results become attainable. Therefore, a noise (random forcing term) can display a regularizing effect on the solution. In this talk I will explain the idea behind this phenomenon.
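The ``small initial data'' notion described above can be written schematically as follows (a generic template in the abstract's notation, not the talk's precise theorem; the space $X$ and solution class are placeholders):

```latex
% Schematic ``small initial data'' global well-posedness statement:
% for some Banach space X, there is a smallness threshold \epsilon > 0
% below which every initial datum launches a unique global solution.
\exists\, \epsilon > 0 \;\text{such that}\;
\|u_{0}\|_{X} < \epsilon
\;\Longrightarrow\;
\exists!\, u \in C\bigl([0,\infty); X\bigr)
\;\text{solving the PDE with}\; u(0) = u_{0}.
```

Global well-posedness would instead require this conclusion for every $u_{0} \in X$, with no threshold $\epsilon$.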
Modern classification and regression tasks depend on powerful techniques and models from machine learning. Despite their predictive power on real-world inputs, these models exhibit remarkable vulnerability to small perturbations. Indeed, a smart "adversary", perhaps just nature behaving mischievously, can usually contort images, sounds, or other data in a small, humanly imperceptible fashion that causes a machine learner to incorrectly predict or classify it. Salient yet surprising examples of this phenomenon include pedestrians suddenly crossing the street in front of a self-driving car, anomalous defects in manufactured goods, and the tricking of the iPhone's facial recognition. In this talk, we'll introduce the basics of neural networks and then discuss the mathematics of generating and defending against adversarial examples. The defense portion will emphasize adversarial training, the training-time robustification of learners against adversarial examples.

Wednesday Nov. 20, 3:00 PM, Math 111
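To make the idea of a small, loss-increasing perturbation concrete, here is a minimal sketch of the fast gradient sign method applied to a linear binary classifier with logistic loss. Everything here (the classifier, weights, and step size `eps`) is illustrative and not taken from the talk:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method for f(x) = w.x + b with logistic loss.

    y is the true label in {-1, +1}. The logistic loss is
    log(1 + exp(-y * (w.x + b))); stepping in the sign of its gradient
    with respect to x pushes the classifier toward a mistake while
    changing each coordinate of x by at most eps.
    """
    margin = y * (np.dot(w, x) + b)
    # d/dx of log(1 + exp(-margin)) = -y * sigmoid(-margin) * w
    grad = -y * w / (1.0 + np.exp(margin))
    return x + eps * np.sign(grad)

# A correctly classified point can be flipped by a small perturbation.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.3, 0.1])   # w.x + b = 0.1 > 0, so predicted label +1
y = 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.2)
print(np.dot(w, x) + b)      # positive: original prediction is correct
print(np.dot(w, x_adv) + b)  # negative: the perturbed point is misclassified
```

Adversarial training, in this sketch, would mean generating `x_adv` for each training point and fitting the model on the perturbed data as well.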
Algebra and Number Theory: No Seminar
In this talk, we propose a unified analysis of Bregman proximal first-order algorithms for convex minimization. This flexible and versatile class of algorithms includes many well-known gradient-based schemes such as gradient descent, projected gradient descent, and proximal gradient descent. This algorithmic class offers enormous potential to tackle large-scale optimization problems arising in data science and a variety of disciplines. Our approach, which depends on the Fenchel conjugate, yields novel proofs of the convergence rates of the Bregman proximal subgradient and gradient algorithms, and a new accelerated Bregman proximal gradient algorithm. We illustrate the effectiveness of Bregman proximal methods on two problems of great interest in data science, namely the D-optimal design and Poisson linear inverse problems.
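As one concrete instance of the Bregman proximal gradient family: with the negative-entropy kernel on the probability simplex, the Bregman proximal step for a linear objective reduces to the exponentiated-gradient (mirror descent) update. The following minimal sketch is illustrative only and is not the algorithm or analysis from the talk:

```python
import numpy as np

def bregman_prox_gradient(c, steps=50, t=1.0):
    """Minimize <c, p> over the probability simplex by mirror descent.

    Each iteration solves argmin_p <c, p> + (1/t) * D_KL(p, p_k),
    the Bregman proximal step with Kullback-Leibler divergence, whose
    closed form is p_{k+1} proportional to p_k * exp(-t * c).
    """
    p = np.full(len(c), 1.0 / len(c))  # start at the uniform distribution
    for _ in range(steps):
        p = p * np.exp(-t * c)  # Bregman proximal (exponentiated-gradient) step
        p /= p.sum()            # renormalize back onto the simplex
    return p

c = np.array([3.0, 1.0, 2.0])
p = bregman_prox_gradient(c)
print(p)  # mass concentrates on index 1, the coordinate with smallest cost
```

The appeal in large-scale settings is that the update is multiplicative and normalization replaces a Euclidean projection, which is exactly the kind of structure Bregman proximal methods exploit in problems such as Poisson inverse problems and D-optimal design.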
Math Circle Fall Poster