Colloquia
Department of Mathematics and Statistics
Texas Tech University
Quantitative data science approaches, including mathematical modeling, statistics, and, more recently, machine learning, have been crucial in understanding the characteristics of SARS-CoV-2 transmission since the very beginning of the outbreak, thus providing a rational basis for policy decision making. In this talk, I will first present our work on quantifying key epidemiological parameters of the SARS-CoV-2 outbreak in Wuhan in late 2019 and early 2020 using limited and noisy data. I will discuss how the rate of spread during the Wuhan outbreak was underestimated in earlier studies, and how we reached the conclusion in early February 2020 that, because of the extremely rapid rate of spread, strong social distancing efforts would be necessary to stop transmission (contrary to the dominant expert opinion at the time). I will then present our recent work on estimating the rate of spread of Omicron in China after the 'zero-COVID' policy was abruptly abandoned at the end of 2022. We showed that, because of the extraordinarily rapid spread of Omicron in a naïve population, more than 1 billion people were infected within a few weeks. Time permitting, I will briefly discuss our recent work using machine learning approaches to understand the dynamics of SARS-CoV-2 variants.
Object data analysis is an approach to statistics that represents each observation in the simplest form that retains all relevant characteristics and then exploits the geometry of the resulting sample space to analyze the data. This approach is typically used for data that are non-Euclidean and/or high-dimensional; such data arise in a wide variety of fields, including medical imaging, computer vision, the geosciences, and bioinformatics. In this talk, I will summarize many of the problems involving object data that I have studied over the course of my career. These problems focus on the development of statistical models and methodology for analyzing shapes of planar contours, multivariate data, symmetric positive-definite matrices, and directional data. My work on these problems has been fundamentally linked to the training and mentoring of both graduate and undergraduate research students.
The Statistics seminar group encourages attendance of this Departmental T&P Colloquium, which may be attended virtually at 3:30 PM (UT-5) via this Zoom link.
Meeting ID: 939 5734 3379
Passcode: Ellingson
I will explain my recent work on functorial field theory, including the proof of the Baez-Dolan cobordism hypothesis and the proof of a conjecture by Freed and Lawrence on locality of fully extended field theories.
The talk will include the necessary background on topology and field theory.
The Geometry and Topology and Quantum Homotopy seminar groups encourage attendance of this Departmental T&P Colloquium, which may be attended virtually at 3:00 PM (UT-5) via this Zoom link.
Meeting ID: 956 3178 2128
Passcode: Pavlov
Strategies for selecting hedging measures that respect certain market values of cash flows while maintaining control on their distance from physical measures are proposed, advocated, and implemented. The hedging criterion is the maximization of a conservative valuation of the hedged position; such values are modeled as nonlinear expectations based on measure distortions. Measure selections and conservative-value-maximizing hedges are illustrated for options on SPY and nine sector ETFs.
This Departmental Colloquium is sponsored by Mathematical Finance and may be attended at 2 PM CDT (UT-5) via this Zoom link.
Meeting ID: 998 7917 6042
Passcode: 800990
Physics-Informed Neural Networks (PINNs) have received much attention recently due to their potential for high-performance computation of complex physical systems, including data-driven computing, systems with unknown parameters, and more. The idea of PINNs is to encode the governing equations, together with the boundary and initial conditions, in the loss function of a neural network. PINNs combine the efficiency of data-based prediction with the accuracy and insight provided by physical models. However, applying these methods to predict the long-term evolution of systems with little friction, such as many systems encountered in space exploration, oceanography/climate, and other fields, requires extra care: errors tend to accumulate, and the results can quickly become unreliable.
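As a toy illustration of this composite-loss idea (a minimal sketch, not the speaker's code), the example below fits a cubic polynomial ansatz, standing in for a neural network, to the hypothetical model problem u'(t) = -u(t), u(0) = 1; the collocation points, penalty weight, and learning rate are all illustrative assumptions.

```python
import math

# Toy illustration of the PINN loss idea on the model problem
#   u'(t) = -u(t),  u(0) = 1,  t in [0, 1],
# with a cubic polynomial ansatz standing in for a neural network.
# The loss combines a mean-squared PDE residual with an
# initial-condition penalty, the kind of composite loss described above.

ts = [i / 20 for i in range(21)]            # collocation points on [0, 1]

def u(c, t):                                # the ansatz
    return c[0] + c[1]*t + c[2]*t**2 + c[3]*t**3

def du(c, t):                               # its exact t-derivative
    return c[1] + 2*c[2]*t + 3*c[3]*t**2

def loss(c):
    residual = sum((du(c, t) + u(c, t))**2 for t in ts) / len(ts)
    ic = (u(c, 0.0) - 1.0)**2               # initial-condition term
    return residual + 10.0 * ic

# Plain gradient descent with forward-difference gradients.
c, lr, h = [0.0] * 4, 0.03, 1e-6
for _ in range(10_000):
    base = loss(c)
    grad = []
    for j in range(4):
        cp = list(c)
        cp[j] += h
        grad.append((loss(cp) - base) / h)
    c = [cj - lr * gj for cj, gj in zip(c, grad)]

# The trained ansatz should stay close to the exact solution exp(-t).
err = max(abs(u(c, t) - math.exp(-t)) for t in ts)
print(round(err, 4))
```

Note that nothing in this loss controls long-time behavior, which is exactly why the low-friction systems mentioned above need the extra structure discussed next.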
We provide a solution to the problem of data-based computation of Hamiltonian systems utilizing symmetry methods. Many Hamiltonian systems with symmetry can be written as Lie-Poisson systems, where the underlying symmetry defines the Poisson bracket. For data-based computing of such systems, we design Lie-Poisson neural networks (LPNets). We consider the Poisson bracket structure primary and require it to be satisfied exactly, whereas the Hamiltonian, which is known only from physics, may be satisfied approximately. By design, the method preserves all special integrals of the bracket (Casimirs) to machine precision. LPNets yield an efficient and promising computational method for many particular cases, such as rigid body and satellite motion (the case of the SO(3) group), Kirchhoff's equations for an underwater vehicle (the SE(3) group), and others.
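The Casimir-preservation mechanism can be illustrated with a minimal sketch (an illustrative stand-in, not the LPNets code): for the rigid body, any update that moves the angular momentum by a rotation conserves the Casimir |Π|² exactly, no matter how the rotation is chosen, so even random "learned" updates keep it at round-off level.

```python
import math, random

# Illustration: on so(3)*, rigid-body dynamics moves the angular momentum
# Pi by rotations, so the Casimir C(Pi) = |Pi|^2 is conserved.  Any update
# built from rotations -- the structural idea behind LPNets -- preserves C
# to machine precision; here random angles stand in for a learned model.

def rotate(v, axis, angle):
    """Rodrigues' rotation formula: rotate v about a unit axis by angle."""
    ax, ay, az = axis
    c, s = math.cos(angle), math.sin(angle)
    dot = ax*v[0] + ay*v[1] + az*v[2]
    cross = (ay*v[2] - az*v[1], az*v[0] - ax*v[2], ax*v[1] - ay*v[0])
    return tuple(c*v[i] + s*cross[i] + (1 - c)*dot*a
                 for i, a in enumerate((ax, ay, az)))

def normalize(v):
    n = math.sqrt(sum(x*x for x in v))
    return tuple(x / n for x in v)

random.seed(0)
pi = (1.0, 2.0, 3.0)                      # initial angular momentum
casimir0 = sum(x*x for x in pi)

for _ in range(10_000):                   # 10k "time steps"
    axis = normalize(tuple(random.gauss(0, 1) for _ in range(3)))
    pi = rotate(pi, axis, random.uniform(-0.1, 0.1))

drift = abs(sum(x*x for x in pi) - casimir0)
print(drift)                              # stays at round-off level
```

The point is structural: conservation comes from the group action itself, not from the accuracy of any trained component.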
Joint work with Chris Eldred (Sandia National Lab), Francois Gay-Balmaz (CNRS and ENS, France), and Sophia Huraka (U Alberta). The work was partially supported by an NSERC Discovery grant.
This Departmental Colloquium is sponsored by the Applied Math seminar group.
When: 3:00 pm (Lubbock's local time is GMT -5)
Where: room MATH 011 (basement)
ZOOM details:
- Choice #1: use this link
A direct link that embeds the meeting ID and passcode.
- Choice #2: join the meeting using this link
Join the meeting, then enter the meeting ID and passcode by hand:
* Meeting ID: 968 6501 7586
* Passcode: Applied
Despite the popularity of classical goodness-of-fit tests such as Pearson's chi-squared and Kolmogorov-Smirnov, their applicability often faces serious challenges in practice. For instance, in the binned-data regime, low counts may invalidate the asymptotic results, while excessively large bins may lead to a loss of power. In the unbinned-data regime, tests such as Kolmogorov-Smirnov and Cramér-von Mises are not distribution-free when the models under study are multivariate and/or involve unknown parameters; as a result, the distribution of the test statistic must be simulated on a case-by-case basis. In this talk, I will discuss a testing strategy that overcomes these shortcomings and equips experimentalists with a novel tool for performing goodness-of-fit tests while substantially reducing computational costs.
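A minimal sketch of the case-by-case simulation mentioned above (illustrative only; sample sizes and replication counts are invented): when a normal model is fitted with estimated mean and standard deviation, the Kolmogorov-Smirnov statistic is no longer distribution-free, so a Lilliefors-type Monte Carlo builds its null distribution.

```python
import math, random

# When parameters are estimated (here: a normal with fitted mean/sd), the
# KS statistic is not distribution-free, so its null distribution is
# simulated by Monte Carlo -- a Lilliefors-type test.

def norm_cdf(x, mu, sd):
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def ks_stat(sample):
    """KS distance between the empirical CDF and the fitted normal CDF."""
    n = len(sample)
    mu = sum(sample) / n
    sd = math.sqrt(sum((x - mu)**2 for x in sample) / (n - 1))
    xs = sorted(sample)
    return max(max(abs((i + 1)/n - norm_cdf(x, mu, sd)),
                   abs(i/n - norm_cdf(x, mu, sd)))
               for i, x in enumerate(xs))

random.seed(1)
n, reps = 50, 2000

# Null distribution of the statistic, simulated case by case.
null = sorted(ks_stat([random.gauss(0, 1) for _ in range(n)])
              for _ in range(reps))
crit = null[int(0.95 * reps)]             # Monte Carlo 5% critical value

# A clearly non-normal sample (exponential) should typically exceed it.
obs = ks_stat([random.expovariate(1.0) for _ in range(n)])
print(crit, obs)
```

Every new model, dimension, or parameter-estimation scheme requires rerunning such a simulation, which is the computational burden the proposed strategy is designed to avoid.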
I will discuss some recent developments in understanding the connections between three major subjects in modern set theory: large cardinals, forcing axioms, and determinacy. Large cardinals form a hierarchy of axioms extending the standard axioms of Zermelo-Fraenkel with choice (ZFC), and every known natural theory can be interpreted by one of the large cardinal axioms. Forcing axioms form another hierarchy, generalizing the Baire Category Theorem, and are widely used in applications to other fields such as functional analysis, topology, and, more recently, C*-algebras. The Axiom of Determinacy and its generalizations postulate that complicated sets of reals have nice regularity properties; they are incompatible with the Axiom of Choice and have been studied extensively by descriptive set theorists and inner model theorists. I will focus in particular on recent work of Woodin, Steel, Sargsyan, and myself on constructing models of determinacy from large cardinals and from forcing axioms such as the Proper Forcing Axiom (PFA) and Martin's Maximum (MM). This work makes significant progress on the Inner Model Problem, one of the most central problems in set theory. Some of the work discussed here is supported by NSF CAREER grant DMS-1945592.
We model the effects of a constant bias electric field on a thin sample of a bent-core liquid crystal in the ferroelectric smectic A-type phase by considering a small-thickness limit of a three-dimensional phenomenological energy functional. We show that, under proper rescaling, the electric self-interactions give rise to boundary terms, and we present numerical simulations based on the reduced model. This is joint work with Carlos Garcia-Cervera and Sookyung Joo.
This Colloquium is sponsored by the TTU Graduate Student Chapter of SIAM.
In this talk I will survey recent progress in elucidating the subtle yet decisive role geometry plays in the context of boundary value problems for elliptic operators. Questions we will address include:
What kind of analysis can be carried out in a given geometric setting? How can one analytically quantify features that are inherently geometric, such as the flatness or bending of a given surface?
In the past few years, locating the source of a tumor has become an influential problem in mathematical oncology, playing an important role in delineating source regions. In this talk, I will focus on two themes within this line of work.
The first theme centers on the development of a new PDE-based inversion scheme, the Variational Quasi-Reversibility Method, designed to tackle the source localization problem in quasilinear form. The approach relies on a special transformation of the inverse problem, resulting in a forward-like problem to which various forward solvers can be applied. I will discuss both the weak solvability and the strong convergence of the scheme via energy analysis, and present some numerical results.
Expanding the source localization model involves incorporating additional design variables. In this more advanced scenario, however, certain forward models, including the age-dependent Gompertz model, remain unsolved. In the second theme, I will present a new explicit Fourier-Klibanov method for this forward problem. A stability analysis of the proposed scheme will be reported, along with some numerical results.
Anomaly detection in dynamic networks is one of the most actively developing areas of network science and its applications. In this study, we invoke the machinery of topological data analysis (TDA), in particular persistent homology, together with functional data depth to evaluate anomalies in weighted dynamic networks. The proposed method is based on higher-dimensional topological structures, in particular the persistence diagram and its vector representations. Intuitively, this TDA- and data-depth-based method (TDA-Depth) evaluates the patterns and dynamics of the networks at multiple scales and systematically distinguishes regular from abnormal network snapshots in a dynamic network. In synthetic experiments, the TDA-Depth method outperforms state-of-the-art methods. We also evaluate our method on two real weighted dynamic networks; in both datasets, we demonstrate that it effectively identifies anomalous network snapshots.
This Departmental Candidate Colloquium is at 2:00 PM (UT-6) and may be attended live in ESB1 120, and virtually via this Zoom link.
Meeting ID: 939 5469 1292
Passcode: Dey
Free probability theory was initially designed to study certain classes of von Neumann algebras in the theory of operator algebras. Thanks to the groundbreaking work of Voiculescu, it has turned out to be the most suitable framework for studying universality laws in random matrix theory. These limiting laws are encoded in abstract operators, called free random variables. The interactions between abstract operator algebras and random matrix theory are very fruitful, opening the door to applications in a wide range of areas, including high-dimensional statistics, large neural networks, wireless communication, and quantum information theory.
In this talk, I will report some exciting progress on the limiting distributions of many deformed random matrix models. We calculated the Brown measures of a large family of random variables in free probability theory, which allows us to investigate many new non-Hermitian random matrix models. Our recent work surpasses all existing results on this topic and unifies previous methods. I will discuss applications to random networks, high-dimensional statistics, and outliers of large random matrix models.
This Departmental Job Candidate Colloquium may be attended virtually at 2:00 PM (UT-6) via this Zoom link.
Understanding and effectively modeling the volatility of speculative assets play a crucial role in making informed investment decisions. As Bitcoin is increasingly perceived as a potential alternative to traditional fiat currencies, its unique volatility characteristics have become a focal point for investors. It is therefore imperative to gain a comprehensive understanding of, and employ suitable models to capture, the dynamics governing Bitcoin's volatility.
In this presentation, we analyze Bitcoin's volatility using double subordinated models. Our primary focus is on introducing a double subordinated Lévy process, the Normal Double Inverse Gaussian (NDIG), to model the time series properties of the cryptocurrency. Additionally, we present an innovative arbitrage-free option pricing model based on the NDIG process, offering a fresh perspective on the valuation of Bitcoin.
Within the framework of this model, we derive two distinct measures of Bitcoin volatility. The first combines NDIG option pricing with the Chicago Board Options Exchange VIX model, yielding an implied volatility measure that reflects the perspective of options traders. The second considers volatility under the real-world measure, reflecting the viewpoint of spot traders and utilizing an intrinsic time formulation.
Both volatility measures are systematically compared to a historical, standard deviation-based volatility metric. Notably, with appropriate linear scaling, the NDIG process accurately captures the observed in-sample volatility of Bitcoin. This presentation aims to contribute valuable insights into understanding and modeling Bitcoin's volatility dynamics within the framework of double subordinated models.
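As a hedged illustration of the subordination idea underlying such models (the speaker's actual NDIG construction is not reproduced here, and all parameter values below are invented), the sketch draws normal inverse Gaussian (NIG) increments by running a Brownian motion for an inverse-Gaussian random "business time"; subordinating the time change once more would give a double subordinated law.

```python
import math, random

# Single subordination, the building block of double subordinated models:
# a Brownian motion with drift evaluated at an inverse-Gaussian random time
# yields a normal inverse Gaussian (NIG) variate.

def inverse_gaussian(mu, lam):
    """Michael-Schucany-Haas sampler for IG(mu, lam)."""
    y = random.gauss(0, 1) ** 2
    x = mu + mu*mu*y/(2*lam) - (mu/(2*lam)) * math.sqrt(4*mu*lam*y + (mu*y)**2)
    return x if random.random() <= mu / (mu + x) else mu*mu / x

def nig_increment(drift, vol, mu, lam):
    """One NIG log-return: Brownian motion with drift run for an IG time."""
    t = inverse_gaussian(mu, lam)           # random 'business time'
    return drift * t + vol * math.sqrt(t) * random.gauss(0, 1)

random.seed(7)
returns = [nig_increment(0.0, 0.2, 1.0, 2.0) for _ in range(100_000)]
mean = sum(returns) / len(returns)
var = sum(r*r for r in returns) / len(returns) - mean**2
print(mean, var)
```

Because the IG time has mean mu, the sample variance sits near vol² · mu while the random time change produces the heavy tails that motivate subordinated models for Bitcoin returns.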
This Departmental Job Candidate Colloquium is sponsored by the Mathematical Finance seminar group and may be attended virtually at 2:00 PM (UT-6) via this Zoom link.
This Departmental Job Candidate Colloquium is sponsored by the Statistics seminar group and may be attended virtually at 2:00 PM (UT-6) via this Zoom link.
Variational methods have formed the foundation of classical mechanics for several hundred years. In recent years, these methods have become much more powerful through the application of geometric approaches. In this lecture, I will show how geometric methods can give deep insights into seemingly disconnected problems using the same mathematical principles. After a general and gentle introduction, I will illustrate the method on the examples of modeling figure skating (a system with a nonholonomic constraint) and fluid-structure interactions, in particular the dynamics of a porous medium containing an incompressible fluid (two media coupled through the incompressibility constraint). I will also outline the further potential of the method by briefly mentioning new applications to computations based on physics-based neural networks. Finally, I will discuss the limitations of these methods, that is, what progress can be achieved by algorithmic thinking alone and at what point ingenuity and creativity must take over.
This Departmental Job Candidate Colloquium is sponsored by the Applied Math seminar group and may be attended in person in ESB1-120 and virtually at 2:00 PM (UT-6) via this Zoom link. This interview will be available in the TTU Mediasite Catalog.
Meeting ID: 947 9013 5459
Passcode: Math