Events
Department of Mathematics and Statistics
Texas Tech University
Tor-persistence is the claim that $\operatorname{Tor}$ of a module with itself vanishes only if the module has finite projective dimension. Work of Avramov, Iyengar, Nasseh, Sather-Wagstaff, and other authors has shown that modules over certain rings are Tor-persistent. In this work, we study Tor-persistence for the generic module over determinantal rings, specifically for the hypersurface $R$ defined by the determinant of a generic matrix. We give an explicit proof that $\operatorname{Tor}^R_2(M,M)$ is never zero, where $M$ is the cokernel of the generic matrix. We will also discuss more general determinantal rings.
Turbulence occurs in our daily lives. Kolmogorov's zeroth law of turbulence from 1941 has been supported by numerical analysis under the name of "anomalous dissipation"; closely related is Onsager's famous conjecture of 1949. The magnetohydrodynamics (MHD) system has applications in astrophysics, geophysics, and plasma physics, and has been studied since the 1940s, beginning with the pioneering work of Alfven. Among researchers on MHD turbulence there is a long-standing belief, namely Taylor's conjecture of 1974, concerning the conservation of magnetic helicity in the infinite-conductivity limit. Via the technique of convex integration, we prove non-uniqueness in law of the 3D MHD system forced by random noise. The solution we construct has the property that its total energy, magnetic helicity, and cross helicity all grow twice as fast as those of the solution constructed via the classical approach, namely the Galerkin approximation. Consequently, it represents a counterexample to Taylor's conjecture in the stochastic setting.
This week's Analysis seminar may be attended at 4:00 PM CDT (UT-5) via this Zoom link.
Meeting ID: 966 0483 8930
Passcode: 871299
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability, since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn state-of-the-art image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones), enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset-specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on.
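As a rough illustration only (not the authors' implementation), the caption-matching pre-training task described above can be sketched as a symmetric contrastive loss over a batch of paired image and text embeddings; the function names and the temperature value here are illustrative assumptions:

```python
import numpy as np

def caption_matching_loss(image_feats, text_feats, temperature=0.07):
    """Symmetric contrastive loss over an (N, N) similarity matrix:
    each image must pick out its own caption, and vice versa."""
    # L2-normalize so the dot product is cosine similarity
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # (N, N) similarity scores
    labels = np.arange(len(logits))              # matching pairs lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()      # log-prob of the true pairing

    # average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly matched, mutually orthogonal embeddings the loss is near zero; with random embeddings it is near log N, which is why larger batches make the matching task harder and the signal richer.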
Please virtually attend this week's Statistics seminar at 4:00 PM (UT-5) via this Zoom link.
Meeting ID: 935 0068 2496
Passcode: 001385
I will present a new proof of Berwick-Evans, Boavida de Brito, and Pavlov's theorem that for any smooth manifold A and any sheaf X on the site of smooth manifolds, the mapping sheaf Hom(A,X) has the correct homotopy type. The talk will focus on the main innovation of this proof, namely the use of test categories to construct homotopical calculi on locally contractible ∞-toposes. With this tool in hand, I will explain how a suitable homotopical calculus may be constructed on the ∞-topos of sheaves on the site of smooth manifolds using a new diffeology on the standard simplices due to Kihara. The main theorem then follows by an argument similar to the classical fact that for any CW-complex A and any topological space X, the set of continuous maps Hom(A,X) equipped with the compact-open topology models the mapping homotopy type map(A,X).

In recent years, there has been increasing interest in applying deep learning to geophysical and medical data inversion. However, direct application of end-to-end data-driven approaches to inversion has quickly shown limitations in practical implementations. Indeed, due to the lack of prior knowledge of the objects of interest, trained deep neural networks very often have limited generalization. In this talk, we introduce a new methodology that couples model-based inverse algorithms with deep learning for two typical types of inversion problems. In the first part, we present an offline-online computational strategy coupling classical least-squares computational inversion with modern deep-learning-based approaches to full waveform inversion, achieving advantages that cannot be achieved with either component alone. In the second part, we present an integrated data-driven and model-based iterative reconstruction framework for joint inversion problems.
The proposed method couples the supplementary data with the partial differential equation model to make the data-driven modeling process consistent with the model-based reconstruction procedure. We also characterize the impact of learning uncertainty on the joint inversion results for one typical inverse problem.
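The general pattern of interleaving a model-based update with a data-driven one can be sketched generically as follows. This is a minimal illustration under stated assumptions, not the speakers' method: the forward operator A is a plain matrix, and a simple local-averaging smoother stands in for a trained network acting as the learned prior; all names are hypothetical:

```python
import numpy as np

def learned_prior(x):
    # Stand-in for a trained network: a local-averaging smoother
    # playing the role of a learned regularizer.
    return np.convolve(x, np.ones(3) / 3, mode="same")

def coupled_inversion(A, y, n_iters=200, step=None):
    """Model-based iterative reconstruction with a data-driven prior:
    alternate a gradient step on the physics-based misfit ||Ax - y||^2
    with a pass through the (here, mock) learned prior."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step for the gradient update
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)         # model-based (PDE/physics) update
        x = learned_prior(x)                     # data-driven update
    return x
```

The point of the alternation is exactly the consistency the abstract describes: the physics-based step keeps the iterate tied to the measurement model, while the learned step injects prior knowledge the model alone lacks.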
When: 4:00 pm (Lubbock's local time is GMT -5)
Where: room MATH 011 (basement)
ZOOM details:
- Choice #1: use this link
Direct link that embeds the meeting ID and passcode.
- Choice #2: join meeting using this link
Join Meeting, then enter the Meeting ID and Passcode by hand:
* Meeting ID: 944 4492 2197
* Passcode: applied
Mathematics Education Math Circle
Hung Tran, Department of Mathematics and Statistics, Texas Tech University
Thursday, Apr. 18, 6:30 PM, MA 108
Math Circle spring poster