Location: Department of Mathematics, Princeton University, Fine Hall, Room 314
Dates: August 10 to August 17, 2025
Support: NSF-FRG Collaboration on Fluids and Computer Assisted Proofs
Summer School Schedule
Monday, August 11
09:20-09:30 Opening remarks
09:30-10:20 Ching-Yao Lai (lecture 1)
10:30-11:20 Tristan Buckmaster (lecture 1)
11:30-12:20 Alexandru Ionescu (lecture 1)
12:30-14:15 Welcome lunch for all participants
14:30-15:20 Joel Dahne (lecture 1)
15:30-16:20 Stan Palasek (lecture 1)
Tuesday, August 12
09:30-10:20 Ching-Yao Lai (lecture 2)
10:30-11:20 Tristan Buckmaster (lecture 2)
11:30-12:20 Ching-Yao Lai (lecture 3)
14:30-15:20 Gonzalo Cao-Labora (lecture 1)
15:30-16:00 Coffee social for all participants
Wednesday, August 13
09:30-10:20 Ching-Yao Lai (lecture 4)
10:30-11:20 Alexandru Ionescu (lecture 2)
11:30-12:20 Ching-Yao Lai (lecture 5)
14:30-15:20 Yongji Wang (lecture 1)
15:30-16:00 Coffee social for all participants
Thursday, August 14
09:30-10:20 Alexandru Ionescu (lecture 3)
10:30-11:20 Tristan Buckmaster (lecture 3)
11:30-12:20 Joel Dahne (lecture 2)
12:30-14:15 Joint lunch for all participants
14:30-15:20 Stan Palasek (lecture 2)
Friday, August 15
09:30-10:20 Alexandru Ionescu (lecture 4)
10:30-11:20 Tristan Buckmaster (lecture 4)
11:30-12:20 Gonzalo Cao-Labora (lecture 2)
14:30-15:20 Yongji Wang (lecture 2)
15:30-16:00 Coffee social for all participants
Saturday, August 16
09:30-10:20 Tristan Buckmaster (lecture 5)
10:30-11:20 Alexandru Ionescu (lecture 5)
11:30-12:20 Concluding remarks and discussion
Titles and Abstracts
Tristan Buckmaster: Singularities in Fluid Dynamics
Abstract: This lecture series offers an overview of the mathematical techniques used to understand self-similar blow-up phenomena in fluid dynamics. We will also explore the integration of modern approaches, such as computer-assisted proofs and neural networks, within this framework.
Gonzalo Cao-Labora: Self-Similar Blowup from Approximate Profiles via Computer-Assisted Proofs
Abstract: We present a rigorous approach for proving singularity formation, originating from approximate, unstable, self-similar profiles obtained numerically. While we will focus on the Cordoba-Cordoba-Fontelos equation (a system where singularity formation around unstable profiles remains an open problem), the strategy and techniques are applicable to other equations. Our approach relies heavily on computer-assisted proofs to rigorously bound the residual of the profile and to understand the dynamics in its vicinity.
We will first provide an overview of this strategy, and then focus our attention on the crucial part: understanding the linear operator L around a self-similar profile. Notably, in our case L will exhibit a finite number of instabilities, so we will want to show exponential decay of exp(tL) on a stability subspace of finite codimension. To do this, we will explain how to select a Hilbert space in which the symmetric component L^sym can be decomposed into three parts: (1) a strictly negative form, (2) a finite matrix, and (3) a small error term. We will explain how the analytical treatment of (1) can be combined with computer-assisted treatments of (2) and (3). This combined approach allows us to rigorously conclude the desired results.
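To illustrate, purely schematically, how such a decomposition is used (the splitting L^sym = N + K + E, the constants delta, epsilon, delta_0, and the stability subspace X_s below are placeholder notation for this sketch, not the objects from the lectures): suppose that the analytical bound on the negative form, combined with computer-assisted bounds on the finite matrix and on the error term, yields
\[ L^{\mathrm{sym}} = \tfrac12 (L + L^{*}) = N + K + E, \qquad \langle N v, v\rangle \le -\delta\,\|v\|^{2}, \quad \operatorname{rank} K < \infty, \quad \|E\| \le \varepsilon, \]
\[ \langle L^{\mathrm{sym}} v, v\rangle \le -\delta_{0}\,\|v\|^{2} \quad \text{for all } v \in X_{s}, \quad \delta_{0} > 0. \]
Then, for a trajectory v(t) = exp(tL) v_0 that stays in X_s, the standard energy identity gives
\[ \frac{d}{dt}\,\|v(t)\|^{2} = 2\,\langle L^{\mathrm{sym}} v(t), v(t)\rangle \le -2\,\delta_{0}\,\|v(t)\|^{2}, \qquad \text{hence} \qquad \|\exp(tL)\, v_{0}\| \le e^{-\delta_{0} t}\,\|v_{0}\|, \]
which is the kind of exponential decay on a finite-codimension stability subspace described above.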
Joel Dahne: Self-similar Blowup for the Nonlinear Schrödinger Equation and the Complex Ginzburg-Landau Equation
Abstract: In recent work with Jordi-Lluís Figueras we prove the existence of solutions to the nonlinear Schrödinger equation and its extension, the complex Ginzburg-Landau equation, that blow up in finite time in a self-similar way. The proof follows a strategy used by Plecháč and Sverak in 2001 to numerically study these solutions: it reduces the problem to proving the existence of a solution to a certain ODE with prescribed behavior at zero and at infinity.
In these two lectures we will look at the history of the problem and the reduction to the ODE. We will then look at how a solution to this ODE can be constructed using rigorous numerical methods, giving us a proof of the self-similar blowup.
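To give a flavor of the (non-rigorous) numerical step behind such constructions, the sketch below, assuming only Python with NumPy, applies a shooting method to a toy profile equation with prescribed behavior at zero and at infinity: the real ground-state ODE Q'' - Q + Q^3 = 0 with Q'(0) = 0 and Q -> 0 at infinity, whose exact decaying solution Q(r) = sqrt(2) sech(r) makes the output easy to check. The complex-valued profile ODEs for NLS and complex Ginzburg-Landau treated in the lectures are different, and the actual proof replaces this floating-point bisection with rigorous, validated numerics.

import numpy as np

def shoot(a, r_max=20.0, dr=5e-3):
    # Integrate Q'' = Q - Q^3 with Q(0) = a, Q'(0) = 0 by classical RK4.
    # Return +1 if Q turns back upward while still positive (a too small),
    #        -1 if Q crosses zero (a too large).
    def f(y):
        q, p = y
        return np.array([p, q - q**3])       # (Q, Q')' = (Q', Q - Q^3)
    y = np.array([a, 0.0])
    for _ in range(int(r_max / dr)):
        k1 = f(y); k2 = f(y + 0.5*dr*k1); k3 = f(y + 0.5*dr*k2); k4 = f(y + dr*k3)
        y = y + (dr/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        if y[0] < 0.0:
            return -1                        # overshoot: fell through zero
        if y[1] > 0.0:
            return +1                        # undershoot: turned back upward
    return +1                                # undecided within r_max: treat as undershoot

lo, hi = 1.0, 2.0                            # bracket for the decaying solution
for _ in range(50):
    mid = 0.5*(lo + hi)
    if shoot(mid) > 0:
        lo = mid                             # too small: increase Q(0)
    else:
        hi = mid                             # too large: decrease Q(0)
print("shooting gives Q(0) ~", 0.5*(lo + hi), "  exact sqrt(2) =", np.sqrt(2.0))

The bisection works because initial values that are too small produce solutions that turn back upward, while values that are too large produce solutions that cross zero; a rigorous version would replace the floating-point arithmetic with interval enclosures and validated integration.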
Alexandru Ionescu: On the Regularity Theory of Incompressible Flows
Abstract: Our goal in this course is to review the regularity theory of the incompressible Euler and Navier-Stokes equations in 2 and 3 dimensions. We will focus on regularity criteria, global regularity results, the concept of criticality, fixed-point arguments, the existence of weak Leray solutions, and the problem of hydrodynamic stability of shear flows and vortices among solutions of the 2D incompressible Euler equations.
Ching-Yao Lai: Basics of Physics-Informed Neural Networks
Abstract: The use of neural networks in numerically solving PDEs is a rapidly growing area of scientific computing. This mini-course introduces the necessary background on neural networks, including the universal approximation property and backpropagation, as well as their powerful extension to finding approximate PDE solutions, namely physics-informed neural networks. We will briefly highlight the first example of their use in finding numerical blow-up solutions (Wang-Lai-Gomez-Serrano-Buckmaster, PRL 2023), and the associated opportunities and challenges.
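To make the residual-loss idea concrete, here is a minimal sketch of a physics-informed neural network, assuming Python with JAX (an assumption for illustration; it is not the code or framework used in the course). A small network u(x) is trained so that the automatic-differentiation residual of the toy problem u'(x) + u(x) = 0, u(0) = 1 on [0, 1] is small at collocation points; the exact solution exp(-x) makes the result easy to check.

import jax
import jax.numpy as jnp

def init_params(key, sizes=(1, 32, 32, 1)):
    # Random Gaussian initialization of a small fully connected network.
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m), jnp.zeros(n)))
    return params

def mlp(params, x):
    # Scalar input -> scalar output, tanh activations on hidden layers.
    h = jnp.asarray(x).reshape(1)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def loss(params, xs):
    # "Physics" loss: mean squared residual of u' + u = 0 at collocation
    # points, plus a penalty enforcing the initial condition u(0) = 1.
    u = lambda x: mlp(params, x)
    residual = jax.vmap(jax.grad(u))(xs) + jax.vmap(u)(xs)
    return jnp.mean(residual ** 2) + (u(0.0) - 1.0) ** 2

@jax.jit
def step(params, xs, lr=1e-2):
    # One step of plain gradient descent on the PINN loss.
    grads = jax.grad(loss)(params, xs)
    return [(W - lr * gW, b - lr * gb) for (W, b), (gW, gb) in zip(params, grads)]

params = init_params(jax.random.PRNGKey(0))
xs = jnp.linspace(0.0, 1.0, 64)              # collocation points
for _ in range(5000):
    params = step(params, xs)
print("u(1) ~", mlp(params, 1.0), "  exact exp(-1) =", jnp.exp(-1.0))

Here the initial condition is enforced by a soft penalty; in practice the relative weighting of the residual and boundary terms, the choice of collocation points, and the optimizer all strongly affect the accuracy of the learned solution.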
Stan Palasek: Non-Uniqueness for the Navier-Stokes Equations from Critical Data
Abstract: These lectures concern the question of non-uniqueness for the incompressible Navier-Stokes equations. Specifically, we ask whether critical initial data can give rise to non-unique smooth solutions obeying an energy inequality. We will survey some history, including the computer-assisted proposal of Jia and Sverak. Then we introduce a new approach that leads to a full resolution of the problem in the context of a dyadic model of the Navier-Stokes equations. Finally, we present some details of a recent construction (joint with M. Coiculescu) of an infinite-energy initial datum at critical regularity for which uniqueness of smooth solutions breaks down.
Yongji Wang: Advanced Techniques for Physics-Informed Neural Networks
Abstract: Over the past several years, Physics-Informed Neural Networks (PINNs) have evolved into a powerful tool for solving complex scientific problems. My two-part lecture series builds upon the foundational concepts introduced by Dr. Ching-Yao Lai to provide an in-depth look at advanced PINN methodologies.
The first lecture will provide a detailed survey of state-of-the-art PINN techniques. The primary focus will be on methods crucial for obtaining high-fidelity solutions to partial differential equations. We will explore advanced architectures and training strategies, including a deep dive into multi-stage neural networks and their role in enhancing model accuracy and convergence.
The second lecture bridges the gap between theory and application. It will feature a live, guided implementation of the advanced methods discussed in the first session. I will work through two non-trivial case studies, applying these techniques to solve complex PDEs from start to finish. The goal is to solidify comprehension and equip students with the practical skills needed to leverage advanced PINNs in their own research.
Participants
- Akshat Agarwal, Princeton University, undergraduate student
- Erik Bahnson, Rutgers University, graduate student
- Sacha Ben-Arous, ENS Paris-Saclay, undergraduate student
- Lydia Boubendir, Princeton University, undergraduate student
- Hyungjun Choi, Princeton University, graduate student
- Matei Coiculescu, Princeton University, graduate student
- Grayson Davis, New York University, graduate student
- Ansh Desai, University of Delaware, undergraduate student
- John Driscoll, University of Massachusetts - Dartmouth, undergraduate student
- Hetian Fu, University of Michigan - Ann Arbor, undergraduate student
- Nivika Gandhi, Columbia University, undergraduate student
- Joshua George, University of Waterloo, graduate student
- Miguel Guadarrama Ayala, McGill University, graduate student
- MyeongSeo Kim, Johns Hopkins University, graduate student
- Olena Kovalenko, Princeton University, undergraduate student
- Chen Li, New York University, graduate student
- Olivia Luo, University of California - Los Angeles, undergraduate student
- Qi Ma, Rutgers University, graduate student
- Leo Mokriski, Emory College of Arts and Sciences, undergraduate student
- Joana Pech-Alberich, Universitat Politecnica de Catalunya, undergraduate student
- Kevin Peng, University of California - Berkeley, undergraduate student
- Mahnav Petersen, University of Chicago, undergraduate student
- Krishna Pothapragada, Massachusetts Institute of Technology, undergraduate student
- Ishaan Sinha, Harvard University, undergraduate student
- Noah Stevenson, Princeton University, graduate student
- Max Sundblad, Uppsala University, undergraduate student
- Alexandria Tan, University of Washington, undergraduate student
- Achyuta Telekicherla-Kandalam, University of Minnesota, undergraduate student
- Annie Wei, Rutgers University, undergraduate student
- Sara Wilson, University of Pittsburgh, undergraduate student
- Xuan Xu, Princeton University, undergraduate student
- Kaiwen Zhang, New York University, graduate student