Weinan E

Princeton University
Princeton, NJ 08544-1000 U.S.A.
Phone: (609)258-3683 ~ Fax: (609)258-1735
weinan@math.princeton.edu


Slides of the talk at the SIAM-CSE meeting, "AI for Science and Its Implication to Mathematics"
Slides of the talk at the NeurIPS AI for Science workshop, "AI for Science"
Slides of the talk at the Woudschoten Conference, "Bridging Traditional and Machine Learning-Based Algorithms for Solving Partial Differential Equations: The Random Feature Method"
Slides of the ICML keynote lecture, "Towards a Mathematical Theory of Machine Learning"
Slides of the ICM plenary lecture, "A Mathematical Perspective of Machine Learning"
Slides of the talk at the 100th anniversary of Professor Feng Kang, "Machine Learning and Computational Mathematics"
Slides of the talk at MSML2020, "Towards a Mathematical Understanding of Machine Learning: what we know and what we don't"
Slides of the talk at IPAM, "Machine Learning-Based Multiscale Modeling"
Slides of the talk at Monterey, "Deep learning based algorithms for high dimensional PDEs and control"
Slides of the talk at ICIAM 2019, "Machine Learning: Mathematical Theory and Scientific Applications"

Perspectives

Weinan E, "The dawning of a new era in applied mathematics" , Notice of the American Mathematical Society, April, 2021.

Recent review articles

Weinan E, "Machine learning and computational mathematics" , 2020.
Weinan E, Chao Ma, Stephan Wojtowytsch, and Lei Wu, "Towards a Mathematical Understanding of Neural Network-Based Machine Learning: what we know and what we don’t" , 2020.
Weinan E, Jiequn Han and Arnulf Jentzen, "Algorithms for Solving High Dimensional PDEs: From Nonlinear Monte Carlo to Machine Learning" , 2020.
Weinan E, Jiequn Han and Linfeng Zhang, "Integrating Machine Learning with Physics-Based Modeling" , 2020.

Current Research Interests and Highlights:

(1) Machine learning from a mathematical perspective


Continuous formulation of machine learning (papers 10, 17, 34)
In particular, paper 10 proposed an ODE-based machine learning model, now more popularly known as the "Neural ODE".
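To make the continuous viewpoint concrete, here is a minimal sketch (illustrative only, not the formulation or code of papers 10, 17, 34): a residual network can be read as the forward-Euler discretization of the flow dz/dt = f(z, theta(t)), z(0) = x. All names and sizes below are placeholders.

    import numpy as np

    # Minimal sketch: a residual network read as the forward-Euler discretization
    # of the flow  dz/dt = f(z, theta(t)),  z(0) = x.  Purely illustrative.

    def f(z, W, b):
        # A simple parametrized vector field (one "layer" of the flow).
        return np.tanh(W @ z + b)

    def ode_forward(x, params, T=1.0):
        """Forward Euler; each step plays the role of a residual block."""
        z, dt = x.copy(), T / len(params)
        for W, b in params:                  # theta(t) sampled on the time grid
            z = z + dt * f(z, W, b)          # z_{k+1} = z_k + dt * f(z_k, theta_k)
        return z

    rng = np.random.default_rng(0)
    d = 4
    params = [(rng.standard_normal((d, d)) / np.sqrt(d), np.zeros(d)) for _ in range(10)]
    print(ode_forward(rng.standard_normal(d), params))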

Monte Carlo-like estimates of the generalization error for shallow and deep neural network models (papers 28, 29, 30)

Analysis of the stochastic gradient descent algorithm using SDEs (papers 2, 6, 7)
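Schematically, and with details that differ across papers 2, 6 and 7, this line of analysis approximates SGD with learning rate \eta by a stochastic modified equation of the form

    d\theta_t = -\nabla f(\theta_t)\,dt + \sqrt{\eta}\;\Sigma(\theta_t)^{1/2}\,dW_t,

where f is the empirical risk and \Sigma is the covariance of the minibatch gradient noise; this is only the leading-order form, and the papers also treat higher-order corrections and adaptive variants.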

Maximum principle-based training algorithms for deep neural networks (paper 19)

(2) Machine learning for problems in scientific computing


The first machine learning-based algorithms for solving high dimensional control problems (paper 4).

The first machine learning-based algorithms for solving high dimensional nonlinear PDEs (papers 8, 9, 15)
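As a hedged illustration of the variational flavor of these methods (loosely following the Deep Ritz method of paper 9, but with placeholder network size, penalty weight and sample counts), here is a minimal PyTorch sketch for -Δu = f on the unit square with a boundary penalty:

    import torch

    # Minimal sketch of the Deep Ritz idea (paper 9) for  -Δu = f  on (0,1)^2 with
    # u = 0 on the boundary, enforced by a penalty.  Width, penalty weight beta and
    # sample sizes are illustrative placeholders, not the settings of the paper.

    torch.manual_seed(0)
    net = torch.nn.Sequential(
        torch.nn.Linear(2, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    f = lambda x: torch.ones(x.shape[0], 1)            # source term f = 1
    beta = 500.0                                       # boundary penalty weight
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    def boundary_points(n):
        # Uniform samples on the four edges of the unit square.
        t = torch.rand(n, 1)
        return torch.cat([torch.cat([t, torch.zeros_like(t)], 1),
                          torch.cat([t, torch.ones_like(t)], 1),
                          torch.cat([torch.zeros_like(t), t], 1),
                          torch.cat([torch.ones_like(t), t], 1)], 0)

    for step in range(2000):
        x = torch.rand(256, 2, requires_grad=True)     # interior Monte Carlo samples
        u = net(x)
        grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        # Ritz energy  ∫ ( 1/2 |∇u|^2 - f u ) dx, estimated by Monte Carlo
        energy = (0.5 * (grad_u ** 2).sum(1, keepdim=True) - f(x) * u).mean()
        loss = energy + beta * net(boundary_points(64)).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()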

(3) Integrating machine learning with physics-based modeling


End-to-end neural network models for inter-atomic potentials (the Deep Potential) and molecular dynamics (Deep Potential based molecular dynamics or DeePMD) (papers 14, 16, 22).

Concurrent learning algorithm for automatically generating the data and the machine learning-based model for inter-atomic potentials (DP-GEN) (paper 24).

Machine learning-based moment closure hydrodynamic model for kinetic equations (paper 32).
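As a schematic sketch of the structure of such potential energy models (a sum of per-atom energies predicted from local-environment descriptors, so that forces come from automatic differentiation), here is a toy example; the descriptor below is a crude rotation- and permutation-invariant placeholder, not the symmetry-preserving embedding actually used in Deep Potential/DeePMD:

    import torch

    # Schematic Deep Potential-style energy model: total energy = sum of atomic
    # contributions, each predicted by a network from a descriptor of the local
    # environment.  The descriptor here (sorted inverse distances to neighbors)
    # is a crude invariant placeholder, NOT the DeePMD embedding.

    torch.manual_seed(0)
    n_neighbors = 8                                    # placeholder neighbor count
    atomic_net = torch.nn.Sequential(
        torch.nn.Linear(n_neighbors, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )

    def descriptors(positions):
        # positions: (n_atoms, 3) -> sorted 1/r to the nearest neighbors of each atom.
        diff = positions[:, None, :] - positions[None, :, :]
        r = diff.norm(dim=-1) + 1e9 * torch.eye(positions.shape[0])  # mask self-distances
        inv_r, _ = torch.sort(1.0 / r, dim=1, descending=True)
        return inv_r[:, :n_neighbors]

    def total_energy(positions):
        # E = sum_i E_i(D_i): invariant under permuting, translating or rotating atoms.
        return atomic_net(descriptors(positions)).sum()

    positions = torch.rand(20, 3, requires_grad=True)
    energy = total_energy(positions)
    forces = -torch.autograd.grad(energy, positions)[0]  # forces from autograd
    print(float(energy), forces.shape)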

Recent Publications List:

77. Hongkang Yang, Zehao Lin, Wenjin Wang, Hao Wu, Zhiyu Li, Bo Tang, Wenqiang Wei, Jinbo Wang, Zeyun Tang, Shichao Song, Chenyang Xi, Yu Yu, Kai Chen, Feiyu Xiong, Linpeng Tang, Weinan E, "Memory^3 : Language Modeling with Explicit Memory" , 2024.
76. Mingze Wang and Weinan E, "Understanding the expressive power and mechanisms of transformer for sequence modeling," arxiv.org/pdf/2402.00522 , 2024.
75. Duo Zhang, Xinzijian Liu, Xiangyu Zhang, Chengqian Zhang, Chun Cai, Hangrui Bi, Yiming Du, Xuejian Qin, Jiameng Huang, Bowen Li, Yifan Shan, Jinzhe Zeng, Yuzhi Zhang, Siyuan Liu, Yifan Li, Junhan Chang, Xinyan Wang, Shuo Zhou, Jianchuan Liu, Xiaoshan Luo, Zhenyu Wang, Wanrun Jiang, Jing Wu, Yudi Yang, Jiyuan Yang, Manyi Yang, Fu-Qiang Gong, Linshuang Zhang, Mengchao Shi, Fu-Zhi Dai, Darrin M. York, Shi Liu, Tong Zhu, Zhicheng Zhong, Jian Lv, Jun Cheng, Weile Jia, Mohan Chen, Guolin Ke, Weinan E, Linfeng Zhang, Han Wang, "DPA-2: Towards a universal large atomic model for molecular and material simulation" , 2023.
74. Qiangqiang Gu, Zhanghao Zhouyin, Shishir Kumar Pandey, Peng Zhang, Linfeng Zhang, Weinan E, "DeePTB: A deep learning-based tight-binding approach with ab initio accuracy" , 2023.
73. Yixiao Chen, Linfeng Zhang, Weinan E and Roberto Car, "Hybrid Auxiliary Field Quantum Monte Carlo for Molecular Systems", arxiv.org/pdf/2211.10824 , 2022.
72. Pinchen Xie, Roberto Car, Weinan E, "Ab Initio Generalized Langevin Equation", arxiv.org/pdf/2211.06558 , 2022.
71. Xuanxi Zhang, Jihao Long, Wei Hu, Weinan E, Jiequn Han, "Initial Value Problem Enhanced Sampling for Closed-Loop Optimal Control Design with Deep Neural Networks" , 2022.
70. Jingrun Chen, Xurong Chi, Weinan E and Zhouwang Yang, "Bridging Traditional and Machine Learning-based Algorithms for Solving PDEs: The Random Feature Method" , 2022.
69. Pinchen Xie, Yixiao Chen, Weinan E and Roberto Car, "Ab initio multi-scale modeling of ferroelectrics: The case of PbTiO3" , 2022.
68. Wei Hu, Jihao Long, Yaohua Zang, Weinan E, Jiequn Han, "Solving optimal control of rigid-body dynamics with collisions using the hybrid minimum principle" , 2022.
67. Tongqi Wen, Linfeng Zhang, Han Wang, Weinan E and David J Srolovitz, "Deep potentials for materials science" , Mater. Futures 1 (2022) 022601.
66. Weinan E, Jiequn Han, and Jihao Long, "Empowering Optimal Control with Machine Learning: A Perspective from Model Predictive Control" , 2022.
65. Yaohua Zang, Jihao Long, Xuanxi Zhang, Wei Hu, Weinan E, Jiequn Han, "A Machine Learning Enhanced Algorithm for the Optimal Landing Problem" , 2022.
64. Lidong Fang, Pei Ge, Lei Zhang, Huan Lei and Weinan E, "DeePN2: A deep learning-based non-Newtonian hydrodynamic model" , 2021.
63. Jiequn Han, Yucheng Yang, and Weinan E, "DeepHAM: A Global Solution Method for Heterogeneous Agent Models with Aggregate Shocks" , 2021.
62. Lulu Zhang, Tao Luo, Yaoyu Zhang, Weinan E, Zhi-Qin John Xu, Zheng Ma, "MOD-Net: A Machine Learning Approach via Model-Operator-Data Network for Solving PDEs" , 2021.
61. Linfeng Zhang, Han Wang, Maria Carolina Muniz, Athanassios Z. Panagiotopoulos, Roberto Car, Weinan E, "A deep potential model with long-range electrostatic interactions" , 2021.
60. Hongkang Yang and Weinan E, "Generalization Error of GAN from the Discriminator's Perspective" , 2021.
59. Linfeng Zhang, Han Wang, Roberto Car and Weinan E, "Phase Diagram of a Deep Potential Water Model" , Phys. Rev. Lett. 126, 236001 – Published 9 June, 2021.
Comment on the Physics Synopsis "An Efficient Way to Predict Water’s Phases".
58. Jihao Long, Jiequn Han and Weinan E, "An L2 Analysis of Reinforcement Learning in High Dimensions with Kernel and Neural Network Approximation" , 2021.
57. Dongdong Wang, Linfeng Zhang, Han Wang, Weinan E, "Efficient sampling of high-dimensional free energy landscapes using adaptive reinforced dynamics" , 2021.
56. Qingcan Wang and Weinan E, "The Expectation-Maximization Algorithm for Continuous-time Hidden Markov Models" , 2021.
55. Weinan E, Stephan Wojtowytsch, "On the emergence of tetrahedral symmetry in the final and penultimate layers of neural network classifiers" , 2020.
54. Weinan E, Stephan Wojtowytsch, "Some observations on partial differential equations in Barron and multi-layer spaces" , 2020.
53. Hongkang Yang and Weinan E, "Generalization and memorization: The bias potential model" , 2020.
52. Weinan E, Stephan Wojtowytsch, "A priori estimates for classification problems using neural networks" , 2020.
51. Yucheng Yang, Yue Pang, Guanhua Huang, and Weinan E, "The Knowledge Graph for Macroeconomic Analysis with Alternative Big Data" , 2020.
50. Yucheng Yang, Zhong Zheng, and Weinan E, "Interpretable Neural Networks for Panel Data Analysis in Economics" , 2020.
49. Chao Ma, Lei Wu and Weinan E, "A Qualitative Study of the Dynamic Behavior of Adaptive Gradient Algorithms" , 2020.
48. Zhong Li, Jiequn Han, Weinan E, and Qianxiao Li, "On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis" , 2020.
47. Haijun Yu, Xinyuan Tian, Weinan E, and Qianxiao Li, "OnsagerNet: Learning Stable and Interpretable Dynamics using a Generalized Onsager Principle" , 2020.
46. Chao Ma, Lei Wu and Weinan E, "The Slow Deterioration of the Generalization Error of the Random Feature Model" , 2020.
45. Yixiao Chen, Linfeng Zhang, Han Wang and Weinan E, "DeePKS: a comprehensive data-driven approach towards chemically accurate density functional theory" , 2020.
44. Weinan E, Stephan Wojtowytsch, "On the Banach spaces associated with multi-layer ReLU networks: Function representation, approximation theory and gradient descent dynamics" , 2020.
43. Pinchen Xie, Weinan E, "Coarse-grained spectral projection (CGSP): A scalable and parallelizable deep learning-based approach to quantum unitary dynamics" , 2020.
42. Chao Ma, Lei Wu, Weinan E, "The Quenching-Activation Behavior of the Gradient Descent Dynamics for Two-layer Neural Network Models" , 2020.
41. Weinan E, Stephan Wojtowytsch, "Representation formulas and pointwise properties for Barron functions" , 2020.
40. Stephan Wojtowytsch, Weinan E, "Can Shallow Neural Networks Beat the Curse of Dimensionality? A mean field training perspective" , 2020.
39. Weinan E, Stephan Wojtowytsch, "Kolmogorov Width Decay and Poor Approximators in Machine Learning: Shallow Neural Networks, Random Feature Models and Neural Tangent Kernels" , 2020.
38. Yixiao Chen, Linfeng Zhang, Han Wang, Weinan E, "Ground state energy functional with Hartree-Fock efficiency and chemical accuracy" , 2020.
37. Weile Jia, Han Wang, Mohan Chen, Denghui Lu, Jiduan Liu, Lin Lin, Roberto Car, Weinan E, Linfeng Zhang, "Pushing the limit of molecular dynamics with ab initio accuracy to 100 million atoms with machine learning" , 2020.
36. Huan Lei, Lei Wu and Weinan E, "Machine learning based non-Newtonian fluid model with molecular fidelity" , 2020.
35. Weinan E, Chao Ma and Lei Wu, "On the Generalization Properties of Minimum-norm Solutions for Over-parameterized Neural Network Models" , 2019.
34. Weinan E, Chao Ma and Lei Wu, "Machine Learning from a Continuous Viewpoint" , 2019.
33. Weinan E and Yajun Zhou, "A mathematical model for linguistic universals" , 2019.
32. Jiequn Han, Chao Ma, Zheng Ma, Weinan E, "Uniformly Accurate Machine Learning Based Hydrodynamic Models for Kinetic Equations" , 2019.
31. L.F. Zhang, M.H. Chen, X.F. Wu, H. Wang, W. E and R. Car, "Deep neural networks for Wannier function centers" , 2019.
30. W. E, C. Ma and L. Wu "Barron spaces and compositional function spaces for neural network models" , 2019.
29. W. E, C. Ma and Q.C. Wang "A priori estimates of the population risk for residual networks'' , 2019.
28. W. E, C. Ma and L. Wu "A priori estimates for two layer neural networks" , 2019.
27. W. E, C. Ma, Q.C. Wang and L. Wu "Analysis of the Gradient Descent Algorithm for a Deep Neural Network Model with Skip-connections" , 2019.
26. W. E, C. Ma and L. Wu "A Comparative Analysis of the Optimization and Generalization Property of Two-layer Neural Network and Random Feature Models Under Gradient Descent Dynamics" , 2019.
25.5. Hsin-Yu Ko, Linfeng Zhang, Biswajit Santra, Han Wang, Weinan E, Robert A. DiStasio Jr., Roberto Car, "Isotope Effects in Liquid Water via Deep Potential Molecular Dynamics", Molecular Physics, to appear, 2019.
25. L. Zhang, D.-Y. Lin, H. Wang, R. Car and W. E, "Active learning of uniformly accurate interatomic potentials for materials simulation'' Phys. Rev. Materials 3, 023804 – Published 25 February 2019.
24. Jiequn Han, Linfeng Zhang, Weinan E, "Solving Many-Electron Schrodinger Equation Using Deep Neural Networks'' Journal of Computational Physics 399, 108929 , 2019.
23. L. Wu, C. Ma and W. E, "How SGD Selects the Global Minima in Over-parameterized Learning: A Stability Perspective" NIPS, 2018.
22. L. Zhang, J. Han, H. Wang, W. Saidi, R. Car and W. E, "End-to-end Symmetry Preserving Inter-atomic Potential Energy Model for Finite and Extended Systems'' NIPS, 2018.
21. L. Zhang, J. Han, H. Wang, R. Car and W. E, "DeePCG: Constructing coarse-grained models via deep neural networks'' J. Chem. Phys. 149, 034101 (2018); https://doi.org/10.1063/1.5027645.
20. L. Zhang, W. E and L. Wang "Monge-Ampere flow for generative modeling'' arxiv.org/abs/1809.10188, 2018.
19. Q. Li, L. Chen, C. Tai and W. E, "Maximum Principle Based Algorithms for Deep Learning'' JMLR, vol.18, no.165, pp.1-29, 2018, https://arxiv.org/pdf/1710.09513v1.pdf.
18. C. Ma, J.C. Wang and W. E, "Model reduction with memory and machine learning of dynamical systems'' Comm. Comput. Phys., vol.25, no.4, pp. 947-962, 2019.
17. W. E, J. Han and Q. Li, "A Mean-Field Optimal Control Formulation of Deep Learning'' Research in Mathematical Sciences, vol. 6, no.10, 2018.
16. L. F. Zhang, J. Han, R. Car, H. Wang and W. E, "Deep Potential Molecular Dynamics: A scalable model with the accuracy of quantum mechanics'' Phys. Rev. Lett., vol. 120, no. 14, pp.143001, 2018.
15. J. Han, A. Jentzen and W. E, "Solving high-dimensional partial differential equations using deep learning'' Proc. Natl. Acad. Sci., vol. 115, no. 34, pp. 8505-8510, 2018.
14. J. Han, L. F. Zhang, R. Car and W. E, "Deep Potential: A General Representation of a Many-Body Potential Energy Surface'' Comm. Comput. Phys., vol. 23, no. 3, pp. 629-639, 2018
13. W. E and Q.C. Wang "Exponential convergence of the deep neural network approximation for analytic functions'' Science China Mathematics, vol. 61, Issue 10, pp 1733–1740, 2018.
12. L. F. Zhang, H. Wang and W. E, "Reinforced dynamics for the enhanced sampling in large atomic and molecular systems. I. Basic Methodology'' J. Chem. Phys., vol. 148, pp.124113, 2018.
11. H. Wang, L. F. Zhang, J. Han and W. E, "DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics'' Comput. Phys. Comm., vol. 228, pp. 178-184, 2018.
10. W. E, "A Proposal on Machine Learning via Dynamical Systems'' Comm. Math. Stat., vol.5, no.1., pp.1-11, 2017.
9. W. E and B. Yu, "The Deep Ritz method: A deep learning-based numerical algorithm for solving variational problems'' Comm. Math. Stats., vol. 6, no. 1, pp. 1-12, 2018.
8. W. E, J. Han and A. Jentzen, "Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations'' Comm. Math. Stats., vol. 5, no. 4, pp. 349-380, 2017.
7. Q. Li, C. Tai and W. E, "Stochastic modified equations and the dynamics of stochastic gradient algorithms'' JMLR, vol. 20, no. 40, pp. 1-47, 2019.
6. Q. Li, C. Tai and W. E, "Stochastic modified equations and adaptive stochastic gradient algorithms'' ICML, 2017.
5. W. E and Y. Wang, "Optimal convergence rates of the universal approximation error'' Research in Mathematical Sciences, vol. 4, no. 2, 2017.
4. J. Han and W. E, "Deep learning approximation for stochastic control problems'' arxiv.org/abs/1611.07422, NIPS Workshop on Deep Reinforcement Learning, 2016.
3. C. Tai and W. E, "Multi-scale adaptive representation of signals, I'' JMLR, vol.17, no. 140, pp. 1-38, 2016.
2. Q. Li, C. Tai and W. E, "Dynamics of stochastic gradient algorithms'', 2016.
1. C. Tai, T. Xiao, X. Wang and W. E, "Convolutional neural networks with low-rank regularization'' ICLR, 2016.

Multi-level Picard method:

Weinan E, Martin Hutzenthaler, Arnulf Jentzen and Thomas Kruse, "Multilevel Picard iterations for solving smooth semilinear parabolic heat equations'' , 2016.

Research:

My current work focuses on the mathematical theory of machine learning and integrating machine learning with multi-scale modeling.

Research summary: My work draws inspiration from various disciplines of science and has made an impact in fluid dynamics, chemistry, materials science, and soft condensed matter physics. I have contributed to the resolution of some long-standing scientific problems such as the Burgers turbulence problem (which was the original motivation for Burgers to propose the well-known Burgers equation), the Cauchy-Born rule for crystalline solids (which indeed dates back to Cauchy, and provides a microscopic foundation for elasticity theory), and the moving contact line problem (which is still largely open). A common theme is to try to bring clarity to scientific issues through mathematics.

A second theme is multi-scale and/or multi-physics problems. I have worked on building the mathematical framework and finding effective numerical algorithms for modeling rare events, a very difficult class of problems involving multiple time scales (string method, minimum action methods, transition path theory, etc.). I have also worked on multiscale analysis and algorithms (e.g. the heterogeneous multi-scale method) for stochastic simulation algorithms, homogenization problems, problems with multiple time scales, complex fluids, etc. My book (Principles of Multi-Scale Modeling, Cambridge Univ Press) provides a broad introduction to this subject.

A third theme is to develop and analyze algorithms in general. In computational fluid mechanics, I was involved in analyzing and developing vorticity-based methods, the projection method and the gauge method. In density functional theory (DFT), my collaborators and I have developed the PEXSI algorithm, which is so far the most efficient algorithm for DFT.

Microscopic Mechanisms of Equilibrium Melting of a Solid
The microscopic mechanism of the melting process of simple solids has been an outstanding issue for a long time. Lindemann and Max Born each proposed his own version of a melting criterion. Classical nucleation theory also gives a prediction about the melting pathway. So far, direct and detailed theoretical or experimental investigation of this process has not been possible. However, with the advent of advanced simulation algorithms (free energy sampling methods, string methods for computing transition pathways, etc.), it is now possible to study these problems computationally.

  • PEXSI Webpage
  • Electronic structure analysis, using for example density functional theory, is at the core of materials science and chemistry. It is also among the most challenging problems in computational science. We developed the PEXSI algorithm (pole expansion + selected inversion), which, for the first time, brought the computational complexity of density functional theory from cubic scaling to quadratic scaling for general three-dimensional systems. This algorithm has been implemented in SIESTA.
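    Schematically, PEXSI approximates the density matrix by a pole expansion of the Fermi function,

        \rho \;\approx\; \operatorname{Im}\Bigl(\sum_{l=1}^{P} \omega_l\,\bigl(H - (z_l + \mu)\,S\bigr)^{-1}\Bigr),

    where H is the Kohn-Sham Hamiltonian, S the overlap matrix, \mu the chemical potential, and (z_l, \omega_l) a modest number P of complex shifts and weights; selected inversion then computes only the entries of each inverse that are needed (those in the sparsity pattern of H and S), which is what avoids the cubic cost of diagonalization. (This is a schematic statement of the idea, not a complete specification of the algorithm.)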

    Here are some examples of the work I have been involved with:

    Burgers turbulence
    We have analyzed the statistical properties of solutions to the Burgers equation with random initial data and random forcing. This series of work provided answers to some of the questions that Burgers proposed back in the early 20th century, and resolved some of the controversies concerning the asymptotics of the probability distribution functions for the randomly forced Burgers equation.
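    For reference, the equation studied here is the randomly forced Burgers equation

        u_t + u\,u_x = \nu\,u_{xx} + f(x,t),

    with \nu \ge 0 the viscosity and f a random forcing; the questions concern the statistics of the resulting solutions, for example the tails of the probability density of velocity gradients and the statistics of shocks.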
    From quantum and molecular mechanics to macroscopic theories of solids (Cauchy-Born rule and related topics)

    The objective here is to understand solids at the level of quantum mechanics or molecular mechanics. As a by-product, we give a rigorous derivation of the macroscopic continuum models of solids. A key ingredient in this analysis is to understand the various levels of stability conditions (quantum, classical at the atomic level, and classical at the macroscopic level).

    Stochastic PDEs
    We have developed a new way of studying stochastic PDEs, by viewing the stationary solutions as functionals of the stochastic forcing. This has led to a very elegant description of the stationary solutions of the stochastic Burgers equation and the stochastic passive scalar equation as well as the ergodicity of the stochastic Navier-Stokes equation.
    Modeling rare events
    My work on modeling rare events (joint with Weiqing Ren and Eric Vanden-Eijnden) has centered around developing the string method, which is now quite popular in computational chemistry and is beginning to gain popularity in materials science, as well as the transition path theory, which is a general theoretical framework for analyzing transition events in complex systems.
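    A minimal numerical sketch of the (simplified) string method on a toy two-dimensional double-well potential; the potential, step size, and number of images are illustrative choices, not taken from the papers:

        import numpy as np

        # Simplified string method on the toy potential
        #   V(x, y) = (x^2 - 1)^2 + (y - 0.5*sin(pi*x))^2,
        # whose two minima are at (-1, 0) and (1, 0).  Each iteration relaxes the
        # images by a gradient-descent step and then redistributes them to equal
        # arc length, so the string converges to the minimum energy path.

        def grad_V(p):
            x, y = p[:, 0], p[:, 1]
            g = y - 0.5 * np.sin(np.pi * x)
            return np.stack([4 * x * (x**2 - 1) - np.pi * np.cos(np.pi * x) * g,
                             2 * g], axis=1)

        def reparametrize(string):
            # Redistribute the images uniformly with respect to arc length.
            seg = np.linalg.norm(np.diff(string, axis=0), axis=1)
            s = np.concatenate([[0.0], np.cumsum(seg)])
            s_new = np.linspace(0.0, s[-1], len(string))
            return np.stack([np.interp(s_new, s, string[:, k]) for k in range(2)], axis=1)

        n_images, dt = 20, 1e-2
        string = np.stack([np.linspace(-1.0, 1.0, n_images), np.zeros(n_images)], axis=1)

        for _ in range(2000):
            string = string - dt * grad_V(string)            # relax all images downhill
            string[0], string[-1] = (-1.0, 0.0), (1.0, 0.0)  # keep endpoints at the minima
            string = reparametrize(string)                   # equal arc-length spacing

        print(string)  # approximates the minimum energy path between the two wells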
    Multiscale methods
    We have developed the framework of the heterogeneous multiscale method (HMM). HMM has led to very promising applications to stochastic simulation algorithms, ODEs with multiple time scales, and many other areas. It also provides a very nice framework for analyzing multiscale methods.
    Soft condensed matter physics
    We have developed the first general nonlinear model for smectic A liquid crystals and used it to study the interesting filamentary structures arising in isotropic-smectic phase transition. We have also developed models for the dynamics of membranes and polymer phase separations that are consistent with thermodynamics. In addition, we have developed models for general inhomogeneous liquid crystal polymer systems using the one-particle probability distribution function as the order parameter.
    Computational fluid dynamics
    Jian-Guo Liu and I addressed long-standing controversies concerning vorticity boundary conditions and the numerical boundary layers of the projection method.
    A posteriori error estimates
    In my master's thesis, completed in 1985 under the supervision of Prof. Hongci Huang, I established some of the earliest results on a posteriori error estimates for finite element methods. I introduced the Clément interpolation technique, and proved upper and lower bounds for local error estimators.
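    Stated schematically, in their now-standard residual-based form such estimators \eta are computable from the discrete solution u_h and satisfy two-sided bounds of the type

        c\,\eta \;\le\; \|u - u_h\|_{H^1(\Omega)} \;\le\; C\,\eta,
        \qquad
        \eta^2 \;=\; \sum_{K} h_K^2\,\|R_K\|_{L^2(K)}^2 \;+\; \sum_{e} h_e\,\|[\partial_n u_h]\|_{L^2(e)}^2,

    where R_K is the element residual and [\partial_n u_h] the jump of the normal derivative across interior edges. (This is a generic statement of the type of result, not the precise form proved in the thesis.)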
    Weak KAM theory
    Under the influence of Jürgen Moser, I developed the weak KAM theory independently of Fathi. This was one of the first applications of PDE methods to the study of dynamical systems. The most interesting aspect is to study the implications of weak solutions of the Hamilton-Jacobi equation for Hamiltonian systems. This gives an alternative (and much simplified) viewpoint on the Aubry-Mather theory.
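    In one standard formulation, weak KAM theory studies weak (viscosity) solutions u of the stationary Hamilton-Jacobi equation on the torus,

        H(x, \nabla u(x)) \;=\; \bar{H},

    where \bar{H} is the effective Hamiltonian; the structure of these weak solutions, in particular where they are smooth and where their characteristics accumulate, encodes the Aubry-Mather sets of the underlying Hamiltonian system.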
    Numerical algorithms for Kohn-Sham density functional theory
    W. E and Jianchun Wang, "A thermodynamic study of the two-dimensional pressure-driven channel flow", Discrete and Continuous Dynamical Systems, vol. 36, no. 8, pp. 4349-4366, 2016.
    Q. Li and W. E, "The free action for non-equilibrium systems'', J. Stat. Phys., vol. 161, no. 2, pp. 300-325, 2015.
    Other topics I have made contributions to include: Onsager's conjecture on energy conservation for weak solutions of the 3D Euler equations, homogenization and two-scale convergence, singularity formation in solutions of Prandtl's equation, Ginzburg-Landau vortices, micromagnetics and the Landau-Lifshitz equation, stochastic resonance, etc.

  • String Method Webpage
  • HMM webpage

    Analysis and algorithms for multiscale problems

    Mathematical theory of solids at the atomic and macroscopic scales

    The main objective is to develop a rigorous mathematical theory for solids. This requires understanding models of solids at the electronic, atomistic and continuum levels, as well as the relations between these models. Problems of interest include: (1) the crystallization problem: why do solids take the form of a crystal lattice at zero temperature? (2) the Cauchy-Born rule, which serves as a connection between atomistic and continuum models of solids.
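    Schematically, the Cauchy-Born rule postulates that the continuum stored-energy density at a deformation gradient A is the energy per unit volume of the crystal deformed uniformly by A,

        W_{CB}(A) \;=\; \lim_{R\to\infty} \frac{1}{|B_R|}\, E_{\mathrm{atom}}\bigl(\{A\,x_i : x_i \in L \cap B_R\}\bigr),

    and the analysis identifies the stability conditions under which this rule is valid. (The formula is a schematic statement; the precise setting depends on the atomistic or electronic model used.)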


    Electronic structure, density functional theory

    The main objective is to understand the mathematical foundation of electronic structure analysis, and to develop and analyze efficient algorithms.


    General issues in multiscale modeling

    Problems with multiple time scales

    Stochastic chemical kinetic systems

    Multiscale modeling of solids

    Multiscale modeling of complex fluids

    Multiscale methods for multiscale PDEs

    The moving contact line problem and micro-fluidics

    Homogenization theory

    Analysis and modeling of stochastic problems

    Analysis of stochastic partial differential equations

    Rare events: String method, minimum action method and transition path theory

    Stochastic chemical kinetic systems

    ``Burgers turbulence'' and passive scalar turbulence

    General issues in stochastic modeling

    Other topics

    Incompressible flow: Projection methods, vorticity-based methods and gauge methods

    A posteriori error estimates

    Work done in my master's thesis, under the guidance of Professor Hongci Huang at the Chinese Academy of Sciences. The main focus was on finite element methods for problems with corner singularities. Issues discussed include: a posteriori error estimates, direct and inverse error estimates on locally refined domains, convergence of multigrid methods on such domains, etc.


    Miscellaneous topics
    Euler equations, boundary layer problems, Aubry-Mather theory, micromagnetics and the Landau-Lifshitz equation, vortex dynamics in Ginzburg-Landau theory
    Micromagnetics and Landau-Lifshitz equation

    Ginzburg-Landau vortices

    Selected Review Papers