Hello Everyone

Hello everyone! Welcome to my blog! I’ll be sharing my journey as a PhD student at the Academy of Mathematics and Systems Sciences (AMSS), Chinese Academy of Sciences (CAS).

This blog isn’t just about mathematics – I’ll also explore topics like economics, finance, and anything else that sparks my interest.

All the cover images on this blog were taken by me. Reproducing any part of this website without my permission is strictly prohibited.

If you have any questions or just want to reach out, feel free to email me at wangtuo1020@outlook.com.

Some of the posts on this blog were originally written quite a long time ago in other languages; I have used AI to translate them into English before sharing them here. Others were first handwritten on paper and then converted into Markdown with AI tools before being uploaded. If you notice any mistakes, please feel free to contact me.

Note: If the LaTeX in the text doesn’t display properly, just refresh the page.

The cover image of this article was taken at the Sydney Opera House in Sydney, Australia.

An Introduction to Mean-Field Langevin Dynamics

Optimization over the space of probability measures is not only widely applicable but also offers a useful perspective on certain complicated finite-dimensional nonconvex optimization problems. In particular, lifting such problems to the space of probability measures can yield better structural properties, such as convexity. Mean-field Langevin dynamics is a representative example of this idea. Its central motivation is that some highly nonconvex optimization problems arising in neural network training become better behaved when reformulated as the optimization of a functional on the space of probability measures. This viewpoint also makes it possible to build a theoretical foundation for understanding the convergence of SGD. In what follows, we briefly introduce this perspective, mainly based on the paper by Hu et al. The main analytical framework of this theory is illustrated in Figure 2.
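To make the idea concrete, here is a small numerical sketch of my own (not taken from the cited paper): plain Langevin dynamics on a nonconvex double-well potential, simulated with the Euler–Maruyama scheme. The injected noise lets particles escape the saddle point, and the empirical measure of the particles converges toward the Gibbs measure proportional to $e^{-\beta V}$, which is the simplest instance of the "optimize over measures via noisy dynamics" viewpoint.

```python
import numpy as np

# Toy illustration (my own, not from the cited paper): Langevin dynamics on
# the double-well potential V(x) = (x^2 - 1)^2, discretized as
#   x_{k+1} = x_k - h * V'(x_k) + sqrt(2 h / beta) * xi_k,   xi_k ~ N(0, 1).
# The stationary law of this dynamics is the Gibbs measure ~ exp(-beta * V).

rng = np.random.default_rng(0)

def grad_V(x):
    return 4.0 * x * (x**2 - 1.0)  # V'(x) for V(x) = (x^2 - 1)^2

h, beta, steps = 1e-3, 5.0, 10_000
x = rng.normal(0.0, 0.1, size=2_000)  # particles start near the saddle at 0
for _ in range(steps):
    x = x - h * grad_V(x) + np.sqrt(2.0 * h / beta) * rng.normal(size=x.size)

# The empirical measure concentrates around the two minima x = +1 and x = -1.
print(np.mean(np.abs(x)))
```

Gradient descent started exactly at the saddle would stay stuck there; it is the noise term that makes the empirical particle distribution settle into the wells.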

Optimization over the Space of Probability Measures
Introduction to Flow Matching

In generative modeling, we are given a collection of training samples $\{x_i\}_{i=1}^N$ and wish to generate new samples from the underlying target distribution $\pi$. There are already many established approaches to this problem, including likelihood-based methods, implicit generative models such as GANs, and score-based diffusion models. More recently, the flow matching framework has emerged as another powerful paradigm. In what follows, we introduce the basic ideas of flow matching and explain how it works.
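As a quick taste of the framework, here is a minimal sketch of the conditional flow matching objective with the straight-line probability path; the function names are my own, not from any particular library. Given noise $x_0 \sim N(0, I)$ and a data sample $x_1$, one sets $x_t = (1-t)x_0 + t x_1$ with conditional target velocity $u_t = x_1 - x_0$, and trains a vector field $v(x, t)$ by regression: $\mathbb{E}_{t, x_0, x_1}\|v(x_t, t) - (x_1 - x_0)\|^2$.

```python
import numpy as np

# Minimal conditional flow matching (CFM) sketch with the straight-line path
#   x_t = (1 - t) * x0 + t * x1,   target velocity u_t = x1 - x0.
# Names here are illustrative, not from a specific library.

rng = np.random.default_rng(0)

def cfm_loss(v, x0, x1, t):
    """Monte Carlo estimate of E || v(x_t, t) - (x1 - x0) ||^2."""
    xt = (1.0 - t)[:, None] * x0 + t[:, None] * x1
    target = x1 - x0
    return float(np.mean(np.sum((v(xt, t) - target) ** 2, axis=1)))

# Sanity check: if every data sample is the same point a, the exact
# conditional velocity field is v*(x, t) = (a - x) / (1 - t), and the
# CFM loss at v* vanishes.
d, n = 2, 4096
a = np.array([1.0, -2.0])
x0 = rng.normal(size=(n, d))
x1 = np.tile(a, (n, 1))
t = rng.uniform(0.0, 0.99, size=n)  # stay away from t = 1 to avoid 1/(1-t)

v_star = lambda x, t: (a - x) / (1.0 - t)[:, None]
loss = cfm_loss(v_star, x0, x1, t)
print(loss)  # essentially zero for this exact field
```

In practice $v$ is a neural network and the same regression loss is minimized by SGD over random draws of $t$, $x_0$, and $x_1$; sampling then amounts to integrating the ODE $\dot{x} = v(x, t)$ from noise at $t=0$ to data at $t=1$.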

Gradient Flows in Wasserstein Space
The General Theory in Metric Spaces
Introduction to the Metric Setting
Gradient Flows in the Euclidean Space
Monotone Transport Maps and Plans in 1D
Definition of Reproducing Kernel Hilbert Space