Calculus of Variations and Geometric Measure Theory

H. Liao - A. R. Mészáros - C. Mou - C. Zhou

Convergence analysis of controlled particle systems arising in deep learning: from finite to infinite sample size

created by Mészáros on 08 Apr 2024

Submitted Paper

Inserted: 8 Apr 2024
Last Updated: 8 Apr 2024

Year: 2024

Abstract:

This paper deals with a class of neural SDEs and studies the limiting behavior of the associated sampled optimal control problems as the sample size grows to infinity. The neural SDEs with $N$ samples can be linked to $N$-particle systems with centralized control. We analyze the Hamilton-Jacobi-Bellman equation corresponding to the $N$-particle system and establish regularity results that are uniform in $N$. These uniform regularity estimates are obtained via the stochastic maximum principle and the analysis of a backward stochastic Riccati equation. Using these uniform regularity results, we show the convergence of the minima of the objective functionals and of the optimal parameters of the neural SDEs as the sample size $N$ tends to infinity. The limiting objects can be identified with suitable functions defined on the Wasserstein space of Borel probability measures. Furthermore, quantitative algebraic convergence rates are obtained.
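
To fix ideas, the following is a minimal illustrative sketch of a sampled control problem of this type and of its mean-field limit; the drift $b$, running cost $L$, terminal loss $\ell$, noise level $\sigma$, and initial distribution $\mu_0$ are generic placeholders and not the specific model considered in the paper. Given data samples $x^{1}_0, \dots, x^{N}_0$, the $N$-particle system with a centralized control $\theta$ reads
$$
dX^{i}_t = b\big(X^{i}_t, \theta_t\big)\, dt + \sigma\, dW^{i}_t, \qquad X^{i}_0 = x^{i}_0, \quad i = 1, \dots, N,
$$
with sampled objective
$$
J^{N}(\theta) = \mathbb{E}\Big[ \frac{1}{N}\sum_{i=1}^{N} \ell\big(X^{i}_T\big) + \int_0^T L(\theta_t)\, dt \Big],
$$
where the same control (network parameter) $\theta$ is shared by all $N$ particles. As $N \to \infty$, one expects $\inf_\theta J^{N}(\theta) \to \inf_\theta J(\mu_0, \theta)$, where
$$
J(\mu_0, \theta) = \mathbb{E}\big[\ell(X_T)\big] + \int_0^T L(\theta_t)\, dt, \qquad dX_t = b(X_t, \theta_t)\, dt + \sigma\, dW_t, \quad X_0 \sim \mu_0,
$$
is a functional whose value depends on the initial measure $\mu_0$, i.e. a function defined on the Wasserstein space of Borel probability measures.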
