Calculus of Variations and Geometric Measure Theory

A. Figalli - X. Fernández-Real

The Continuous Formulation of Shallow Neural Networks as Wasserstein-Type Gradient Flows

created by figalli on 07 Oct 2022
modified on 16 Sep 2024


Lecture Notes


Published by Springer, Cham
Year: 2022

Abstract:

It has recently been observed that the training of a single-hidden-layer artificial neural network can be reinterpreted as a Wasserstein gradient flow, for the distribution of the weights, of the error functional. In the limit as the number of parameters tends to infinity, this gives rise to a family of parabolic equations. This survey discusses this relation, focusing on the associated theoretical aspects of interest to the mathematical community, and provides a list of interesting open problems.
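The reinterpretation described in the abstract can be illustrated numerically: gradient descent on the parameters of a single-hidden-layer network is, up to a time rescaling, a particle discretization of the Wasserstein gradient flow of the error functional over the empirical distribution of the neurons. The following is a minimal sketch of that idea, not code from the lecture notes; the target function, network width, learning rate, and the mean-field scaling `1/m` are all illustrative choices.

```python
import numpy as np

# Sketch (illustrative, not from the lecture notes): a one-hidden-layer
# network in mean-field scaling,
#     f(x) = (1/m) * sum_i a_i * relu(w_i * x + b_i),
# trained by gradient descent on the particles theta_i = (a_i, w_i, b_i).
# Each neuron is a "particle"; gradient descent on the particles is a
# discretization of a Wasserstein gradient flow of the error functional
# over the empirical measure (1/m) * sum_i delta_{theta_i}.

rng = np.random.default_rng(0)
m = 200                          # number of neurons (particles)
X = rng.uniform(-1.0, 1.0, 100)  # 1-D inputs
Y = np.sin(np.pi * X)            # target function (arbitrary example)

a = rng.normal(size=m)
w = rng.normal(size=m)
b = rng.normal(size=m)

def forward(X):
    pre = np.outer(X, w) + b       # (n, m) pre-activations
    act = np.maximum(pre, 0.0)     # ReLU
    return act @ a / m, pre, act   # mean-field scaling 1/m

lr = 0.05       # step size on the mean-field timescale
losses = []
for _ in range(1000):
    pred, pre, act = forward(X)
    err = pred - Y
    losses.append(0.5 * np.mean(err ** 2))
    n = len(X)
    # Per-particle gradients of the error functional (the factor m from
    # the mean-field timescale cancels the 1/m in the forward pass).
    ga = (err @ act) / n
    grelu = (pre > 0) * (err[:, None] * a) / n
    gw = X @ grelu
    gb = grelu.sum(axis=0)
    a -= lr * ga
    w -= lr * gw
    b -= lr * gb

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Sending `m` to infinity in this scheme is what produces the family of parabolic equations mentioned in the abstract: the empirical measure of the particles converges to a solution of the limiting PDE.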
