Calculus of Variations and Geometric Measure Theory

A. Figalli - X. Fernández-Real

The continuous formulation of shallow neural networks as Wasserstein-type gradient flows

created by figalli on 07 Oct 2022


Lecture Notes

Inserted: 7 Oct 2022
Last Updated: 7 Oct 2022

Journal: Preprint
Year: 2020


It has recently been observed that the training of a single-hidden-layer artificial neural network can be reinterpreted as a Wasserstein gradient flow, for the weights, of the error functional. In the limit as the number of parameters tends to infinity, this gives rise to a family of parabolic equations. This survey discusses this relation, focusing on the associated theoretical aspects of interest to the mathematical community, and provides a list of interesting open problems.
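To make the discrete picture behind this limit concrete, the following is a minimal, hypothetical sketch (not taken from the survey): a single-hidden-layer network whose N neurons are treated as particles, trained by plain gradient descent on the squared error. The empirical measure of these particles is the object whose N → ∞ evolution is described by a Wasserstein-type gradient flow. The tanh activation, the target function, and the mean-field 1/N scaling below are illustrative assumptions.

```python
import numpy as np

# Mean-field parameterization: f(x) = (1/N) * sum_i a_i * tanh(w_i * x + b_i).
# Each neuron's parameters (a_i, w_i, b_i) are one "particle"; training moves
# the particles, i.e. the empirical measure (1/N) * sum_i delta_{(a_i,w_i,b_i)}.

rng = np.random.default_rng(0)
N = 200                              # number of neurons ("particles")
theta = rng.normal(size=(N, 3))      # rows: (a_i, w_i, b_i)

# Illustrative regression task: fit sin(pi * x) on [-1, 1]
xs = np.linspace(-1.0, 1.0, 50)
ys = np.sin(np.pi * xs)

def predict(theta, x):
    a, w, b = theta[:, 0], theta[:, 1], theta[:, 2]
    return np.mean(a * np.tanh(np.outer(x, w) + b), axis=1)

def loss(theta):
    # Error functional: mean squared error over the data
    return 0.5 * np.mean((predict(theta, xs) - ys) ** 2)

def grad(theta):
    # Gradient of the loss w.r.t. each particle's (a_i, w_i, b_i)
    a = theta[:, 0]
    z = np.outer(xs, theta[:, 1]) + theta[:, 2]   # shape (n_data, N)
    s = np.tanh(z)
    r = predict(theta, xs) - ys                    # residuals, shape (n_data,)
    ds = (1.0 - s ** 2) * a                        # a_i * tanh'(z)
    da = (r[:, None] * s).mean(axis=0) / N
    dw = (r[:, None] * ds * xs[:, None]).mean(axis=0) / N
    db = (r[:, None] * ds).mean(axis=0) / N
    return np.stack([da, dw, db], axis=1)

init_loss = loss(theta)
lr = 0.5
for _ in range(2000):
    # The factor N undoes the 1/N in the gradient: this is the mean-field
    # time scaling under which the N -> infinity limit is nontrivial.
    theta -= lr * N * grad(theta)
```

After training, `loss(theta)` should be far below `init_loss`; rerunning with larger N changes the individual particles but barely changes the fitted function, which is the finite-N shadow of the limiting flow of measures.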