Lecture Notes
Inserted: 7 Oct 2022
Last Updated: 16 Sep 2024
Journal: Published by Springer, Cham
Year: 2022
Abstract:
It has recently been observed that the training of a single-hidden-layer artificial neural network can be reinterpreted as a Wasserstein gradient flow, in the space of weights, of the error functional. In the limit, as the number of parameters tends to infinity, this gives rise to a family of parabolic equations. This survey discusses this relation, focusing on the associated theoretical aspects of interest to the mathematical community, and provides a list of interesting open problems.
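For orientation, here is a minimal sketch of this relation in standard mean-field notation; the symbols below (the neuron map \sigma, target function f^*, and data distribution \pi) are illustrative assumptions, not necessarily the survey's own notation. Writing the network output as an average over a parameter measure \mu,

\[
f_\mu(x) = \int \sigma(x;\theta)\, d\mu(\theta), \qquad
F(\mu) = \frac{1}{2} \int \big| f_\mu(x) - f^*(x) \big|^2 \, d\pi(x),
\]
\[
\partial_t \mu_t = \nabla_\theta \cdot \Big( \mu_t \, \nabla_\theta \frac{\delta F}{\delta \mu}(\mu_t) \Big).
\]

Here the finite network \frac{1}{N} \sum_{i=1}^N \sigma(x;\theta_i) corresponds to the empirical measure \mu = \frac{1}{N} \sum_i \delta_{\theta_i}, and gradient descent on the weights \theta_i becomes, as N \to \infty, the Wasserstein gradient flow of F written above as a continuity equation; adding noise to the training dynamics yields a parabolic (Fokker-Planck-type) equation of the kind referred to in the abstract.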