Calculus of Variations and Geometric Measure Theory

N. Apollonio - D. De Canditiis - G. Franzina - P. Stolfi - G. L. Torrisi

Normal approximation of random Gaussian neural networks



Accepted Paper

Inserted: 10 Jul 2023
Last Updated: 28 Sep 2024

Journal: Stochastic Systems
Year: 2024

Abstract:

In this paper we provide explicit upper bounds on some distances between the law of the output of a random Gaussian neural network and the law of a random Gaussian vector. Our results cover both shallow random Gaussian neural networks with univariate output and fully connected, deep random Gaussian neural networks, under a rather general activation function. The upper bounds show how the widths of the layers, the activation function, and other architecture parameters affect the Gaussian approximation of the output. Our techniques, which rely on Stein's method and integration by parts formulas for the Gaussian law, yield estimates on distances that are integral probability metrics, including the total variation and convex distances. These latter metrics are defined by testing against indicator functions of suitable measurable sets, and so allow for accurate estimates of the probability that the output is localized in some region of the space. Such estimates are of significant interest from both a practitioner's and a theorist's perspective.
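As a rough illustration of the setting described in the abstract (this sketch is not from the paper), the Python snippet below simulates the univariate output of a shallow random Gaussian network with i.i.d. standard Gaussian weights and biases, tanh activation, and 1/sqrt(n) output scaling, and reports an empirical Kolmogorov distance to the standard normal law as the hidden width n grows. The scaling, the activation, and all names here are illustrative assumptions; the paper's results concern stronger metrics, such as the total variation and convex distances, and give explicit non-asymptotic bounds.

    import numpy as np
    from scipy.stats import kstest, norm

    # Illustrative sketch (NOT code from the paper): sample the output
    #   f(x) = n^{-1/2} * sum_{j=1}^n v_j * tanh(<w_j, x> + b_j)
    # with i.i.d. standard Gaussian v_j, w_j, b_j, and measure how close
    # its empirical law is to a Gaussian as the hidden width n grows.

    rng = np.random.default_rng(0)

    def shallow_outputs(x, n, activation=np.tanh, n_samples=20_000):
        d = x.shape[0]
        W = rng.standard_normal((n_samples, n, d))  # inner weights w_j
        b = rng.standard_normal((n_samples, n))     # biases b_j
        v = rng.standard_normal((n_samples, n))     # outer weights v_j
        hidden = activation(np.einsum("snd,d->sn", W, x) + b)
        return (v * hidden).sum(axis=1) / np.sqrt(n)

    x = np.ones(3)
    for n in (2, 10, 100):
        out = shallow_outputs(x, n)
        z = (out - out.mean()) / out.std()
        stat, _ = kstest(z, norm.cdf)  # Kolmogorov distance to N(0,1)
        print(f"width n = {n:4d}: empirical Kolmogorov distance ~ {stat:.4f}")

The decreasing distance with n mirrors, at a heuristic level, the role the layer widths play in the paper's upper bounds on the Gaussian approximation of the output.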

