Calculus of Variations and Geometric Measure Theory

F. P. Maiale - A. Trofimova - Arturo De Marinis

Approximation bounds for norm constrained deep neural networks


preprint

Inserted: 7 Jan 2026

Year: 2025

ArXiv: 2512.20422

Abstract:

This paper studies the approximation capacity of neural networks with an arbitrary activation function and a norm constraint on the weights. Upper and lower bounds on the approximation error of these networks are derived for smooth function classes. The upper bound is proven by first approximating high-degree monomials and then extending the construction to general smooth functions via a partition of unity and Taylor expansion. The lower bound follows from the Rademacher complexity of norm-constrained neural networks. A probabilistic version of the upper bound is also provided by considering networks with randomly sampled weights and biases. Finally, it is shown that the regularity assumption on the activation function can be significantly weakened without worsening the approximation error, and the upper bound is validated with numerical experiments.
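
To fix ideas, a common way to formalize such a class is a shallow network with a path-norm budget on the weights. The display below is only an illustrative formulation under that assumption; the paper's exact class, norm, and error metric may differ.

% Illustrative norm-constrained shallow network class (assumed form,
% not necessarily the paper's exact definition):
\[
  f_\theta(x) \;=\; \sum_{k=1}^{m} a_k\,\sigma\!\big(w_k^{\top}x + b_k\big),
  \qquad
  \sum_{k=1}^{m} |a_k|\,\big(\|w_k\|_1 + |b_k|\big) \;\le\; M,
\]
% with the approximation error over a smooth class \mathcal{F}
% (e.g. a H\"older ball) measured in the uniform norm:
\[
  \mathcal{E}(M) \;=\; \sup_{f^\ast \in \mathcal{F}} \,
  \inf_{f_\theta} \, \|f_\theta - f^\ast\|_{L^\infty([0,1]^d)}.
\]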
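As a rough companion to the probabilistic upper bound and the numerical experiments, here is a minimal sketch of a random-feature approximation check: inner weights and biases are sampled at random, the outer weights are fitted by least squares and rescaled to respect an l1 budget, and the sup-norm error is measured on a fine grid. Everything in it (the helper random_feature_fit, the tanh activation, all parameter values) is hypothetical and mirrors the paper's construction only in spirit.

import numpy as np

rng = np.random.default_rng(0)

def random_feature_fit(x_train, y_train, m=200, norm_budget=50.0,
                       activation=np.tanh):
    """Fit a shallow network with random inner weights/biases and
    least-squares outer weights, rescaled to an l1 budget.
    (Hypothetical helper; only mirrors the paper's scheme in spirit.)"""
    w = rng.normal(size=m)            # random inner weights
    b = rng.uniform(-1.0, 1.0, m)     # random biases
    features = activation(np.outer(x_train, w) + b)  # (n, m) design matrix
    a, *_ = np.linalg.lstsq(features, y_train, rcond=None)
    l1 = np.abs(a).sum()
    if l1 > norm_budget:              # enforce the norm constraint
        a *= norm_budget / l1
    return lambda x: activation(np.outer(x, w) + b) @ a

# Smooth target on [0, 1] and a sup-norm error check on a fine grid.
target = lambda x: np.sin(2 * np.pi * x) * np.exp(-x)
x_train = np.linspace(0.0, 1.0, 400)
f_hat = random_feature_fit(x_train, target(x_train))

x_test = np.linspace(0.0, 1.0, 2000)
err = np.max(np.abs(f_hat(x_test) - target(x_test)))
print(f"sup-norm error: {err:.3e}")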