Neural Network Approximation Theory

Abstract

We review classical and modern results in neural network approximation theory. We first consider the density of neural networks in various function spaces under different assumptions on the activation function. We then present lower and upper bounds on the order of approximation achievable by neural networks, in terms of the input dimension, the number of neurons, and a parameter quantifying the smoothness of the target function. Finally, we examine a family of compositional target functions for which deep neural networks overcome the curse of dimensionality.
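For orientation, one classical instance of each type of result can be written down explicitly; the sketch below states a representative density theorem and a representative approximation rate, which are standard in the literature but not necessarily the exact statements treated in the talk.

```latex
% A representative density result (Cybenko, 1989): for a continuous
% sigmoidal activation $\sigma$ and compact $K \subset \mathbb{R}^d$,
% shallow networks are dense in $C(K)$:
\[
  \overline{\Bigl\{\, x \mapsto \textstyle\sum_{k=1}^{n} c_k\,
  \sigma(w_k^{\top} x + b_k) \;:\; n \in \mathbb{N},\;
  c_k, b_k \in \mathbb{R},\; w_k \in \mathbb{R}^d \Bigr\}} \;=\; C(K).
\]

% A representative rate: for target functions $f$ in the unit ball of a
% smoothness-$s$ class on $[0,1]^d$, networks $\Phi_n$ with $n$ neurons
% achieve, up to constants and technical conditions on the approximants,
\[
  \inf_{\Phi_n} \,\bigl\| f - \Phi_n \bigr\|_{\infty} \;\asymp\; n^{-s/d},
\]
% so reaching accuracy $\varepsilon$ requires on the order of
% $\varepsilon^{-d/s}$ neurons: the curse of dimensionality that
% compositional targets with low-dimensional constituents can avoid.
```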

Date
Mar 4, 2021, 10:30–11:00 AM

Location
Berlin, Germany
Speaker
Shpresim Sadiku
PhD @TUBerlin/BMS, Scientific Assistant @ZuseInstitute