We provide a new technique for generating highly sparse, interpretable adversarial attacks.
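The sentence above does not specify the mechanism, so as a minimal illustrative sketch (not the paper's actual method), here is what a sparsity-constrained attack can look like: a one-step gradient perturbation projected onto the k largest-magnitude coordinates, which enforces an L0 budget. The function name, shapes, and the toy linear model are all hypothetical.

```python
import numpy as np

def sparse_attack(x, grad, k, eps):
    """One-step attack that perturbs only the k coordinates where the
    loss gradient is largest in magnitude (illustrative sketch)."""
    delta = eps * np.sign(grad)
    # Keep the k largest-|grad| coordinates; zero out the rest so the
    # perturbation touches at most k input entries (L0 constraint).
    keep = np.argsort(np.abs(grad).ravel())[-k:]
    mask = np.zeros(grad.size, dtype=bool)
    mask[keep] = True
    return x + delta * mask.reshape(grad.shape)

# Toy usage: "attack" the input of a linear score w.x.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
w = rng.normal(size=(8, 8))
grad = w                      # gradient of w.x with respect to x
x_adv = sparse_attack(x, grad, k=5, eps=0.1)
print(int((x_adv != x).sum()))  # at most 5 entries changed
```

Restricting the perturbation to a handful of coordinates is also what makes such attacks interpretable: one can inspect exactly which input features were modified.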
We revisit the backpropagation algorithm, widely used by practitioners to train deep neural networks.
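Since the line only names the algorithm, a minimal NumPy sketch of backpropagation through a two-layer ReLU network may help fix ideas; the architecture, loss, and shapes are illustrative choices, not taken from the source.

```python
import numpy as np

# Forward pass caches activations; backward pass applies the chain
# rule layer by layer from the loss back to the weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # batch of 4 inputs
y = rng.normal(size=(4, 2))          # targets
W1 = rng.normal(size=(3, 5)) * 0.1
W2 = rng.normal(size=(5, 2)) * 0.1

# Forward pass.
h_pre = x @ W1                       # pre-activations
h = np.maximum(h_pre, 0.0)           # ReLU
y_hat = h @ W2
loss = 0.5 * np.mean((y_hat - y) ** 2)

# Backward pass (chain rule).
d_yhat = (y_hat - y) / y.size        # dL/dy_hat
dW2 = h.T @ d_yhat                   # dL/dW2
d_h = d_yhat @ W2.T                  # dL/dh
d_hpre = d_h * (h_pre > 0)           # gradient gated by ReLU
dW1 = x.T @ d_hpre                   # dL/dW1

# One gradient-descent step.
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
print(f"loss = {loss:.4f}")
```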
We provide new insights into the vulnerabilities of deep learning models by showing that training-based and basis-manipulation defenses are significantly less effective when adversarial attacks are restricted to the low-frequency discrete wavelet transform (DWT) domain.
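To make the restriction concrete, here is a hedged sketch (not the paper's exact attack) of perturbing only the low-frequency approximation band of a 2D DWT, using PyWavelets. The function name and epsilon are hypothetical, and in practice the gradient over the approximation coefficients would come from backpropagating through the inverse transform rather than being supplied directly.

```python
import numpy as np
import pywt  # PyWavelets

def low_freq_perturb(image, grad_approx, eps, wavelet="haar"):
    """Add a signed-gradient perturbation only to the low-frequency
    (approximation) DWT coefficients; detail bands stay untouched."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    # Spend the whole attack budget in the approximation band.
    cA_adv = cA + eps * np.sign(grad_approx)
    return pywt.idwt2((cA_adv, (cH, cV, cD)), wavelet)

# Toy usage with a random stand-in for the gradient over cA.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
cA, _ = pywt.dwt2(img, "haar")
adv = low_freq_perturb(img, rng.normal(size=cA.shape), eps=0.05)
print(adv.shape)  # (32, 32): reconstruction matches the input size
```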
We review classical and modern results in neural network approximation theory.
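One classical result typically covered in such a review is the universal approximation theorem; an informal statement, in the non-polynomial-activation form due to Leshno et al. (1993), following Cybenko's (1989) sigmoidal version:

```latex
% Informal statement of a classical approximation result
% (Cybenko 1989; Leshno et al. 1993).
\begin{theorem}[Universal approximation, informal]
Let $\sigma$ be a continuous, non-polynomial activation function.
For every $f \in C([0,1]^d)$ and every $\varepsilon > 0$, there exist
$N \in \mathbb{N}$, weights $w_i \in \mathbb{R}^d$, and scalars
$a_i, b_i \in \mathbb{R}$ such that
\[
  \sup_{x \in [0,1]^d}
  \Bigl|\, f(x) - \sum_{i=1}^{N} a_i \,\sigma(w_i^\top x + b_i) \Bigr|
  < \varepsilon .
\]
\end{theorem}
```

Modern refinements of this line of work quantify how the required width $N$ (or depth) scales with $\varepsilon$ and with the smoothness of $f$.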