Recent & Upcoming Talks

Explaining Deep Neural Networks Through Inverse Classification

This talk advances DNN interpretability via inverse classification by introducing frameworks and algorithms for structured, sparse, and plausible adversarial and counterfactual examples. We develop group-wise attacks via nonconvex proximal methods, generate efficient data-aligned counterfactuals via accelerated proximal gradient methods with non-smooth ℓₚ regularization, and show that training on such counterfactuals improves robustness, fairness, and generalization. Together, these results unify explanation and learning for transparent, reliable models.
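
As a rough sketch of the proximal-gradient idea (the non-accelerated ISTA variant, with p = 1 so the prox step is soft-thresholding), here is a toy counterfactual search on a logistic model; the model, loss, step size, and regularization weight are illustrative placeholders, not the talk's implementation:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1: shrinks each coordinate toward 0.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def counterfactual_l1(x, w, b, target, lam=0.1, step=0.1, iters=200):
    """Sparse counterfactual for a toy logistic model p(y=1|x) = sigmoid(w.x + b).

    Minimizes  BCE(sigmoid(w.(x+d)+b), target) + lam * ||d||_1  over the
    perturbation d via proximal gradient descent (ISTA). Illustrative only.
    """
    d = np.zeros_like(x)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(w @ (x + d) + b)))
        grad = (p - target) * w          # gradient of BCE w.r.t. d
        d = soft_threshold(d - step * grad, step * lam)
    return x + d

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0
x = rng.normal(size=5)
x_cf = counterfactual_l1(x, w, b, target=1.0)
print("nonzero changes:", np.count_nonzero(x_cf - x))
```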

Explaining Deep Neural Networks Through Fooling

We explore the brittleness of DNNs through adversarial attacks and counterfactual explanations.
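
For concreteness, here is a hedged sketch of one classic attack, the Fast Gradient Sign Method (FGSM), on a toy logistic model; the weights and ε below are arbitrary:

```python
import numpy as np

def fgsm(x, grad, eps=0.1):
    # Fast Gradient Sign Method: one step of size eps in the sign of the
    # loss gradient, a standard recipe for exposing DNN brittleness.
    return x + eps * np.sign(grad)

# Toy logistic model: the loss gradient w.r.t. the input is (p - y) * w.
w, b = np.array([1.0, -2.0, 0.5]), 0.0
x, y = np.array([0.2, -0.1, 0.3]), 1.0
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
x_adv = fgsm(x, (p - y) * w)
print(x, "->", x_adv)
```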

Sparse and Plausible Counterfactual Explanations

We identify three promising approaches to generate sparse and plausible counterfactual explanations.

Minimally Distorted Explainable Adversarial Attacks

We provide a new technique to generate highly sparse and explainable adversarial attacks.
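
The technique itself is not reproduced here; as a toy illustration of what sparsity means for an attack, the sketch below perturbs only the k input coordinates with the largest loss-gradient magnitude:

```python
import numpy as np

def topk_sparse_attack(x, grad, k=2, eps=0.5):
    # Perturb only the k most loss-sensitive coordinates, leaving the rest
    # untouched: a crude way to get a sparse, inspectable perturbation.
    idx = np.argsort(np.abs(grad))[-k:]
    delta = np.zeros_like(x)
    delta[idx] = eps * np.sign(grad[idx])
    return x + delta
```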

Calculus for Data Science

We revisit the Calculus fundamentals that are essential for Data Science applications.
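
As a small example of the kind of material covered, here is gradient descent driven by a hand-computed derivative; the function and learning rate are arbitrary:

```python
# Gradient descent on f(x) = (x - 3)^2, whose derivative is f'(x) = 2(x - 3).
x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * 2 * (x - 3)   # step against the derivative
print(x)  # converges to the minimizer x = 3
```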

Linear Algebra for Data Science

We revisit the Linear Algebra fundamentals that are essential for Data Science applications.
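
As a small example of the kind of material covered, here is PCA computed via the singular value decomposition; the data is synthetic:

```python
import numpy as np

# PCA via the SVD: project centered data onto the top-2 right singular
# vectors (the principal directions).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T   # 2-D projection capturing the most variance
print(X2.shape)      # (100, 2)
```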

What is Backpropagation?

We revisit the Backpropagation algorithm, widely used by practitioners to train Deep Neural Networks.
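
A minimal NumPy sketch of backpropagation for a one-hidden-layer regression network follows; the architecture, data, and learning rate are arbitrary placeholders:

```python
import numpy as np

# Forward pass, then gradients via the chain rule, then a gradient step.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 3)), rng.normal(size=(64, 1))
W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)

for _ in range(500):
    # Forward pass
    h = np.maximum(X @ W1 + b1, 0.0)      # ReLU hidden layer
    yhat = h @ W2 + b2
    # Backward pass (chain rule) for mean squared error loss
    g_yhat = 2 * (yhat - y) / len(X)
    g_W2, g_b2 = h.T @ g_yhat, g_yhat.sum(0)
    g_h = g_yhat @ W2.T
    g_z = g_h * (h > 0)                    # ReLU derivative
    g_W1, g_b1 = X.T @ g_z, g_z.sum(0)
    # Gradient step
    for p, g in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
        p -= 0.1 * g
```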

Wavelet-based Low Frequency Adversarial Attacks

We provide new insights into vulnerabilities of deep learning models by showing that training-based and basis-manipulation defense methods are significantly less effective if we restrict the generation of adversarial attacks to the low-frequency discrete wavelet transform domain.
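
As a hedged sketch of the low-frequency restriction (assuming the PyWavelets package, and not the talk's exact attack), one can decompose the image, perturb only the approximation subband, and reconstruct:

```python
import numpy as np
import pywt  # PyWavelets; an assumed dependency for this sketch

def low_freq_perturb(img, grad, eps=0.05, wavelet="haar", level=2):
    """Restrict a sign-gradient perturbation to the low-frequency DWT subband.

    Decompose image and gradient, perturb only the approximation
    coefficients, then reconstruct. A toy version of the idea.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    g_coeffs = pywt.wavedec2(grad, wavelet, level=level)
    coeffs[0] = coeffs[0] + eps * np.sign(g_coeffs[0])  # low-freq band only
    return pywt.waverec2(coeffs, wavelet)
```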

Neural Network Approximation Theory

We review classical and modern results in Neural Network Approximation Theory.
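
One classical result in this area, stated informally (Cybenko 1989; Hornik 1991; Leshno et al. 1993 for non-polynomial activations):

```latex
% Universal approximation, informal statement: for any continuous
% f : K -> R on a compact set K in R^d, a continuous non-polynomial
% activation sigma, and any eps > 0, some one-hidden-layer network
% is within eps of f in the uniform norm:
\[
  \sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} a_i \,
  \sigma(w_i^{\top} x + b_i) \Bigr| < \varepsilon
  \quad \text{for some } N,\; a_i, b_i \in \mathbb{R},\; w_i \in \mathbb{R}^d .
\]
```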