
One World Theoretical Machine Learning (youtube.com channel UCz7WlgXs20CzugkfxhFCNFg)


This is the YouTube channel of the One World Seminar Series.


Qin Li - Mean field theory in Inverse Problems: From Bayesian inference to overparametrized networks (51:40)
Jeremy Budd - Joint Reconstruction-Segmentation with Graph PDEs (49:56)
Rama Cont - Asymptotic Analysis of Deep Residual Networks (57:39)
Anna Korba - Kernel Stein Discrepancy Descent (51:06)
Robert Nowak - What Kinds of Functions Do Neural Networks Learn? (55:44)
Yury Korolev - Approximation properties of two-layer neural networks with values in a Banach space (39:42)
Po-Ling Loh - Robust W-GAN-Based Estimation Under Wasserstein Contamination (49:24)
Matthias Ehrhardt - Bilevel Learning for Inverse Problems (53:48)
Ferdia Sherry - Structure-preserving machine learning for inverse problems (51:52)
Arthur Gretton - Generalized Energy-Based Models (58:27)
Andrew Stuart - Learning Linear Operators (01:01:37)
George Karniadakis - Approximating functions, functionals and operators using DNNs (49:37)
Tatiana Bubba - Deep neural networks for inverse problems with pseudodifferential operators (45:40)
Eldad Haber - PDE Inspired Graph Neural Networks (48:34)
Bao Wang - Advances of momentum in optimization algorithms and neural architecture design (43:18)
Petar Veličković - Geometric Deep Learning: Grids, Graphs, Groups, Geodesics and Gauges (52:42)
Sebastian Kassing - Convergence of Stochastic Gradient Descent for analytic target functions (42:14)
Kaushik Bhattacharya - Learning based multi-scale modeling (01:03:08)
Nik Nuesken - Stein geometry in machine learning: gradient flows, optimal transport, large deviations (52:52)
Wuchen Li - Transport information Bregman divergences (56:03)
Nicolas Garcia Trillos - Adversarial Classification, Optimal Transport, and Geometric Flows (48:10)
Derek Driggs - Barriers to Deploying Deep Learning Models During the COVID-19 Pandemic (53:23)
Boumediene Hamzi - Machine Learning and Dynamical Systems meet in Reproducing Kernel Hilbert Spaces (48:20)
Bubacarr Bah - Discrete Optimization Methods for Group Model Selection in Compressed Sensing (01:02:38)
Jeff Calder - Random walks and PDEs in graph-based learning (51:37)
Boris Hanin - Finite Width, Large Depth Neural Networks as Perturbatively Solvable Models (58:50)
Nathaniel Trask - Structure preservation and convergence in scientific machine learning (49:41)
Frederic Koehler - Classification Under Misspecification (52:11)
Andrea Agazzi - Convergence & optimality of single-layer neural networks for reinforcement learning (49:37)
Melanie Weber - Geometric Methods for Machine Learning and Optimization (57:01)
Carola-Bibiane Schönlieb - Machine Learned Regularization for Solving Inverse Problems (51:15)
Ziwei Ji - The dual of the margin: improved analyses and rates for gradient descent’s implicit bias (49:45)
Nadia Drenska - A PDE Interpretation of Prediction with Expert Advice (51:23)
Felix Voigtlaender - Neural networks for classification problems with boundaries of Barron class (43:20)
Bamdad Hosseini - Conditional Sampling with Monotone GANs: Generative Models and Inverse Problems (01:08:21)
Zhengdao Chen - A Dynamical Central Limit Theorem for Shallow Neural Networks (52:48)
Jonas Latz - Analysis of Stochastic Gradient Descent in Continuous Time (59:57)
Yu Bai - How Important is the Train-Validation Split in Meta-Learning? (53:14)
Ryan Murray - Consistency of Cheeger cuts: Total Variation, Isoperimetry, and Clustering (01:00:48)
Qi Lei - Predicting What You Already Know Helps: Provable Self-Supervised Learning (43:09)
Jason Klusowski - Sparse Learning with CART (55:41)
Franca Hoffmann - Geometric Insights into Spectral Clustering by Graph Laplacian Embeddings (01:07:06)
Alex Safsten - Stability of Accuracy for Deep Neural Network Classifiers (47:49)
Holden Lee - Provable Algorithms for Sampling Non-log-concave Distributions (48:43)
Nadav Cohen - Analyzing Optimization and Generalization in DL via Dynamics of Gradient Descent (01:05:07)
Lénaïc Chizat - Analysis of Gradient Descent on Wide Two-Layer ReLU Neural Networks (01:07:29)
Lei Wu - Understanding flow-based models: Representation, landscape and gradient flow (50:54)
Soledad Villar - Dimensionality reduction and matching datasets (01:02:38)
Yiping Lu - Optimization Of Neural Network: A Continuous Depth Limit Point Of View And Beyond (44:57)
Lukasz Szpruch - Mean-Field Neural ODEs, Relaxed Control and Generalization Errors (01:04:04)
Aditi Raghunathan - Tradeoffs between Robustness and Accuracy (57:04)
Stephan Wojtowytsch - Banach spaces for multi-layer networks and connections to mean field training (01:06:13)
Huy Tuan Pham - A general framework for the mean field limit of multilayer neural networks (56:39)
Konstantinos Spiliopoulos - Mean field limits of neural networks: typical behavior and fluctuations (01:05:00)
Roberto I. Oliveira - A mean-field theory for certain deep neural networks (53:11)
Anders Hansen - On the foundations of computational mathematics and the potential limits of AI (01:15:10)
Eric Vanden-Eijnden - Trainability and accuracy of artificial neural networks (01:11:07)
Weinan E - Towards a mathematical understanding of supervised learning (01:12:53)