One World Theoretical Machine Learning (@UCz7WlgXs20CzugkfxhFCNFg@youtube.com)

This is the YouTube channel of the One World Seminar Series.


48:10
Hao Ni - Path development network for sequential data analysis
48:34
Simon Du - How Over-Parameterization Slows Down Gradient Descent
49:33
Micah Goldblum - Bridging the Gap between Deep Learning Theory and Practice
54:19
Lei Wu - Understanding the implicit bias of SGD: A dynamical stability perspective
51:24
Bobak Kiani - On the hardness of learning under symmetries
01:00:05
Hemant Tyagi - Dynamic ranking and translation synchronization
46:41
Nicolas Boulle - Elliptic PDE learning is provably data-efficient
49:21
Yuan Cao - Understanding Deep Learning Through Phenomena Discovery and Explanation
44:50
Mufan Li - Infinite-Depth Neural Networks as Depthwise Stochastic Processes
50:35
Aditya Varre - On the spectral bias of two-layer linear networks
49:54
Shuyang Ling - Neural collapse phenomenon for unconstrained feature model with imbalanced datasets
55:37
Marius Zeinhofer - Error Analysis and Optimization Methods for Scientific Machine Learning
58:30
Keaton Hamm - Manifold Learning in Wasserstein Space
39:07
Tan Nguyen - Transformers Meet Image Denoising: Mitigating Over-smoothing in Transformers
51:17
Lisa Kreusser - Unveiling the role of the Wasserstein Distance in Generative Modelling
49:20
Ting Lin - Universal Approximation and Expressive Power of Deep Neural Networks
51:29
Theo Bourdais - Computational Hypergraph Discovery, a Gaussian Process framework
53:25
Sebastian Goldt - Gaussian world is not enough: Analysing neural nets beyond Gaussian models of data
45:37
Jakwang Kim - Understanding adversarial robustness via optimal transport perspective
46:59
Zhiqin Xu - Simple bias in deep learning
45:10
Matthieu Darcy - Kernel methods for operator learning
46:25
Liyuan Liu - Bridge Discrete Variables with Back-Propagation and Beyond
42:03
Ekaterina Rapinchuk - A Fast Graph-Based Classification Method with Applications to 3D Sensory Data
51:11
Christian Fiedler - Reproducing kernel Hilbert spaces in the mean field limit
01:00:35
Jean-Francois Aujol - FISTA is a geometrically optimized algorithm for strongly convex functions
32:29
Somdatta Goswami - Transfer Learning in Physics-Based Applications with Deep Neural Operators
46:41
Yaoqing Yang - Predicting & improving generalization by measuring loss landscapes & weight matrices
01:06:07
Bertrand Gauthier - Energy-driven sampling for PSD-matrix low-rank approximation
48:00
Ying Jin - Prediction-Assisted Screening and Discovery with Conformal p-values
53:00
Kevin Miller - Ensuring Exploration and Exploitation in Graph-Based Active Learning
53:40
Jose Gallego-Posada - Controlled Sparsity via Constrained Optimization
57:03
Kathryn Lindsey - Images and fibers of the realization map for feedforward ReLU neural networks
43:24
Anirbit Mukherjee - Provable Training of Neural Nets With One Layer of Activation
41:24
Deanna Needell - Using Algebraic Factorizations for Interpretable Learning
41:54
Daniel Cremers - Self-Supervised Learning for 3D Shape Analysis
56:39
Elisenda Grigsby - Functional dimension of ReLU Networks
42:06
Rebekka Burkholz - Pruning Deep Neural Networks for Lottery Tickets
44:12
Soufiane Hayou - Principled scaling of deep neural networks
53:36
Anna Little - Unbiasing Procedures for Scale-invariant Multi-reference Alignment
56:12
Francis Bach - Information theory through kernel methods
01:01:36
Simone Brugiapaglia - Foundations of deep learning: from rating impossibility to existence theorems
41:26
Johannes Brandstetter - Towards a new generation of neural PDE surrogates
58:22
Marcus Hutter - Testing Independence of Exchangeable Random Variables
54:26
Leon Bungert - Uniform convergence rates for infinity Laplacian equations on graphs
01:03:53
Sophie Langer - Circumventing the curse of dimensionality with deep neural networks
01:04:35
Peter Richtarik - The Resolution of a Question Related to Local Training in Federated Learning
58:28
Gal Vardi - Implications of the implicit bias in neural networks
01:00:05
Denny Wu - High-dimensional asymptotics of feature learning in the early phase of NN training
57:48
Stephan Mandt - Compressing Variational Bayes: From neural data compression to video prediction
50:28
Gregory Schwartzman - SGD Through the Lens of Kolmogorov Complexity
46:50
Robin Walters - Symmetry in Neural Network Parameters and Non-Linearities
56:57
Tyrus Berry - Beyond Regression: Operators and Extrapolation in Machine Learning
46:42
Matthew Colbrook - Smale’s 18th Problem and the Barriers of Deep Learning
56:06
Hongyang Zhang - Understanding and improving generalization in multitask and transfer learning
50:21
Houman Owhadi - Computational Graph Completion
46:38
Loucas Pillaud-Vivien - The role of noise in non-convex machine learning dynamics
48:37
Alessandro Scagliotti - Deep Learning Approximation of Diffeomorphisms via Linear-Control Systems
01:01:35
Chris Budd and Simone Appella - R-Adaptivity, Deep Learning and Optimal Transport
56:09
Soheil Kolouri - Wasserstein Embeddings in the Deep Learning Era