
DataMListic @UCRM1urw2ECVHH7ojJw8MXiQ@youtube.com

43K subscribers

Welcome to DataMListic (formerly WhyML)! On this channel I exp


09:26  Depth Anything 3: Recovering the Visual Space from Any Views - Paper Walkthrough
04:11  Triangular Matrices and LU Decomposition - Explained
04:13  Nested Learning: The Illusion of Deep Learning Architectures - Paper Walkthrough
04:04  Cracking ML Interviews: Covariance vs Correlation (Question 15)
03:03  The Identity Matrix - Explained
03:03  Cracking ML Interviews: Self-Attention Mechanism (Question 14)
04:28  Rotation and Reflection Matrices - Explained
04:40  Cracking ML Interviews: Precision, Recall and F1-Score (Question 13)
08:08  Orthogonal Matrices - Explained
06:13  Cracking ML Interviews: ROC and AUC (Question 12)
06:22  Symmetric Matrices and the Positive Definiteness
03:12  Cracking ML Interviews: K-Fold Cross-Validation (Question 11)
01:09  Categorical Distribution - ML Snippets
06:44  The Hessian Matrix - Explained
05:30  Cracking ML Interviews: Batch Normalization (Question 10)
02:44  Cracking ML Interviews: L1/L2 Regularization (Question 9)
05:06  Cracking ML Interviews: Skip Connection Layer (Question 8)
04:45  The Jacobian Matrix - Explained
02:36  Cracking ML Interviews: Activation Functions in Neural Nets (Question 7)
01:25  Vector Databases & Vector Search - ML Snippets
05:07  Cracking ML Interviews: Convolutional Neural Networks (Question 6)
08:04  Cracking ML Interviews: Stochastic Gradient Descent (Question 5)
04:03  Cracking ML Interviews: Solving the Linear Regression Equation (Question 4)
03:35  Statistical Moments: Mean, Variation, Skewness, Kurtosis
04:14  Cracking ML Interviews: Underfitting vs Overfitting (Question 3)
01:42  Binomial Distribution - ML Snippets
05:39  Cracking ML Interviews: Maximum Likelihood Estimation (Question 2)
05:44  Cracking ML Interviews: Bayes Theorem Exercise (Question 1)
04:31  Simpson's Paradox - Explained
08:15  Frequentist vs Bayesian Thinking
02:03  L1 Regularization - ML Snippets
08:11  Kernel Density Estimation - Explained
01:09  t-SNE Intuition - ML Snippets
05:10  Accept-Reject Sampling - Explained
08:41  Dirichlet Distribution - Explained
01:47  Kernel Trick - ML Snippets
01:42  Cross-Entropy - ML Snippets
09:17  Markov Chain Monte Carlo (MCMC) - Explained
01:09  Bernoulli Distribution - ML Snippets
02:37  Gaussian Distribution - ML Snippets
05:27  Bayes' Theorem - Explained
01:15  Markov Chains - ML Snippets
04:49  Taylor Series - Explained
03:48  Linear Regression (Least Squared Errors) - Explained
04:13  Monte Carlo Simulation - Explained
04:42  Lagrange Multipliers - Explained
04:03  Variational Autoencoder - Explained
04:41  Central Limit Theorem - Explained
05:27  Degrees of Freedom - Explained
05:57  Qwen3 - Paper Walkthrough
05:31  Gamma Function - Explained
05:35  Variational Inference - Explained
08:01  Forward-Backward Algorithm | Hidden Markov Models Part 3
09:01  Magistral - Paper Walkthrough
08:21  Student's t-Distribution - Explained
03:57  Why Larger Language Models Do In-context Learning Differently? - Paper Walkthrough
08:36  DeepSeek-R1 - Paper Walkthrough
08:02  t-SNE - Explained
08:04  The Illusion of Thinking - Paper Walkthrough
05:18  Perception Encoder - Paper Walkthrough