
Aleksa Gordić - The AI Epiphany (youtube.com/channel/UCj8shE7aIn4Yawwbo2FceCQ)

52.5K subscribers

Ex-Google DeepMind, ex-Microsoft engineer explaining AI. ❤️

[20:24] How I Got a Job at DeepMind as a Research Engineer (without a Machine Learning Degree!)
[01:42] Channel update: moving to London in 2 days, new MLOps series
[01:25:47] Coding a Neural Network from Scratch in Pure JAX | Machine Learning with JAX | Tutorial #3
[01:08:59] Machine Learning with JAX - From Hero to HeroPro+ | Tutorial #2
[01:17:57] Machine Learning with JAX - From Zero to Hero | Tutorial #1
[23:32] T0: Multitask Prompted Training Enables Zero-Shot Task Generalization | Paper Explained
[24:57] ResNet Strikes Back! | Patches Are All You Need? | Papers Explained
[24:36] Fake It Till You Make It (Microsoft) | Paper Explained
[16:21] 10k subscribers | joining Google DeepMind, updates, AMA
[32:54] The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for RL | Paper Explained
[30:34] DeepMind Perceiver and Perceiver IO | Paper Explained
[39:06] ETA Prediction with Graph Neural Networks in Google Maps | Paper Explained
[31:35] Neural Search with Jina AI | Open-source ML Tool Explained
[21:54] ALiBi | Train Short, Test Long: Attention With Linear Biases Enables Input Length Extrapolation
[48:53] Facebook AI's DINO | PyTorch Code Explained
[15:22] Fastformer: Additive Attention Can Be All You Need | Paper Explained
[34:51] Do Vision Transformers See Like Convolutional Neural Networks? | Paper Explained
[28:54] DeepMind DetCon: Efficient Visual Pretraining with Contrastive Detection | Paper Explained
[31:54] DINO: Emerging Properties in Self-Supervised Vision Transformers | Paper Explained!
[31:19] DETR: End-to-End Object Detection with Transformers | Paper Explained
[03:44] Channel Update: vacation, leaving Microsoft, approaching 10k subs and more!
[33:27] DALL-E: Zero-Shot Text-to-Image Generation | Paper Explained
[24:41] RMA: Rapid Motor Adaptation for Legged Robots | Paper Explained
[24:44] AudioCLIP: Extending CLIP to Image, Text and Audio | Paper Explained
[22:39] Focal Transformer: Focal Self-attention for Local-Global Interactions in Vision Transformers
[21:01] Multimodal Few-Shot Learning with Frozen Language Models | Paper Explained
[30:01] VQ-GAN: Taming Transformers for High-Resolution Image Synthesis | Paper Explained
[34:38] VQ-VAEs: Neural Discrete Representation Learning | Paper + PyTorch Code Explained
[21:08] GANs N' Roses: Stable, Controllable, Diverse Image to Image Translation | Paper Explained
[38:41] Graphormer - Do Transformers Really Perform Bad for Graph Representation? | Paper Explained
[36:06] Text Style Brush - Transfer of text aesthetics from a single example | Paper Explained
[35:41] Chip Placement with Deep Reinforcement Learning | Paper Explained
[45:55] Non-Parametric Transformers | Paper explained
[23:14] When Vision Transformers Outperform ResNets without Pretraining | Paper Explained
[17:46] DeepMind's Android RL Environment - AndroidEnv
[28:00] MLP-Mixer: An all-MLP Architecture for Vision | Paper explained
[26:38] Implementing DeepMind's DQN from scratch! | Project Update
[27:56] EfficientNetV2 - Smaller Models and Faster Training | Paper explained
[28:05] MuZero - Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | RL Paper explained
[55:27] OpenAI - Solving Rubik's Cube with a Robot Hand | RL paper explained
[41:41] DeepMind's AlphaGo Zero and AlphaZero | RL paper explained
[51:24] AlphaGo - Mastering the game of Go with deep neural networks and tree search | RL Paper Explained
[51:04] DQN - Playing Atari with Deep Reinforcement Learning | RL Paper Explained
[46:31] How to get started with Graph ML? (Blog walkthrough)
[40:03] Graph Attention Network Project Walkthrough
[08:14] Graph Neural Network Project Update! (I'm coding GAT from scratch)
[39:28] Temporal Graph Networks (TGN) | GNN Paper Explained
[53:07] OpenAI CLIP - Connecting Text and Images | Paper Explained
[50:51] PinSage - Graph Convolutional Neural Networks for Web-Scale Recommender Systems | Paper Explained
[43:37] GraphSAGE - Inductive Representation Learning on Large Graphs | GNN Paper Explained
[50:04] Graph Convolutional Networks (GCN) | GNN Paper Explained
[37:44] Graph Attention Networks (GAT) | GNN Paper Explained
[38:45] Attention Is All You Need (Transformer) | Paper Explained
[38:30] Google DeepMind's AlphaFold 2 explained! (Protein folding, AlphaFold 1, a glimpse into AlphaFold 2)
[46:56] GPT-3 - Language Models are Few-Shot Learners | Paper Explained
[24:57] Vision Transformer (ViT) - An image is worth 16x16 words | Paper Explained
[31:46] Developing a deep learning project (case study on transformer)
[20:12] How do transformers work? (Attention is all you need)
[28:20] How to learn deep learning? (Transformers Example)
[07:23] Cheapest (0$) Deep Learning Hardware Options | 2021