
Herman Kamper @UCBu4J-JIs-UORp5pQ6M48nw@youtube.com

5.8K subscribers

Videos from this channel:


13:04  Self-attention details (NLP817 11.3)
05:39  Attention recap (NLP817 11.2)
12:08  Intuition behind self-attention (NLP817 11.1)
09:27  Byte-pair encoding (BPE) (NLP817 2.6)
03:23  Stems and lemmas (NLP817 2.5)
05:30  Morphology (NLP817 2.4)
11:59  Words (NLP817 2.3)
07:41  Text normalisation and tokenisation (NLP817 2.2)
08:09  A first NLP example (NLP817 2.1)
14:53  What is natural language processing? (NLP817 1)
19:36  Edit distance (NLP817 2.7)
03:09  What should I read to learn about neural networks?
08:22  Neural networks examples: Natural language processing
06:40  Neural networks in practice
09:02  What is the difference between negative log likelihood and cross entropy? (in neural networks)
03:42  Backpropagation in general (now with forks)
13:46  Forks in neural networks
07:56  A general notation for derivatives (in neural networks)
07:18  Common derivatives for neural networks
06:56  Computational graphs and automatic differentiation for neural networks
04:03  Backprop for a multilayer feedforward neural network
31:02  Backpropagation (without forks)
03:40  Why is it called a neural network?
18:46  From logistic regression with basis functions to neural networks
06:06  Neural network preliminaries: Logistic regression, softmax and basis functions
04:12  Neural network preliminaries: Gradient descent
07:09  Neural network preliminaries: The chain rule for vector derivatives
04:38  Neural network preliminaries: Vector and matrix derivatives
01:04:25  AI, ChatGPT, and God (TGIF & KRUX 2023)
42:48  Using machine learning to assess final-year project reports (MML 2023)
22:51  Evaluating machine translation with BLEU (NLP817 10.8)
13:17  Attention - More general (NLP817 10.7)
21:30  Basic attention (NLP817 10.6)
18:12  Beam search (NLP817 10.5)
04:35  Greedy decoding (NLP817 10.4)
18:15  Encoder-decoder models in general (NLP817 10.3)
10:23  Training and loss for encoder-decoder models (NLP817 10.2)
13:08  A basic encoder-decoder model for machine translation (NLP817 10.1)
22:47  AAAI SAS 2022: Unsupervised speech segmentation (Invited Talk)
03:05  Interspeech 2021: Towards unsupervised phone & word segmentation using vector-quantized NNs
05:38  Dynamic time warping 3: Python code
17:35  Dynamic time warping 4: Aligning sequences of vectors
26:06  Dynamic time warping 2: Algorithm
12:03  Dynamic time warping 1: Motivation
22:05  Speech features intro 3: Mel-scale spectrogram
11:46  Speech features intro 4: Additional aspects
17:03  Speech features intro 2: Short-time Fourier transform
19:43  Speech features intro 1: (Fast) Fourier transform
09:55  Evaluation: Precision, recall example
18:13  Evaluation: Accuracy, precision, recall, F1
09:26  Preprocessing 2: Categorical features and categorical output
14:13  Preprocessing 1: Feature normalisation and scaling
14:48  Logistic regression 5.2: Multiclass - Softmax regression
05:16  Logistic regression 5.1: Multiclass - One-vs-rest classification
05:33  Logistic regression 4: Basis functions and regularisation
20:32  Logistic regression 3: The decision boundary and weight vector
10:46  Gradient descent 1: Fundamentals
07:15  Logistic regression 2: Optimisation
13:36  Logistic regression 1: Model and loss
07:46  Classification 4: Generative vs discriminative