Colin Reckons @UCkHMNtWMGvMgN5R81E8mYJg@youtube.com

12K subscribers - no pronouns :c

My name is Colin and I like to reckon. Free ideas.


14:35
Lecture 2.5 — What perceptrons can't do [Neural Networks for Machine Learning]
05:39
Lecture 1.4 — A simple example of learning [Neural Networks for Machine Learning]
08:24
Lecture 1.3 — Some simple models of neurons [Neural Networks for Machine Learning]
08:31
Lecture 1.2 — What are neural networks [Neural Networks for Machine Learning]
13:15
Lecture 1.1 — Why do we need machine learning [Neural Networks for Machine Learning]
07:29
Lecture 2.1 — Types of neural network architectures [Neural Networks for Machine Learning]
07:38
Lecture 1.5 — Three types of learning [Neural Networks for Machine Learning]
08:17
Lecture 2.2 — Perceptrons: first-generation neural networks [Neural Networks for Machine Learning]
06:25
Lecture 2.3 — A geometrical view of perceptrons [Neural Networks for Machine Learning]
05:04
Lecture 3.2 — The error surface for a linear neuron [Neural Networks for Machine Learning]
11:56
Lecture 3.1 — Learning the weights of a linear neuron [Neural Networks for Machine Learning]
05:10
Lecture 2.4 — Why the learning works [Neural Networks for Machine Learning]
03:57
Lecture 3.3 — Learning weights of logistic output neuron [Neural Networks for Machine Learning]
11:52
Lecture 3.4 — The backpropagation algorithm [Neural Networks for Machine Learning]
09:50
Lecture 3.5 — Using the derivatives from backpropagation [Neural Networks for Machine Learning]
12:34
Lecture 4.1 — Learning to predict the next word [Neural Networks for Machine Learning]
04:27
Lecture 4.2 — A brief diversion into cognitive science [Neural Networks for Machine Learning]
07:21
Lecture 4.3 — The softmax output function [Neural Networks for Machine Learning]
07:53
Lecture 4.4 — Neuro-probabilistic language models [Neural Networks for Machine Learning]
12:17
Lecture 4.5 — Dealing with many possible outputs [Neural Networks for Machine Learning]
04:41
Lecture 5.1 — Why object recognition is difficult [Neural Networks for Machine Learning]
05:59
Lecture 5.2 — Achieving viewpoint invariance [Neural Networks for Machine Learning]
16:02
Lecture 5.3 — Convolutional nets for digit recognition [Neural Networks for Machine Learning]
08:23
Lecture 6.1 — Overview of mini batch gradient descent [Neural Networks for Machine Learning]
17:45
Lecture 5.4 — Convolutional nets for object recognition [Neural Networks for Machine Learning]
11:39
Lecture 6.5 — Rmsprop: normalize the gradient [Neural Networks for Machine Learning]
13:16
Lecture 6.2 — A bag of tricks for mini batch gradient descent [Neural Networks for Machine Learning]
06:24
Lecture 7.2 — Training RNNs with back propagation [Neural Networks for Machine Learning]
17:24
Lecture 7.1 — Modeling sequences: a brief overview [Neural Networks for Machine Learning]
06:15
Lecture 7.3 — A toy example of training an RNN [Neural Networks for Machine Learning]
14:36
Lecture 8.2 — Modeling character strings [Neural Networks for Machine Learning]
12:25
Lecture 8.3 — Predicting the next character using HF [Neural Networks for Machine Learning]
09:38
Lecture 8.4 — Echo State Networks [Neural Networks for Machine Learning]
14:25
Lecture 8.1 — A brief overview of Hessian-free optimization [Neural Networks for Machine Learning]
11:45
Lecture 9.1 — Overview of ways to improve generalization [Neural Networks for Machine Learning]
06:23
Lecture 9.2 — Limiting the size of the weights [Neural Networks for Machine Learning]
07:32
Lecture 9.3 — Using noise as a regularizer [Neural Networks for Machine Learning]
10:50
Lecture 9.4 — Introduction to the full Bayesian approach [Neural Networks for Machine Learning]
03:32
Lecture 9.6 — MacKay's quick and dirty method [Neural Networks for Machine Learning]
10:53
Lecture 9.5 — The Bayesian interpretation of weight decay [Neural Networks for Machine Learning]
07:28
Lecture 10.3 — The idea of full Bayesian learning [Neural Networks for Machine Learning]
08:36
Lecture 10.5 — Dropout [Neural Networks for Machine Learning]
06:45
Lecture 10.4 — Making full Bayesian learning practical [Neural Networks for Machine Learning]
13:16
Lecture 10.2 — Mixtures of Experts [Neural Networks for Machine Learning]
13:02
Lecture 11.1 — Hopfield Nets [Neural Networks for Machine Learning]
09:40
Lecture 11.3 — Hopfield nets with hidden units [Neural Networks for Machine Learning]
11:03
Lecture 11.2 — Dealing with spurious minima [Neural Networks for Machine Learning]
13:11
Lecture 10.1 — Why it helps to combine models [Neural Networks for Machine Learning]
10:25
Lecture 11.4 — Using stochastic units to improve search [Neural Networks for Machine Learning]
11:45
Lecture 11.5 — How a Boltzmann machine models data [Neural Networks for Machine Learning]
10:55
Lecture 12.3 — Restricted Boltzmann Machines [Neural Networks for Machine Learning]
12:16
Lecture 12.1 — Boltzmann machine learning [Neural Networks for Machine Learning]
07:15
Lecture 12.4 — An example of RBM learning [Neural Networks for Machine Learning]
14:49
Lecture 12.2 — More efficient ways to get the statistics [Neural Networks for Machine Learning]
08:17
Lecture 12.5 — RBMs for collaborative filtering [Neural Networks for Machine Learning]
09:54
Lecture 13.1 — The ups and downs of backpropagation [Neural Networks for Machine Learning]
12:36
Lecture 13.2 — Belief Nets [Neural Networks for Machine Learning]
09:41
Lecture 14.2 — Discriminative learning for DBNs [Neural Networks for Machine Learning]
11:26
Lecture 13.3 — Learning sigmoid belief nets [Neural Networks for Machine Learning]
13:15
Lecture 13.4 — The wake sleep algorithm [Neural Networks for Machine Learning]