
Alice Gao (@UC_WNnOkr4IW_zSUWIeOeWbA@youtube.com)

1.4K subscribers

- Assistant Professor, Teaching Stream at the University of


- L23: Mixed Strategy Nash Equilibrium Question Slides 19-20 (07:14)
- L23: Mixed Strategy Nash Equilibrium Part 2 (05:46)
- L23: Mixed Strategy Nash Equilibrium Part 1 (11:59)
- L23: Pareto Optimal Outcomes of the Prisoner's Dilemma Game Question Slide 11 (06:31)
- L23: Nash Equilibrium of the Prisoner's Dilemma Game Question Slide 11 (05:48)
- L23: Dominant Strategy Equilibrium of the Prisoner's Dilemma Game Question Slide 9 (03:17)
- L23: The Prisoner's Dilemma Game (10:04)
- L23: Pareto Optimality (09:36)
- L22: The Nash Equilibrium Question Slide 30 (07:01)
- L22: The Nash Equilibrium (11:15)
- L22: Dominant Strategy Equilibrium Question Slide 26 (03:49)
- L22: Dominant Strategy Equilibrium Part 3 (06:25)
- L22: Dominant Strategy Equilibrium Part 1 (09:12)
- L22: Dominant Strategy Equilibrium Question Slide 22 (04:02)
- L22: Dominant Strategy Equilibrium Part 2 (08:27)
- L22: Applications of Game Theory (10:54)
- L22: Introducing Game Theory (05:39)
- L21: The Q-Learning Algorithm (07:31)
- L21: Temporal Difference Learning (06:08)
- L21: Properties of the Q-Learning Algorithm (03:38)
- L20: Introducing Reinforcement Learning and the Passive ADP Algorithm (08:32)
- L20: Passive ADP Example (04:58)
- L20: Active ADP Example (12:14)
- L19: Value Iteration Examples and Observations (17:45)
- L19: Policy Iteration Example (14:30)
- L19: The Policy Iteration Algorithm (10:53)
- L19: Introducing Policy Iteration (08:14)
- L19: The Value Iteration Algorithm (08:55)
- L19: Introducing the Bellman Equations (12:17)
- L18: Solving for the Optimal Policy Question Slide 8 (04:05)
- L18: Solving for the Optimal Policy (12:59)
- L18: Optimal Policies of the Grid World Part 1 (12:03)
- L18: Optimal Policies of the Grid World Part 2 (12:40)
- L18: Introducing the Grid World (10:43)
- L18: Policy of a Markov Decision Process (10:02)
- L18: Discounted Reward (08:55)
- L18: MDP versus POMDP (02:11)
- L18: Introducing Markov Decision Process (06:46)
- L16: Calculate the Expected Utilities (19:00)
- L16: Making a Decision Using Variable Elimination Algorithm (10:33)
- L16: Constructing the Utility Function (14:45)
- L16: The Probability of an Accident (05:55)
- L16: Defining the Nodes (07:22)
- L16: Introduction to Decision Theory (07:52)
- L6 ML: Intro to Machine Learning (18:42)
- L6 ML: Intro to Supervised Learning Part 1 (14:35)
- L7 DT: Examples of Decision Trees (04:52)
- L6 ML: Intro to Supervised Learning Part 2 (18:13)
- L7 DT: Components of a DT and Classifying an Example (07:39)
- L7 DT: Grow a Full Tree (15:23)
- L7 DT: No Features Left (07:30)
- L7 DT: No Examples Left (05:49)
- L7 DT: Intro to Entropy (16:56)
- L7 DT: Calculate the Expected Information Gain (08:24)
- L7 DT: A Full Example (14:12)
- L7 DT: Handling Real-Valued Features Part 1 (11:53)
- L7 DT: Handling Real-Valued Features Part 2 (12:41)
- L7 DT: Dealing with Overfitting (11:52)
- L13 VEA: The Restrict Operation (03:31)
- L15 HMM: Smoothing Calculations (09:05)