
Robert Miles AI Safety @UCLB7AzTwc6VFZrBsO2ucBMg@youtube.com

156K subscribers

Videos about Artificial Intelligence Safety Research, for everyone.

Videos:
AI Ruined My Year (45:59)
Why Does AI Lie, and What Can We Do About It? (09:24)
We Were Right! Real Inner Misalignment (11:47)
Intro to AI Safety, Remastered (18:05)
Deceptive Misaligned Mesa-Optimisers? It's More Likely Than You Think... (10:20)
The OTHER AI Alignment Problem: Mesa-Optimizers and Inner Alignment (23:24)
Quantilizers: AI That Doesn't Try Too Hard (09:54)
Sharing the Benefits of AI: The Windfall Clause (11:44)
10 Reasons to Ignore AI Safety (16:29)
9 Examples of Specification Gaming (09:40)
Training AI Without Writing A Reward Function, with Reward Modelling (17:52)
AI That Doesn't Try Too Hard - Maximizers and Satisficers (10:22)
Is AI Safety a Pascal's Mugging? (13:41)
A Response to Steven Pinker on AI (15:38)
How to Keep Improving When You're Better Than Any Teacher - Iterated Distillation and Amplification (11:32)
Why Not Just: Think of AGI Like a Corporation? (15:27)
Safe Exploration: Concrete Problems in AI Safety Part 6 (13:46)
Friend or Foe? AI Safety Gridworlds extra bit (03:47)
AI Safety Gridworlds (07:23)
Experts' Predictions about the Future of AI (06:47)
Why Would AI Want to do Bad Things? Instrumental Convergence (10:36)
Superintelligence Mod for Civilization V (01:04:40)
Intelligence and Stupidity: The Orthogonality Thesis (13:03)
Scalable Supervision: Concrete Problems in AI Safety Part 5 (05:03)
AI Safety at EAGlobal2017 Conference (05:30)
AI learns to Create ̵K̵Z̵F̵ ̵V̵i̵d̵e̵o̵s̵ Cat Pictures: Papers in Two Minutes #1 (05:20)
What can AGI do? I/O and Speed (10:41)
What Can We Do About Reward Hacking?: Concrete Problems in AI Safety Part 4 (09:38)
Reward Hacking Reloaded: Concrete Problems in AI Safety Part 3.5 (07:32)
The other "Killer Robot Arms Race" Elon Musk should worry about (05:51)
Reward Hacking: Concrete Problems in AI Safety Part 3 (06:56)
Why Not Just: Raise AI Like Kids? (05:51)
Empowerment: Concrete Problems in AI Safety part 2 (06:33)
Avoiding Positive Side Effects: Concrete Problems in AI Safety part 1.5 (03:23)
Avoiding Negative Side Effects: Concrete Problems in AI Safety part 1 (09:33)
Are AI Risks like Nuclear Risks? (10:13)
Respectability (05:04)
Predicting AI: RIP Prof. Hubert Dreyfus (08:17)
What's the Use of Utility Functions? (07:04)
Where do we go now? (07:45)
Status Report (01:26)
Channel Introduction (01:05)