Views : 3,384,927
Genre: Nonprofits & Activism
Date of upload: Apr 5, 2023
Rating : 4.816 (2,999/62,273 LTDR)
RYD date created : 2024-04-20T03:43:54.582356Z
Top Comments of this video!! :3
What scares me the most is that a lot of people won't watch videos like these simply because of the length. I have tried to show it to a lot of people, but they don't think they have the time to watch one-hour educational videos on YouTube, even though they do it every day on Netflix. How on earth are you supposed to compete with short, dopamine-seeking content?
1.9K
Considering the gravity of this topic, I really appreciate the calm and respectful nature of this presentation. No overt fear-mongering (although the material speaks for itself), just trying to bring this to people's attention and help us process it. They even admit that it will be hard to process and prepare us for that. And as a side note, you don't often see a presentation with two speakers, but it worked really well. They really complemented each other and made it more engaging with the back-and-forth riffing on shared experiences.
449
The rubber band was really intense when I first started exploring this stuff. Almost to the point that when I'd get out of the AI-world-headspace I was pleasantly surprised to see grass and trees and my house and my family and the normal world. People have said "What a time to be alive" ironically a zillion times, but hooooooly frak. "The Future" always seemed vaguely benign and ever distant, and now it is here and I still don't know how I feel about it.
96
No piece of media, story, news item, or any other type of information has ever scared me as much as this video has. This is not an indictment of the video or its authors; I am really grateful to them for showing the depths that this is already plumbing. I can only echo the sentiment shared so many times in these comments: get this info as far and wide as possible, as fast as possible.
98
I haven't finished the video, but this hits close to home for me. In 2018, in one of our graduate course discussions, we were concerned about the speed of AI development: if we let it continue without proper guardrails from the get-go, we will be stuck being reactive rather than proactive in creating laws and measures. Seeing how fast things are moving, I think we are past that point and will always be reactive. I'm not scared of the technology, but of the speed it's moving at.
14
@TheLionrazor
1 year ago (edited)
Hey all, I manually went through the whole vid to summarize good-quality chapter headings to click on. This info is too important.
If anyone wants to condense further from here, you're welcome to!
Introduction and Talk start
0:49 Introduction: Steve Wozniak Introduces Tristan Harris and Aza Raskin
1:30 Talk begins: The Rubber band effect
3:16 Preface: What does responsible rollout look like?
4:03 Oppenheimer Manhattan project analogy
4:49 Survey results on the probability of human extinction
3 Rules of Technology
5:36 1. New tech, A New Class of Responsibilities
6:42 2. If a tech confers power, it starts a race
6:47 3. If you don't coordinate, the race ends in tragedy
First contact with AI: 'Curation AI' and the Engagement Monster
7:02 First contact moment with curation AI: Unintended consequences
8:22 Second contact with creation AI
8:50 The Engagement Monster: Social media and the race to the bottom
Second contact with AI: 'Creation AI'
11:23 Entanglement of AI with society
12:48 Not here to talk about the AGI apocalypse
14:13 Understanding the exponential improvement of AI and Machine Learning
15:13 Impact of Language models on AI
Gollem-class AIs
17:09 GLLMM: Generative Large Language Multi-Modal Model (Gollem AIs)
18:12 Multiple Examples: Models demonstrating complex understanding of the world
22:54 Security vulnerability exploits using current AI models, and identity verification concerns
27:34 Total decoding and synthesizing of reality: 2024 will be the last human election
Emergent Capabilities of GLLMMs
29:55 Sudden breakthroughs in multiple fields and theory of mind
33:03 Potential shortcoming of current alignment methods against a sufficiently advanced AI
34:50 Gollem-class AIs can make themselves stronger: AI can feed itself
37:53 Nukes don't make stronger nukes: AI makes stronger AI
38:40 Exponentials are difficult to understand
39:58 AI is beating tests as fast as they are made
Race to deploy AI
42:01 Potential harms of 2nd contact AI
43:50 AlphaPersuade
44:51 Race to intimacy
46:03 At least we're slowly deploying Gollems to the public to test them safely?
47:07 But we would never actively put this in front of our children?
49:30 But at least there are lots of safety researchers?
50:23 At least the smartest AI safety people think there's a way to do it safely?
51:21 Pause, take a breath
How do we choose the future we want?
51:43 Challenge of talking about AI
52:45 We can still choose the future we want
53:51 Success moments against existential challenges
56:18 Don't onboard humanity onto the plane without democratic dialogue
58:40 We can selectively slow down the public deployment of GLLMM AIs
59:10 Presume public deployments are unsafe
59:48 But won't we just lose to China?
How do we close the gap?
1:02:28 What else can we do to close the gap between what is happening and what needs to happen?
1:03:30 Even bigger AI developments are coming. And faster.
1:03:54 Let's not make the same mistake we made with social media
1:03:54 Recap and Call to action
2.6K