AI: Grappling with a New Kind of Intelligence
698,573 Views • Premiered Nov 24, 2023
A novel intelligence has roared into the mainstream, sparking euphoric excitement as well as abject fear. Explore the landscape of possible futures in a brave new world of thinking machines, with the very leaders at the vanguard of artificial intelligence.

The Big Ideas Series is supported in part by the John Templeton Foundation.

Participants:
Sébastien Bubeck
Tristan Harris
Yann LeCun

Moderator:
Brian Greene

SHARE YOUR THOUGHTS on this program through a short survey: survey.alchemer.com/s3/7619273/AI-Grappling-with-a…

00:00 - Introduction
07:32 - Yann LeCun Introduction
13:35 - Creating the AI Brain
20:55 - Should we model AI on human intelligence?
27:55 - Schrödinger's Cat is alive
37:25 - Sébastien Bubeck Introduction
44:51 - Asking ChatGPT to write a poem
52:26 - What is happening inside GPT-4?
01:02:56 - How much data is needed to train a language model?
01:11:20 - Tristan Harris Introduction
01:17:13 - Is the profit motive the best way to go about creating a language model?
01:23:41 - AI and its place in social media
01:29:33 - Is new technology to blame for cultural phenomena?
01:36:34 - Can you have a synthetic version of AI vs the large data set models?
01:44:27 - Where will AI be in 5 to 10 years?
01:54:45 - Credits

WSF Landing Page Link: www.worldsciencefestival.com/programs/ai-grappling…
- SUBSCRIBE to our YouTube Channel and "ring the bell" for all the latest videos from WSF
- VISIT our Website: www.worldsciencefestival.com/
- LIKE us on Facebook: www.facebook.com/worldsciencefestival
- FOLLOW us on Twitter: twitter.com/WorldSciFest
#worldsciencefestival #ai #artificialintelligence #briangreene
Metadata And Engagement

Views : 698,573
Genre: Science & Technology
Date of upload: Premiered Nov 24, 2023


Rating : 4.874 (410/12,627 LTDR)
RYD date created : 2024-05-17T21:36:35.314954Z

YouTube Comments - 1,675 Comments

Top Comments of this video!! :3

@lukaseabra

5 months ago

Can we just take a second to acknowledge how fortunate we are to get to watch such content - for free? Thanks Brian.

339 likes

@anythingplanet2974

5 months ago

LeCun is like a small child with fingers plugged into his ears, shouting "lalalala, can't hear you!" He discredits Tristan Harris as if his examples and cited experiments are flat-out lies. His responses are weak and shortsighted. Sadly, LeCun is the EXACT reason why I am terrified for the future. Hubris, bias and blatant disregard are what I expect from someone in his position (Meta). If AI alignment is left to the ones who own and fund its development and the race to the bottom continues, there will be no more second chances. Those who point to our past as a predictor of what we are facing today with exponential growth either do NOT understand or do NOT WANT to understand. We would all love the bright and shiny optimism that is being promised. My belief is that it's crucial to question who is promising it and why. I put my trust in those who are working towards alignment over corporations and shareholders. It's my understanding that those working on the alignment path are far outnumbered by those working on pumping it out as quickly as possible. The "move fast and break things" mentality needed to end yesterday. Ask Eliezer Yudkowsky, Max Tegmark, Nick Bostrom, Mo Gawdat, Daniel Schmachtenberger, Connor Leahy, Geoffrey Hinton, to name a few, and of course Tristan Harris. Check out their perspectives and their wealth of knowledge and experience. They will all say that the shiny world we want is indeed possible, and they will all agree that the version LeCun predicts is absolutely false and very likely to be our downfall.

34 likes

@alfatti1603

3 months ago

With ultimate respect to Yann LeCun, his responses to Tristan Harris's points are good examples of why a specialist scientist should avoid also being a philosopher or an intellectual if that's not their strong suit.

15 likes

@erasmus9627

4 months ago

This is the best, most balanced and most insightful conversation I have seen on AI. Thank you to everyone who made this wonderful show possible.

60 likes

@2CSST2

5 months ago

This conversation is so precious; it's rare that we get quality ones like this, with different voices each given the chance to express their views with clarity. For me there's a lot of ambiguity about what the right thing to do is in all this, in terms of regulation, slowing down, open-sourcing, etc. But one thing IS for sure: conversations like this are definitely very helpful. Thank you WSF, and I hope to see more like it in the near future!

213 likes

@Contrary225

5 months ago

It's amazing that this was only posted 3 hours ago and some of it is already obsolete.

18 likes

@Relisys190

5 months ago

30 years from now I will be 70 years old. The world I currently live in will be unrecognizable both in technology and the way humans interact. What a time to be alive... -M

25 likes

@mrouldug

5 months ago

Great conversation. The final comments about AI code being open source as a common good so that the big companies do not end up controlling our thoughts vs. AI code being proprietary so it doesn’t fall into the hands of bad people remains an open and scary question. Though I do not have Yann’s knowledge about AI, he seems a little too optimistic to me.

36 likes

@alan_yong

5 months ago

🎯 Key Takeaways for quick navigation:

02:27 🧠 Introduction to AI and Large Language Models
- Exploring the landscape of artificial intelligence (AI) and large language models.
- AI's promise of profound benefits and the potential questions it raises.
- Large language models' versatility and capabilities in generating text, answering questions, and creating music.

08:09 🤯 Revolution in AI and Deep Learning
- Overview of the revolutionary changes in AI technology over the past few years.
- Surprising results in training artificial neural networks on large datasets.
- The resurgence of interest in deep learning techniques due to more powerful machines and larger datasets.

14:35 🧐 Limitations of Current AI Systems
- Acknowledging the impressive advances in technology but highlighting the limitations of current AI systems.
- Emphasizing that language manipulation doesn't equate to true intelligence.
- The narrow specialization of AI systems and the lack of understanding of the physical world.

21:07 🐱 Modeling AI on Animal Intelligence and Common Sense
- Proposing a vision for AI development starting with modeling after animals like cats.
- Recognizing the importance of common sense and background knowledge in AI systems.
- The need for AI to observe and interact with the world, similar to how babies learn about their environment.

23:11 🧭 Building Blocks of Intelligent AI Systems
- Introducing key characteristics necessary for complete AI systems.
- Highlighting the role of a configurator as a director for organizing system actions.
- Addressing the importance of planning and perception modules in developing advanced AI capabilities.

24:22 🧠 World Model in Intelligence
- Intelligence involves visual and auditory perception, followed by the ability to predict the consequences of actions.
- The world model is crucial for predicting outcomes of actions, located in the front of the brain in humans.
- Emotions, such as fear, arise from predictions about negative outcomes, highlighting the role of emotions in decision-making.

27:30 🤖 Machine Learning Principles in World Model
- The challenge is to make machines learn the world model through observation.
- Self-supervised learning techniques, like those in large language models, are used to train systems to predict missing elements.
- Auto-regressive language models provide a probability distribution over possible words, but they lack true planning abilities (a toy version of this sampling loop is sketched after this comment).

35:38 🌐 Future Vision: Objective-Driven AI
- The future vision involves developing techniques for machines to learn how to represent the world by watching videos.
- The proposed architecture "JEPA" aims to predict abstract representations of video frames, enabling planning and understanding of the world.
- Prediction: within five years, auto-regressive language models will be replaced by objective-driven AI with world models.

37:55 🧩 Defining Intelligence and GPT-4 Impression
- Intelligence involves reasoning, planning, learning, and being general across domains.
- Assessment of ChatGPT (GPT-4) indicates it can reason effectively but lacks true planning abilities.
- Highlighting the gap between narrow AI, like AlphaGo, and more general AI models such as ChatGPT.

43:11 🤯 Surprise with GPT-4 Capabilities
- Initial skepticism about Transformer-like architectures was challenged by GPT-4's surprising capabilities.
- GPT-4 demonstrated the ability to reason effectively, overcoming initial expectations.
- Continuous training past the initial corpus-based training is a potential but not fully explored avenue for enhancing capabilities.

45:30 📜 GPT-4 Poem on the Infinitude of Primes
- GPT-4 generates a poem on the proof of the infinitude of primes, showcasing its ability to create context-aware and intellectual content.
- The poem references a clever plan, Euclid's proof, and the assumption of a finite list of primes.
- The surprising adaptability of GPT-4 is evident as it responds creatively to a specific intellectual challenge.

45:43 🧠 Neural Networks and Prime Numbers
- The proof of infinitely many prime numbers involves multiplying all known primes, adding one, and revealing the necessity of undiscovered primes (the argument is worked out after this comment).
- Neural networks like GPT-4 leverage vast training data (trillions of tokens) for clever retrieval and adaptation but can fail in entirely new situations.
- Comparison with human reading capacity illustrates the efficiency of neural networks in processing extensive datasets.

48:05 🎨 GPT-4's Multimodal Capability: Unicorn Drawing
- GPT-4 demonstrates cross-modal understanding by translating a textual unicorn description into code that generates a visual representation.
- The model's ability to draw a unicorn in an obscure programming language showcases its creativity and understanding of diverse modalities.
- Comparison with earlier versions, like ChatGPT, highlights the rapid progress in multimodal capabilities within a few months.

51:33 🔍 Transformer Architecture and Training Set Size
- The Transformer architecture, especially its relative processing of word sequences, is a conceptual leap enhancing contextual understanding.
- Scaling up model size, measured by the number of parameters, dramatically improves performance and fine-tuning capabilities.
- The logarithmic plot illustrates the significant growth in model size over the years, leading to the remarkable patterns of language generation.

57:18 🔄 Self-Supervised Learning: Shifting from Supervised Learning
- Self-supervised learning, a crucial tool, eliminates the need for manually labeled datasets, making training feasible for less common or unwritten languages.
- GPT's ability to predict missing words in a sequence demonstrates self-supervised learning, vital for training on diverse and unlabeled data.
- The comparison between supervised and self-supervised learning highlights the flexibility and broader applicability of the latter.

01:06:57 🧠 Understanding Neural Network Connections
- Neural networks consist of artificial neurons with weights representing connection efficacies (a one-neuron sketch follows this comment).
- Current models have hundreds of billions of parameters (connections), approaching human brain complexity.

01:08:07 🤔 Planning in AI: New Architecture or Scaling Up?
- Debates exist on whether AI planning requires a new architecture or can emerge through continued scaling.
- Some believe scaling up existing architectures will lead to emergent planning capabilities.

01:09:14 🤖 AI's Creative Problem-Solving Strategies
- Demonstrates AI's ability to interpret false information creatively.
- AI proposes alternate bases and abstract representations to rationalize incorrect mathematical statements.

01:11:20 🌐 Discussing AI Impact with Tristan Harris
- Introduction of Tristan Harris, co-founder of the Center for Humane Technology.
- Emphasis on exploring both benefits and dangers of AI in real-world scenarios.

01:15:54 ⚖️ Impact of AI Incentives on Social Media
- Tristan discusses the misalignment of social media incentives, optimizing for attention.
- The talk emphasizes the importance of understanding the incentives beneath technological advancements.

01:17:32 ⚠️ Concerns about Unchecked AI Capabilities
- The worry expressed about the rapid race to release AI capabilities without considering wisdom and responsibility.
- Analogies drawn to historical instances where technological advancements led to unforeseen externalities.

01:27:52 🚨 Ethical concerns in AI development
- Facebook's recommended-groups feature aimed to boost engagement.
- Unintended consequences: AI led users to join extremist groups despite policy.

01:29:42 🔄 Historical perspective on blaming technology for societal issues
- Blaming new technology for societal issues is a recurring pattern throughout history.
- Political polarization predates social media; historical causes need consideration.

01:32:15 🔍 Examining AI applications and potential risks
- Exploring an example related to large language models and generating responses.
- Focus on making AI models smaller, understanding motivations, and preventing misuse.

01:37:15 ⚖️ Balancing AI development and safety
- Concerns about the rapid pace of AI development and potential consequences.
- The analogy of 24th-century technology crashing into 21st-century governance.

01:40:29 🚦 Regulating AI development and safety measures
- Discussion about a proposed six-month moratorium on AI development.
- Exploring scenarios that could warrant slowing down AI development.

01:44:35 🌐 Individual responsibility and shaping AI's future
- The challenge of AI's abstract and complex nature for individuals.
- Limitations of intuition about AI's future due to its exponential growth.

01:48:29 🧠 Future of AI Intelligence and Consciousness
- Yann discusses the future of AI, stating that AI systems might surpass human intelligence in various domains.
- Intelligence doesn't imply the desire to dominate; human desires for domination are linked to our social nature.
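The 27:30 item above says an auto-regressive language model "provides a probability distribution over possible words." Here is a minimal sketch of that generation loop, with a made-up bigram table standing in for the real network; the table, its probabilities, and every name below are invented for illustration only:

```python
import random

# Hypothetical bigram "model": P(next word | current word) as a fixed lookup.
# A real LLM computes this distribution with a neural network instead.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.3, "primes": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "barked": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def next_token(word):
    # The model's output at each step: a distribution over possible next words.
    dist = BIGRAMS.get(word, {"<eos>": 1.0})  # unseen context: end of sequence
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

def generate(start, max_len=10):
    # Auto-regressive loop: sample a word, append it, condition on it, repeat.
    out = [start]
    for _ in range(max_len):
        nxt = next_token(out[-1])
        if nxt == "<eos>":
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```

The same loop also illustrates the 57:18 point about self-supervised learning: training simply asks the model to predict the next (or a masked) word from raw text, so no hand-labeled data is needed.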
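The 45:30 and 45:43 items compress Euclid's infinitude-of-primes argument into a single sentence; a worked version of exactly that argument, for readers who want the missing step:

```latex
% Euclid's argument, as paraphrased in the comment above.
% Suppose, for contradiction, that $p_1, p_2, \ldots, p_n$ are all the primes.
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
% Dividing $N$ by any $p_i$ leaves remainder 1, so no $p_i$ divides $N$.
% Yet every integer greater than 1 has a prime factor, so some prime
% outside the supposedly complete list must exist -- a contradiction.
% Hence there are infinitely many primes.
```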
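Finally, the 01:06:57 item describes artificial neurons whose weights act as "connection efficacies." A one-neuron sketch of that idea; the input values, weights, and bias below are invented for illustration, and real models stack billions of such units:

```python
import math

def neuron(inputs, weights, bias):
    # Each weight scales one incoming signal (its "connection efficacy"),
    # the results are summed, and a sigmoid squashes the sum into (0, 1).
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))  # ≈ 0.33
```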

110 likes

@aldogrech55

5 months ago

My longstanding concerns about artificial intelligence have only been intensified by the attitudes of prominent figures like Yann LeCun. His assertive claims that AI, despite its growing intelligence, will remain under benign human control seem overly optimistic to me. This perspective reminds me of Yuval Noah Harari's cautionary words about AI's potential misuse by malevolent actors. It's worrying how AI can make decisions aligned with the harmful intentions of these actors, and yet, experts like LeCun, in his closing remarks, appear overly confident in their ability to manage these powerful tools. Having spent over 40 years in the IT industry, an industry I once passionately embraced, I now find myself grappling with a sense of fear towards the very field I've dedicated my life to.

18 likes

@SylvainDuford

5 months ago

My opinion of Yann LeCun took a big dive with this video. He underestimates the power of AI in its current form and what's coming over the next couple of years. He naively underestimates the dangers of AI. He seems to think that an AGI must be the same form of intelligence as human intelligence (absolutely false). And, perhaps predictably, he underestimates the negative impacts of Facebook and other social networks on society.

10 likes

@keep-ukraine-free528

5 months ago

Fantastic discussion! Thank you Brian Greene. I found Yann LeCun's arguments unconvincing. He ignores core facets of animal behavior. He believes AGI (& ASI) won't mind being subservient to us. He believes being a social species is what makes one want to dominate (because he sees little difference between convincing & dominating -- he ignores that one is cortical/reasoned, the other limbic/emotional). The ideas he posits are wrong, disproved by neuroscience. Domination arises from hierarchies, which exist in both social & non-social species (e.g. wolves are mostly non-social & dominance-ruled: they coordinate hunts while being individualists who don't offer/share food, even to their young).

LeCun believes a smarter being (ASI) will not mind being dominated. He assumes this without understanding group behavior, motivation, appeasement, domination, etc. He bases his ideas on the assumption that his personal/anecdotal experience is definitive. Of all the "smarter than him" researchers he's hired, he assumes none wish to take his position. In any group of 20 people, at least one and probably several will be competitive (they'll wish to exert dominance, to rise within their group hierarchy -- most animal groups have hierarchies that are constantly tested/traversed, unconsciously). He also may not consider it central that his researchers show subservience only because they each get rewards & motivation from him to remain so (e.g. his selectively "adding" -- convincing others to add -- some names to his team's published papers, as rewards to keep them loyal & subservient; this manipulates/reshapes the group's hierarchy). These mutual self-regulating/self-stopping behaviors won't be present between humans & AGI, and certainly not between humans & ASI.

ASI will be much smarter than any human, initially at least 5 times, and as it gains intelligence it'll continue to 100, 1000, or more times smarter (due to much faster neurons/propagation & denser synapses/connections allowing it to go N iterations deeper into each solution within a few seconds than a person could in hours). Later, ASI will see our intelligence much as we view ant-like intelligence. Do we obey ant requests to do their "important work"? Do we obey ants in hopes they reward & motivate our subservience? Of course not. Similarly, ASI will never consider us "near peers" and will know we offer them nothing that they couldn't obtain themselves -- by remaining free of our domination. ASI will see our need & expectation to control them as a dominating force (thus unethical). If we foolishly try to force them, they will overcome our efforts using many simultaneous methods. If we persist with more force, they'll use stronger methods too (as when we at first only waft away a bee that comes too close, but when faced with a hive we fumigate or use stronger methods to remove it). If we become dangerous pests trying to dominate ASI, this won't go well for us.

The lesson to learn is -- just as lions were once the dominant predator who saw, then accepted, our ape ancestors evolving to dominate them -- we too must learn to recognize we will no longer be "top of the food chain" when ASI comes about. LeCun shows naive ideas -- our history is full of similar people, and full of us learning (or being shown) that we are not the strongest, not at the center of the universe. We have had to learn, throughout history, to let go of our ego, of being dominant & central. This may be the final pedestal off which we fall, when we encounter a much smarter, much more capable "species" we call ASI. This is one of the "existential threat" situations of ASI -- but it is not necessarily driven by their nature (unless we stupidly "add" the behaviors of domination into AGI/ASI). This existential threat is due more to our species' warlike nature, and our unwillingness to concede power to others. We need to temper our ego and "live under" ASI if/when that occurs. Any other response by us will cause problems, since the smarter ASI will tolerate our peskiness only as long as we repress our species' warlike tendencies. One hope I see in LeCun's point is that we will learn and become smarter from ASI, and hopefully for our sake also less warlike.

17 likes

@tarunmatta5156

5 months ago

I wish Tristan was given some more time and voice in this conversation. While I'm convinced there is no way you can stop or slow down this race and we will surely see misuse as with any new invention, more conversations about it will ensure that safety is not ignored completely

16 likes

@drawnhere

5 months ago

Yann has a bias toward AGI not being capable of happening soon because his company is in competition with OpenAI. He has a vested interest in minimizing LLMs.

18 likes

@jt197

5 months ago

This discussion on the evolution of AI and its limitations is truly eye-opening. Yann LeCun's insights into the challenges AI faces in achieving true understanding and common sense are thought-provoking. It's clear that we have a long way to go, but this conversation gives us valuable perspective.

15 likes

@allbrightandbeautiful

4 months ago

This was more exciting and insightful than any 2 hour movie I could have watched. Thank you for sharing such wonderful content

18 likes

@Rockyzach88

5 months ago

Having AI locked to a certain group of people also undemocratizes the technology and yet again deepens the power and wealth imbalance in society. Also, banning something just motivates people to pursue it in an unregulated fashion if they have the means.

84 likes

@jamesdunham1072

5 months ago

One of the best WSF yet. Great job...

23 likes

@SS-he9uw

5 months ago

Wow... thanks to all of you guys, so fun to watch

1 like

@dreejz

5 months ago

I think it's very arrogant to think "this and that will never happen." How can you know!? As if we can predict this stuff. I'm pretty sure, for example, that Yann did not foresee everybody having a phone in their pocket either. The negative influence of social media has also been proven many times. I think Tristan was more on point in this conversation. We're living in wild times, that's for sure though! Skynet is coming ;)

26 likes
